[Yahoo-eng-team] [Bug 1324036] Re: Can't add authenticated iscsi volume to a vmware instance

2014-06-05 Thread Vipin Balachandran
This is unrelated to the vmdk driver.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324036

Title:
  Can't add authenticated iscsi volume to a vmware instance

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The VMware driver doesn't pass volume authentication information to
  the hba when attaching an iscsi volume. Consequently, adding an iscsi
  volume which requires authentication will always fail.
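
  For context, a rough sketch of what "volume authentication information"
  means here: Cinder's iSCSI drivers conventionally return CHAP
  credentials in connection_info['data']. The field names below follow
  that common convention and are illustrative, not the VMware driver's
  actual code:

      # Illustrative shape of connection_info for an authenticated
      # iSCSI volume; exact contents vary by Cinder driver.
      connection_info = {
          'driver_volume_type': 'iscsi',
          'data': {
              'target_iqn': 'iqn.2010-10.org.openstack:volume-...',
              'target_portal': '10.0.0.5:3260',
              'target_lun': 1,
              'auth_method': 'CHAP',
              'auth_username': 'chap-user',
              'auth_password': 'chap-secret',
          },
      }
      # The bug: the driver configures the HBA with the target but never
      # passes auth_method/auth_username/auth_password along, so targets
      # that require CHAP always fail to attach.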

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1324036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275773] Re: VMware session not logged out on VMwareAPISession garbage collection

2014-06-05 Thread Gary Kotton
The issue was addressed by removing __del__ from the class. This was
done in
https://github.com/openstack/nova/commit/b28530ce83302dacae90c385c5431fb1a758ef0a

** Changed in: nova
   Status: Triaged => Fix Committed

** Changed in: nova
   Status: Fix Committed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275773

Title:
  VMware session not logged out on VMwareAPISession garbage collection

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  A bug in VMwareAPISession.__del__() prevents the session being logged
  out when the session object is garbage collected.
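
  A minimal, self-contained sketch (not nova's actual code) of why
  cleanup in __del__ is fragile, and why removing it in favor of an
  explicit logout is the safer pattern:

      import gc

      class Session(object):
          def logout(self):
              print('logged out')

          def __del__(self):
              # If this raises, or the object is never collected, the
              # logout silently never happens.
              self.logout()

      # On Python 2 (which nova ran on here), objects in a reference
      # cycle that define __del__ are never collected at all; they are
      # parked in gc.garbage instead, so logout() never runs.
      a, b = Session(), Session()
      a.peer, b.peer = b, a
      del a, b
      gc.collect()
      print(gc.garbage)  # both sessions, still logged in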

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326666] [NEW] dead code in ovs/ofagent agents

2014-06-05 Thread YAMAMOTO Takashi
Public bug reported:

class Port seems like a leftover from days when agents had direct
database accesses.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326666

Title:
  dead code in ovs/ofagent agents

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  class Port seems like a leftover from days when agents had direct
  database accesses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1326666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326668] [NEW] Error: Unauthorized: Failed to modify 1 project members and update project quotas.

2014-06-05 Thread Hong-Guang
Public bug reported:

Test steps:
1. Log in as admin.
2. Create a new project named p_admin_1.
3. Give the admin user the Member and Admin roles on this new project.
4. Remove the Admin role for this project.


Test result:

Error: Unauthorized: Failed to modify 1 project members and update project quotas.
Error: Unauthorized: Unable to retrieve project list.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1326668

Title:
  Error: Unauthorized: Failed to modify 1 project members and update
  project quotas.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Test steps:
  1. Log in as admin.
  2. Create a new project named p_admin_1.
  3. Give the admin user the Member and Admin roles on this new project.
  4. Remove the Admin role for this project.

  
  Test result:

  Error: Unauthorized: Failed to modify 1 project members and update project quotas.
  Error: Unauthorized: Unable to retrieve project list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1326668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326676] [NEW] notification for nova unavailable

2014-06-05 Thread fullname
Public bug reported:

I am getting all the pollster meters but not the notification meters.

below are the contents of my nova.conf

instance_usage_audit_period = hour
instance_usage_audit = True
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier
notification_topics=notifications
notify_on_state_change=vm_and_task_state
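
As an aside, the duplicated notification_driver key above is legitimate:
it is a multi-valued option, so repeated keys accumulate rather than
overwrite. A small sketch of that behavior (assuming the oslo.config of
that era; the option definition here is illustrative, not nova's):

    from oslo.config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.MultiStrOpt('notification_driver', default=[])])
    conf(args=['--config-file', 'nova.conf'])
    # With the nova.conf above, both drivers are loaded:
    # ['nova.openstack.common.notifier.rpc_notifier',
    #  'ceilometer.compute.nova_notifier']
    print(conf.notification_driver)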



Thanks,
blackcat

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326676

Title:
  notification for nova unavailable

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am getting all the pollster meters but not the notification meters.

  below are the contents of my nova.conf

  instance_usage_audit_period = hour
  instance_usage_audit = True
  notification_driver = nova.openstack.common.notifier.rpc_notifier
  notification_driver = ceilometer.compute.nova_notifier
  notification_topics=notifications
  notify_on_state_change=vm_and_task_state



  
  Thanks,
  blackcat

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250029] Re: Default port for The MS SQL Security Group is 1433 instead of 1443

2014-06-05 Thread Zhenguo Niu
** Also affects: puppet-horizon
   Importance: Undecided
   Status: New

** Changed in: puppet-horizon
 Assignee: (unassigned) => Zhenguo Niu (niu-zglinux)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1250029

Title:
  Default port for The MS SQL Security Group is 1433 instead of 1443

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in Puppet module for Horizon:
  In Progress

Bug description:
  The default port for the MS SQL Security group is 1433 instead of
  1443.
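
  For reference, this default lives in Horizon's SECURITY_GROUP_RULES
  setting (local_settings.py); a corrected entry would look roughly like
  the sketch below (illustrative, not the exact shipped snippet):

      SECURITY_GROUP_RULES = {
          # ...
          'ms_sql': {
              'name': 'MS SQL',
              'ip_protocol': 'tcp',
              'from_port': '1433',  # was mistakenly 1443
              'to_port': '1433',
          },
      }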

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1250029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311804] Re: ip netns list starts without root_helper

2014-06-05 Thread Vladimir Kuklin
** Changed in: neutron
   Status: Confirmed => Invalid

** Also affects: fuel
   Importance: Undecided
   Status: New

** Also affects: fuel/4.1.x
   Importance: Undecided
   Status: New

** Also affects: fuel/5.0.x
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311804

Title:
  ip netns list starts without root_helper

Status in Fuel: OpenStack installer that works:
  New
Status in Fuel for OpenStack 4.1.x series:
  New
Status in Fuel for OpenStack 5.0.x series:
  New
Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  CentOS 6.5
  Reproduced on a typical OpenStack installation, with any segmentation
  type and one L3 agent.

  In ip_lib, IpNetnsCommand is missing the root_helper.
  Without it the L3 agent can't create interfaces inside a network
  namespace, because on CentOS 'ip netns list' returns an empty list
  when run without root privileges.

  [root@node-2 ~]# uname -a
  Linux node-2.domain.tld 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25 19:59:55 
UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
  [root@node-2 ~]# ip netns list
  haproxy
  qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9
  [root@node-2 ~]# su -m neutron -c 'ip netns list'
  [root@node-2 ~]#

  In the log below, the exception happens because the namespace already
  exists (see the full log in the attachment), but this can't be detected
  by 'ip netns list' without the root wrapper.

  2014-04-23 16:15:44.760 28240 DEBUG neutron.agent.linux.utils 
[req-d0f812f6-d987-45f5-9cff-11f1fa52fed6 None] Running command: ['ip', '-o', 
'netns', 'list'] create_process /
  usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:48
  2014-04-23 16:15:44.781 28240 DEBUG neutron.agent.linux.utils 
[req-d0f812f6-d987-45f5-9cff-11f1fa52fed6 None]
  Command: ['ip', '-o', 'netns', 'list']
  Exit code: 0
  Stdout: ''
  Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:74
  2014-04-23 16:15:44.782 28240 DEBUG neutron.agent.linux.utils 
[req-d0f812f6-d987-45f5-9cff-11f1fa52fed6 None] Running command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/roo
  twrap.conf', 'ip', 'netns', 'add', 
'qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9'] create_process 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:48
  2014-04-23 16:15:44.864 28240 DEBUG neutron.agent.linux.utils 
[req-d0f812f6-d987-45f5-9cff-11f1fa52fed6 None]
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'add', 'qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9']
  Exit code: 255
  Stdout: ''
  Stderr: 'Could not create 
/var/run/netns/qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9: File exists\n' 
execute /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:7
  4
  Traceback (most recent call last):
    File /usr/lib/python2.6/site-packages/eventlet/greenpool.py, line 80, in 
_spawn_n_impl
  func(*args, **kwargs)
    File /usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py, line 
438, in process_router
  p['ip_cidr'], p['mac_address'])
    File /usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py, line 
707, in internal_network_added
  prefix=INTERNAL_DEV_PREFIX)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/interface.py, 
line 195, in plug
  namespace_obj = ip.ensure_namespace(namespace)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
136, in ensure_namespace
  ip = self.netns.add(name)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
446, in add
  self._as_root('add', name, use_root_namespace=True)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
217, in _as_root
  kwargs.get('use_root_namespace', False))
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
70, in _as_root
  namespace)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
81, in _execute
  root_helper=root_helper)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py, line 
76, in execute
  raise RuntimeError(m)
  RuntimeError:
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'add', 'qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9']
  Exit code: 255
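
  A minimal sketch of the failure mode described above (the wrapper
  command is taken from the log; the helper functions are illustrative,
  not neutron's actual ip_lib code):

      import subprocess

      ROOT_HELPER = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf']

      def netns_list(as_root=False):
          cmd = ['ip', '-o', 'netns', 'list']
          if as_root:
              cmd = ROOT_HELPER + cmd
          out = subprocess.check_output(cmd).decode()
          return [line.split()[0] for line in out.splitlines() if line.strip()]

      def ensure_namespace(name):
          # Buggy: the existence check drops root, so on CentOS it sees
          # an empty list and always tries to add the namespace...
          if name not in netns_list(as_root=False):
              # ...and the add, which *does* run as root, then fails
              # with "File exists" when the namespace is already there.
              subprocess.check_call(ROOT_HELPER + ['ip', 'netns', 'add', name])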

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1311804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311804] Re: ip netns list starts without root_helper

2014-06-05 Thread Vladimir Kuklin
** Changed in: fuel/5.0.x
   Status: New => Fix Committed

** No longer affects: fuel/5.0.x

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311804

Title:
  ip netns list starts without root_helper

Status in Fuel: OpenStack installer that works:
  New
Status in Fuel for OpenStack 4.1.x series:
  New
Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  CentOS 6.5
  Reproduced on a typical OpenStack installation, with any segmentation
  type and one L3 agent.

  In ip_lib, IpNetnsCommand is missing the root_helper.
  Without it the L3 agent can't create interfaces inside a network
  namespace, because on CentOS 'ip netns list' returns an empty list
  when run without root privileges.

  [root@node-2 ~]# uname -a
  Linux node-2.domain.tld 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25 19:59:55 
UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
  [root@node-2 ~]# ip netns list
  haproxy
  qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9
  [root@node-2 ~]# su -m neutron -c 'ip netns list'
  [root@node-2 ~]#

  In the log below, the exception happens because the namespace already
  exists (see the full log in the attachment), but this can't be detected
  by 'ip netns list' without the root wrapper.

  2014-04-23 16:15:44.760 28240 DEBUG neutron.agent.linux.utils 
[req-d0f812f6-d987-45f5-9cff-11f1fa52fed6 None] Running command: ['ip', '-o', 
'netns', 'list'] create_process /
  usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:48
  2014-04-23 16:15:44.781 28240 DEBUG neutron.agent.linux.utils 
[req-d0f812f6-d987-45f5-9cff-11f1fa52fed6 None]
  Command: ['ip', '-o', 'netns', 'list']
  Exit code: 0
  Stdout: ''
  Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:74
  2014-04-23 16:15:44.782 28240 DEBUG neutron.agent.linux.utils 
[req-d0f812f6-d987-45f5-9cff-11f1fa52fed6 None] Running command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/roo
  twrap.conf', 'ip', 'netns', 'add', 
'qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9'] create_process 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:48
  2014-04-23 16:15:44.864 28240 DEBUG neutron.agent.linux.utils 
[req-d0f812f6-d987-45f5-9cff-11f1fa52fed6 None]
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'add', 'qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9']
  Exit code: 255
  Stdout: ''
  Stderr: 'Could not create 
/var/run/netns/qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9: File exists\n' 
execute /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:7
  4
  Traceback (most recent call last):
    File /usr/lib/python2.6/site-packages/eventlet/greenpool.py, line 80, in 
_spawn_n_impl
  func(*args, **kwargs)
    File /usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py, line 
438, in process_router
  p['ip_cidr'], p['mac_address'])
    File /usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py, line 
707, in internal_network_added
  prefix=INTERNAL_DEV_PREFIX)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/interface.py, 
line 195, in plug
  namespace_obj = ip.ensure_namespace(namespace)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
136, in ensure_namespace
  ip = self.netns.add(name)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
446, in add
  self._as_root('add', name, use_root_namespace=True)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
217, in _as_root
  kwargs.get('use_root_namespace', False))
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
70, in _as_root
  namespace)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 
81, in _execute
  root_helper=root_helper)
    File /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py, line 
76, in execute
  raise RuntimeError(m)
  RuntimeError:
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'add', 'qrouter-b582586e-70e3-4a38-8b19-039f30ce87a9']
  Exit code: 255

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1311804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326718] [NEW] create firewall fail when policy shared but rule unshared

2014-06-05 Thread Xurong Yang
Public bug reported:

openstack@openstack03:~/Vega$ neutron firewall-policy-list
+--------------------------------------+------+----------------------------------------+
| id                                   | name | firewall_rules                         |
+--------------------------------------+------+----------------------------------------+
| 7884fb78-1903-4af6-af3f-55e5c7c047c9 | Demo | [d5578ab5-869b-48cb-be54-85ee9f15d9b2] |
| 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | Test | [8679da8d-200e-4311-bb7d-7febd3f46e37, |
|                                      |      |  86ce188d-18ab-49f2-b664-96c497318056] |
+--------------------------------------+------+----------------------------------------+
openstack@openstack03:~/Vega$ neutron firewall-rule-list
+--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
| id                                   | name     | firewall_policy_id                   | summary                        | enabled |
+--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
| 8679da8d-200e-4311-bb7d-7febd3f46e37 | DenyOne  | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | ICMP,                          | True    |
|                                      |          |                                      |  source: none(none),           |         |
|                                      |          |                                      |  dest: 192.168.0.101/32(none), |         |
|                                      |          |                                      |  deny                          |         |
| 86ce188d-18ab-49f2-b664-96c497318056 | AllowAll | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | ICMP,                          | True    |
|                                      |          |                                      |  source: none(none),           |         |
|                                      |          |                                      |  dest: none(none),             |         |
|                                      |          |                                      |  allow                         |         |
+--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
openstack@openstack03:~/Vega$ neutron firewall-create --name Test Demo
Firewall Rule d5578ab5-869b-48cb-be54-85ee9f15d9b2 could not be found.

and the firewall above hangs with status=PENDING_CREATE:
openstack@openstack03:~/Vega$ neutron firewall-show Test
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| description        |                                      |
| firewall_policy_id | 7884fb78-1903-4af6-af3f-55e5c7c047c9 |
| id                 | 7c59c7da-ace1-4dfa-8b04-2bc6013dbc0a |
| name               | Test                                 |
| status             | PENDING_CREATE                       |
| tenant_id          | a0794fca47de4631b8e414beea4bd51b     |
+--------------------+--------------------------------------+

** Affects: neutron
 Importance: Undecided
 Assignee: Xurong Yang (idopra)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Xurong Yang (idopra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326718

Title:
  create firewall fail when policy shared but rule unshared

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  openstack@openstack03:~/Vega$ neutron firewall-policy-list
  +--------------------------------------+------+----------------------------------------+
  | id                                   | name | firewall_rules                         |
  +--------------------------------------+------+----------------------------------------+
  | 7884fb78-1903-4af6-af3f-55e5c7c047c9 | Demo | [d5578ab5-869b-48cb-be54-85ee9f15d9b2] |
  | 949fef5c-8dd5-4267-98fb-2ba17d2b0a96 | Test | [8679da8d-200e-4311-bb7d-7febd3f46e37, |
  |                                      |      |  86ce188d-18ab-49f2-b664-96c497318056] |
  +--------------------------------------+------+----------------------------------------+
  openstack@openstack03:~/Vega$ neutron firewall-rule-list
  +--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
  | id                                   | name     | firewall_policy_id                   | summary                        | enabled |
  +--------------------------------------+----------+--------------------------------------+--------------------------------+---------+
  | 

[Yahoo-eng-team] [Bug 1302814] Re: nova notifications configuration takes tenant_id in config

2014-06-05 Thread Emilien Macchi
** Changed in: neutron
   Status: In Progress => Fix Committed

** Changed in: neutron
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302814

Title:
  nova notifications configuration takes tenant_id in config

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Since Icehouse, Neutron is able to send notifications to the Nova API
  about some networking events.
  To make this work, you have to provide nova_admin_tenant_id in
  neutron.conf, which is painful for configuration management when
  deploying OpenStack in production.

  As in other OpenStack projects, we should have a new parameter,
  nova_admin_tenant_name, so that deployers don't have to hit the
  Keystone API themselves to look up the real ID.
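
  A rough sketch of what accepting nova_admin_tenant_name implies:
  Neutron would resolve the name to an ID once via Keystone, roughly as
  below (assumes python-keystoneclient's v2.0 API of that era; error
  handling omitted):

      from keystoneclient.v2_0 import client as ks_client

      def resolve_tenant_id(username, password, auth_url, tenant_name):
          ks = ks_client.Client(username=username, password=password,
                                tenant_name=tenant_name, auth_url=auth_url)
          # tenants.list() requires admin rights; match the tenant by name.
          for tenant in ks.tenants.list():
              if tenant.name == tenant_name:
                  return tenant.id
          raise ValueError('tenant %s not found' % tenant_name)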

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324428] Re: net_create fail without definite segmentation_id

2014-06-05 Thread Cedric Brandily
This is not a bug; it is the expected behavior of the current
provider networks extension.

A blueprint [1] proposes to relax the constraints on these inputs; the
implementation is under review [2].

[1] https://blueprints.launchpad.net/neutron/+spec/provider-network-partial-specs
[2] https://review.openstack.org/#/q/topic:bp/provider-network-partial-specs,n,z

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324428

Title:
  net_create fail without definite segmentation_id

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  If I define a provider network but no segmentation_id, net-create
  fails. Why not allocate a segmentation_id automatically?
  ~$ neutron net-create test --provider:network_type=vlan --provider:physical_network=default
  Invalid input for operation: segmentation_id required for VLAN provider network.
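
  For completeness, explicitly supplying the segmentation ID does work
  with the current extension (the VLAN ID below is arbitrary):

  ~$ neutron net-create test --provider:network_type=vlan --provider:physical_network=default --provider:segmentation_id=100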

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326778] [NEW] test_list_migrations_in_flavor_resize_situation fails with 400 after VERIFY_RESIZE state

2014-06-05 Thread Matt Riedemann
Public bug reported:

Fails here:

http://logs.openstack.org/93/96293/2/gate/gate-tempest-dsvm-postgres-
full/a644174/console.html

The error in the n-api log is here:

http://logs.openstack.org/93/96293/2/gate/gate-tempest-dsvm-postgres-
full/a644174/logs/screen-n-api.txt.gz#_2014-06-05_06_50_26_679

From the console log, it looks like it goes into an error state after
going to verify_resize state:

2014-06-05 06:50:26.714 | 2014-06-05 06:50:26,573 Request 
(MigrationsAdminTest:test_list_migrations_in_flavor_resize_situation): 200 GET 
http://127.0.0.1:8774/v2/7d6640f8f8e34866b5bd00e109fe90b7/servers/88591e95-69a2-4e34-a294-90b79d8f0d55
 0.103s
2014-06-05 06:50:26.714 | 2014-06-05 06:50:26,573 State transition 
RESIZE/resize_finish ==> VERIFY_RESIZE/None after 16 second wait
2014-06-05 06:50:26.714 | 2014-06-05 06:50:26,681 Request 
(MigrationsAdminTest:test_list_migrations_in_flavor_resize_situation): 400 POST 
http://127.0.0.1:8774/v2/7d6640f8f8e34866b5bd00e109fe90b7/servers/88591e95-69a2-4e34-a294-90b79d8f0d55/action
 0.106s


http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTWlncmF0aW9uc0FkbWluVGVzdFwiIEFORCBtZXNzYWdlOlwiSFRUUCBleGNlcHRpb24gdGhyb3duXFw6IEluc3RhbmNlIGhhcyBub3QgYmVlbiByZXNpemVkXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDE5NzIzNDQyMjh9

8 hits in 7 days, looks like this started on 6/5.  Fails in check and
gate queues.

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: gate-failure resize testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326778

Title:
  test_list_migrations_in_flavor_resize_situation fails with 400 after
  VERIFY_RESIZE state

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Fails here:

  http://logs.openstack.org/93/96293/2/gate/gate-tempest-dsvm-postgres-
  full/a644174/console.html

  The error in the n-api log is here:

  http://logs.openstack.org/93/96293/2/gate/gate-tempest-dsvm-postgres-
  full/a644174/logs/screen-n-api.txt.gz#_2014-06-05_06_50_26_679

  From the console log, it looks like it goes into an error state after
  going to verify_resize state:

  2014-06-05 06:50:26.714 | 2014-06-05 06:50:26,573 Request 
(MigrationsAdminTest:test_list_migrations_in_flavor_resize_situation): 200 GET 
http://127.0.0.1:8774/v2/7d6640f8f8e34866b5bd00e109fe90b7/servers/88591e95-69a2-4e34-a294-90b79d8f0d55
 0.103s
  2014-06-05 06:50:26.714 | 2014-06-05 06:50:26,573 State transition 
RESIZE/resize_finish ==> VERIFY_RESIZE/None after 16 second wait
  2014-06-05 06:50:26.714 | 2014-06-05 06:50:26,681 Request 
(MigrationsAdminTest:test_list_migrations_in_flavor_resize_situation): 400 POST 
http://127.0.0.1:8774/v2/7d6640f8f8e34866b5bd00e109fe90b7/servers/88591e95-69a2-4e34-a294-90b79d8f0d55/action
 0.106s


  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTWlncmF0aW9uc0FkbWluVGVzdFwiIEFORCBtZXNzYWdlOlwiSFRUUCBleGNlcHRpb24gdGhyb3duXFw6IEluc3RhbmNlIGhhcyBub3QgYmVlbiByZXNpemVkXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDE5NzIzNDQyMjh9

  8 hits in 7 days, looks like this started on 6/5.  Fails in check and
  gate queues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326781] [NEW] v2 api returns 200 with blank response (no image data) for download_image policy

2014-06-05 Thread Abhishek Kekane
Public bug reported:

v2 api returns 200 with blank response (no image data) for
download_image policy

If you have set the download_image policy in policy.json to role:admin,
then the API should return a 403 error when a user with a role other
than admin calls image-download.
Presently it returns 200 with a blank response (no image data). If you
enable the cache filter, it correctly returns the 403 error.

Steps to reproduce:

1. Ensure following flavor is set in glance-api.conf
   [paste-deploy]
   flavor = keystone+cachemanagement

2. Disable cache
   a. Open /etc/glance/glance-api-paste.ini file.
   b. Remove cache from the following sections.
 [pipeline:glance-api-caching]
 [pipeline:glance-api-cachemanagement]
 [pipeline:glance-api-keystone+caching]
 [pipeline:glance-api-keystone+cachemanagement]
 [pipeline:glance-api-trusted-auth+cachemanagement]
   c. Save and exit from file.
   d. Restart the g-api (glance-api) service.

3. Ensure that 'download_image' policy is set in policy.json
   "download_image": "role:admin"

4. Download image using v2 api for role other than admin
   a. source openrc normal_user normal_user
   b. glance --os-image-api-version 2 image-download <image-id>
   
   Output:
   ---
   ''
   
   glance-api screen log:
   --
2014-06-05 12:45:00.711 24883 INFO glance.wsgi.server [-] Traceback 
(most recent call last):
  File /usr/lib/python2.7/dist-packages/eventlet/wsgi.py, line 395, 
in handle_one_response
for data in result:
  File /mnt/stack/glance/glance/notifier.py, line 228, in get_data
for chunk in self.image.get_data():
  File /mnt/stack/glance/glance/api/policy.py, line 233, in get_data
self.policy.enforce(self.context, 'download_image', {})
  File /mnt/stack/glance/glance/api/policy.py, line 143, in enforce
exception.Forbidden, action=action)
  File /mnt/stack/glance/glance/api/policy.py, line 131, in _check
return policy.check(rule, target, credentials, *args, **kwargs)
  File /mnt/stack/glance/glance/openstack/common/policy.py, line 183, 
in check
raise exc(*args, **kwargs)
Forbidden: You are not authorized to complete this action.
2014-06-05 12:45:00.711 24883 INFO glance.wsgi.server [-] 10.146.146.4 
- - [05/Jun/2014 12:45:00] GET 
/v2/images/63826dea-e281-4ffe-821b-f598c747ba54/file HTTP/1.1 200 0 0.062499
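
The mechanics, as a minimal self-contained WSGI sketch (not glance's
actual code): the policy check runs lazily inside the response body
iterator, after the 200 status line has already been sent, so the
client sees 200 with an empty body instead of a 403:

    class Forbidden(Exception):
        pass

    def app(environ, start_response):
        def body():
            # Enforcement happens here, on first iteration of the body
            # -- but start_response('200 OK', ...) has already gone out.
            raise Forbidden()
            yield 'image data'
        start_response('200 OK',
                       [('Content-Type', 'application/octet-stream')])
        return body()

    # eventlet.wsgi logs the traceback (as in the log above) and closes
    # the connection: the client already received "200" and gets zero
    # body bytes. Enforcing the policy *before* start_response would
    # allow a proper 403 response instead.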

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: ntt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1326781

Title:
  v2 api returns 200 with blank response (no image data) for
  download_image policy

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  v2 api returns 200 with blank response (no image data) for
  download_image policy

  If you have set the download_image policy in policy.json to role:admin,
  then the API should return a 403 error when a user with a role other
  than admin calls image-download.
  Presently it returns 200 with a blank response (no image data). If you
  enable the cache filter, it correctly returns the 403 error.

  Steps to reproduce:

  1. Ensure following flavor is set in glance-api.conf
 [paste-deploy]
 flavor = keystone+cachemanagement

  2. Disable cache
 a. Open /etc/glance/glance-api-paste.ini file.
 b. Remove cache from the following sections.
   [pipeline:glance-api-caching]
   [pipeline:glance-api-cachemanagement]
   [pipeline:glance-api-keystone+caching]
   [pipeline:glance-api-keystone+cachemanagement]
   [pipeline:glance-api-trusted-auth+cachemanagement]
 c. Save and exit from file.
 d. Restart the g-api (glance-api) service.

  3. Ensure that 'download_image' policy is set in policy.json
     "download_image": "role:admin"

  4. Download image using v2 api for role other than admin
 a. source openrc normal_user normal_user
 b. glance --os-image-api-version 2 image-download <image-id>
 
 Output:
 ---
 ''
 
 glance-api screen log:
 --
2014-06-05 12:45:00.711 24883 INFO glance.wsgi.server [-] Traceback 
(most recent call last):
  File /usr/lib/python2.7/dist-packages/eventlet/wsgi.py, line 395, 
in handle_one_response
for data in result:
  File /mnt/stack/glance/glance/notifier.py, line 228, in get_data
for chunk in self.image.get_data():
  File /mnt/stack/glance/glance/api/policy.py, line 233, in get_data
self.policy.enforce(self.context, 'download_image', {})
  File /mnt/stack/glance/glance/api/policy.py, line 143, in enforce
exception.Forbidden, action=action)
  File /mnt/stack/glance/glance/api/policy.py, line 131, in _check
return policy.check(rule, 

[Yahoo-eng-team] [Bug 1326793] [NEW] VPNaaS enhance abstract methods for service driver APIs

2014-06-05 Thread Paul Michali
Public bug reported:

Currently an ABC is defined (VPNPlugin) for the service driver APIs. For
some of the methods that must be implemented by the vendor and reference
service drivers, there is an abstract method defined in this class to
ensure that the child classes implement the method:

@abc.abstractmethod
def create_vpnservice(self, context, vpnservice):
pass

@abc.abstractmethod
def update_vpnservice(
self, context, old_vpnservice, vpnservice):
pass

@abc.abstractmethod
def delete_vpnservice(self, context, vpnservice):
pass

However, for other methods, there are no abstract methods defined in
VPNPlugin. Fortunately, the reference implementation and every provider
currently implement these methods in their child classes, but it would
be good to enforce this in the ABC, so that any new service drivers are
sure to implement them.

This is a low-hanging fruit enhancement, ideal for new contributors.
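
A compact sketch of the pattern being requested (the second method name
below is illustrative; only the vpnservice methods are shown abstract in
the snippet above):

    import abc

    class ServiceDriverBase(object):
        __metaclass__ = abc.ABCMeta  # Python 2 style, matching the codebase

        @abc.abstractmethod
        def create_vpnservice(self, context, vpnservice):
            pass

        # Declaring the remaining hooks abstract as well forces new
        # service drivers to implement them up front, instead of
        # failing later at runtime with AttributeError.
        @abc.abstractmethod
        def create_ipsec_site_connection(self, context, ipsec_site_connection):
            pass

    # A subclass that forgets one of these now raises TypeError at
    # instantiation time rather than misbehaving in production.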

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326793

Title:
  VPNaaS enhance abstract methods for service driver APIs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently an ABC is defined (VPNPlugin) for the service driver APIs.
  For some of the methods that must be implemented by the vendor and
  reference service drivers, there is an abstract method defined in this
  class to ensure that the child classes implement the method:

  @abc.abstractmethod
  def create_vpnservice(self, context, vpnservice):
  pass

  @abc.abstractmethod
  def update_vpnservice(
  self, context, old_vpnservice, vpnservice):
  pass

  @abc.abstractmethod
  def delete_vpnservice(self, context, vpnservice):
  pass

  However, for other methods, there are no abstract methods defined in
  VPNPlugin. Fortunately, the reference implementation and every provider
  currently implement these methods in their child classes, but it would
  be good to enforce this in the ABC, so that any new service drivers are
  sure to implement them.

  This is a low-hanging fruit enhancement, ideal for new contributors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1326793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326848] [NEW] pep8 F841 checks got caught in the gate

2014-06-05 Thread Matt Riedemann
Public bug reported:

This merged:

https://review.openstack.org/#/c/74681/

Things got caught in the gate:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRjg0MSBsb2NhbCB2YXJpYWJsZVwiIEFORCBtZXNzYWdlOlwiaXMgYXNzaWduZWQgdG8gYnV0IG5ldmVyIHVzZWRcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwMTk4MTkzNDkxNX0=

This bug is just for tracking purposes so we can clean this out of the
e-r status page.

** Affects: nova
 Importance: Undecided
 Status: Fix Committed


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326848

Title:
  pep8 F841 checks got caught in the gate

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  This merged:

  https://review.openstack.org/#/c/74681/

  Things got caught in the gate:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRjg0MSBsb2NhbCB2YXJpYWJsZVwiIEFORCBtZXNzYWdlOlwiaXMgYXNzaWduZWQgdG8gYnV0IG5ldmVyIHVzZWRcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwMTk4MTkzNDkxNX0=

  This bug is just for tracking purposes so we can clean this out of the
  e-r status page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326848/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267140] [NEW] The output of security group rules does not include egress rules.

2014-06-05 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The output of security group rules does not include egress rules.

Description of problem:
===
The output of security group rules does not include egress rules.

Version-Release number of selected component (if applicable):
=
Tested on RHEL
Icehouse: python-nova-2014.1-0.5.b1.el6.noarch

How reproducible:
=
Always

Steps to Reproduce:
===
1. Add an egress security group rule (I did it via horizon)
2. via CLI: nova secgroup-list-rules <sec group name>

Actual results:
===
List of ingress rules.

Expected results:
=
List of both ingress and egress rules.

** Affects: nova
 Importance: Undecided
 Assignee: Verónica Musso (veronica-a-musso)
 Status: Confirmed

-- 
The output of security group rules does not include egress rules.
https://bugs.launchpad.net/bugs/1267140
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267140] Re: The output of security group rules does not include egress rules.

2014-06-05 Thread Verónica Musso
I've checked the Nova and CLI outputs, and the error belongs to the
former, so I am changing the affected project.

** Project changed: python-novaclient => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267140

Title:
  The output of security group rules does not include egress rules.

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The output of security group rules does not include egress rules.

  Description of problem:
  ===
  The output of security group rules does not include egress rules.

  Version-Release number of selected component (if applicable):
  =
  Tested on RHEL
  Icehouse: python-nova-2014.1-0.5.b1.el6.noarch

  How reproducible:
  =
  Always

  Steps to Reproduce:
  ===
  1. Add an egress security group rule (I did it via horizon)
  2. via CLI: nova secgroup-list-rules <sec group name>

  Actual results:
  ===
  List of ingress rules.

  Expected results:
  =
  List of both ingress and egress rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262566] Re: security group listing race

2014-06-05 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262566

Title:
  security group listing race

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  In this grenade job 
  
http://logs.openstack.org/63/62463/3/gate/gate-grenade-dsvm/4a94d81/console.html

  One process tries to list ALL security groups:
  2013-12-19 06:42:11.519 15319 INFO tempest.common.rest_client [-] Request: 
GET 
http://127.0.0.1:8774/v2/1fb77a9f8ccc417498161dbad4eeabda/os-security-groups?all_tenants=true;
  (pid=15319)

  http://logs.openstack.org/63/62463/3/gate/gate-grenade-
  dsvm/4a94d81/logs/tempest.txt.gz#_2013-12-19_06_42_11_519

  While another process deletes one:
  
http://logs.openstack.org/63/62463/3/gate/gate-grenade-dsvm/4a94d81/logs/tempest.txt.gz#_2013-12-19_06_42_11_510
  2013-12-19 06:42:11.510 15315 INFO tempest.common.rest_client [-] Request: 
DELETE 
http://127.0.0.1:8774/v2/c64e11f31d25473b91f7e1124f41f2a1/os-security-groups/27;
  (pid=15315)

  
  The test case failed on the list-ALL-security-groups request, which was
  issued close in time to a security group deletion:
  '<itemNotFound><message>Security group 27 not found.</message></itemNotFound>'

  
  $ nova secgroup-list --all-tenants 1  # the CLI equivalent of the failing
request; this call should not fail with 'Security group 27 not found.'
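
  Generic sketch of the race and a tolerant listing (illustrative, not
  nova's code): the lister enumerates IDs first and fetches details
  second, so a concurrent delete between the two steps surfaces as
  NotFound; skipping rows that vanished avoids failing the whole call:

      class NotFound(Exception):
          pass

      def list_security_groups(ids, fetch):
          groups = []
          for sg_id in ids:
              try:
                  groups.append(fetch(sg_id))
              except NotFound:
                  # Deleted by a concurrent request between enumeration
                  # and fetch -- skip it rather than fail the listing.
                  continue
          return groups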

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326892] [NEW] Tempest Test for List Migrations in Flavor Resize Situation Fails After Unrelated Test

2014-06-05 Thread Rahul Verma
Public bug reported:

After making a change to Cinder API, Jenkins fails on a single Tempest test 
involving flavor resizing. This is the log:
http://logs.openstack.org/39/97639/2/gate/gate-tempest-dsvm-full/6e2a9e4/console.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326892

Title:
  Tempest Test for List Migrations in Flavor Resize Situation Fails
  After Unrelated Test

Status in OpenStack Compute (Nova):
  New

Bug description:
  After making a change to Cinder API, Jenkins fails on a single Tempest test 
involving flavor resizing. This is the log:
  
http://logs.openstack.org/39/97639/2/gate/gate-tempest-dsvm-full/6e2a9e4/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326899] [NEW] FAIL: glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized_upload_image_authorized

2014-06-05 Thread Anita Kuno
Public bug reported:

2014-05-27 22:27:18.106 | ${PYTHON:-python} -m subunit.run discover -t ./ 
./glance/tests  
2014-05-27 22:27:18.106 | 
==
2014-05-27 22:27:18.106 | FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized_upload_image_authorized
2014-05-27 22:27:18.106 | tags: worker-0
2014-05-27 22:27:18.106 | 
--
2014-05-27 22:27:18.106 | Traceback (most recent call last):
2014-05-27 22:27:18.106 |   File glance/tests/unit/v1/test_api.py, line 904, 
in test_add_copy_from_image_authorized_upload_image_authorized
2014-05-27 22:27:18.107 | self.assertEqual(res.status_int, 201)
2014-05-27 22:27:18.107 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 321, in assertEqual
2014-05-27 22:27:18.107 | self.assertThat(observed, matcher, message)
2014-05-27 22:27:18.107 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 406, in assertThat
2014-05-27 22:27:18.107 | raise mismatch_error
2014-05-27 22:27:18.107 | MismatchError: 400 != 201
2014-05-27 22:27:18.107 | 
==
2014-05-27 22:27:18.107 | FAIL: process-returncode
2014-05-27 22:27:18.107 | tags: worker-0
2014-05-27 22:27:18.107 | 
--
2014-05-27 22:27:18.107 | Binary content:
2014-05-27 22:27:18.108 |   traceback (test/plain; charset=utf8)
2014-05-27 22:27:18.108 | Ran 2249 tests in 777.846s
2014-05-27 22:27:18.108 | FAILED (id=0, failures=2, skips=33)
2014-05-27 22:27:18.108 | error: testr failed (1)
2014-05-27 22:27:18.190 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-glance-python26/.tox/py26/bin/python -m 
glance.openstack.common.lockutils python setup.py test --slowest 
--testr-args=--concurrency 1 '
2014-05-27 22:27:18.190 | ___ summary 

2014-05-27 22:27:18.190 | ERROR:   py26: commands failed

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1326899

Title:
  FAIL:
  
glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized_upload_image_authorized

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  2014-05-27 22:27:18.106 | ${PYTHON:-python} -m subunit.run discover -t ./ 
./glance/tests  
  2014-05-27 22:27:18.106 | 
==
  2014-05-27 22:27:18.106 | FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized_upload_image_authorized
  2014-05-27 22:27:18.106 | tags: worker-0
  2014-05-27 22:27:18.106 | 
--
  2014-05-27 22:27:18.106 | Traceback (most recent call last):
  2014-05-27 22:27:18.106 |   File glance/tests/unit/v1/test_api.py, line 
904, in test_add_copy_from_image_authorized_upload_image_authorized
  2014-05-27 22:27:18.107 | self.assertEqual(res.status_int, 201)
  2014-05-27 22:27:18.107 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 321, in assertEqual
  2014-05-27 22:27:18.107 | self.assertThat(observed, matcher, message)
  2014-05-27 22:27:18.107 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 406, in assertThat
  2014-05-27 22:27:18.107 | raise mismatch_error
  2014-05-27 22:27:18.107 | MismatchError: 400 != 201
  2014-05-27 22:27:18.107 | 
==
  2014-05-27 22:27:18.107 | FAIL: process-returncode
  2014-05-27 22:27:18.107 | tags: worker-0
  2014-05-27 22:27:18.107 | 
--
  2014-05-27 22:27:18.107 | Binary content:
  2014-05-27 22:27:18.108 |   traceback (test/plain; charset=utf8)
  2014-05-27 22:27:18.108 | Ran 2249 tests in 777.846s
  2014-05-27 22:27:18.108 | FAILED (id=0, failures=2, skips=33)
  2014-05-27 22:27:18.108 | error: testr failed (1)
  2014-05-27 22:27:18.190 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-glance-python26/.tox/py26/bin/python -m 
glance.openstack.common.lockutils python setup.py test --slowest 
--testr-args=--concurrency 1 '
  2014-05-27 22:27:18.190 | ___ summary 

  2014-05-27 22:27:18.190 | ERROR:   py26: commands failed

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1326901] [NEW] ServiceBinaryExists - binary for nova-conductor already exists

2014-06-05 Thread Corey Bryant
Public bug reported:

We're hitting an intermittent issue where ServiceBinaryExists is raised
for nova-conductor on deployment.

From nova-conductor's upstart log (/var/log/upstart/nova-conductor.log):

2014-05-15 12:02:25.206 34494 INFO nova.openstack.common.periodic_task [-] 
Skipping periodic task _periodic_update_dns because its interval is negative
2014-05-15 12:02:25.241 34494 INFO nova.openstack.common.service [-] Starting 8 
workers
2014-05-15 12:02:25.242 34494 INFO nova.openstack.common.service [-] Started 
child 34501
2014-05-15 12:02:25.244 34494 INFO nova.openstack.common.service [-] Started 
child 34502
2014-05-15 12:02:25.246 34494 INFO nova.openstack.common.service [-] Started 
child 34503
2014-05-15 12:02:25.246 34501 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
2014-05-15 12:02:25.247 34502 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
2014-05-15 12:02:25.247 34494 INFO nova.openstack.common.service [-] Started 
child 34504
2014-05-15 12:02:25.249 34503 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
2014-05-15 12:02:25.251 34504 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
2014-05-15 12:02:25.254 34505 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
2014-05-15 12:02:25.250 34494 INFO nova.openstack.common.service [-] Started 
child 34505
2014-05-15 12:02:25.261 34494 INFO nova.openstack.common.service [-] Started 
child 34506
2014-05-15 12:02:25.263 34494 INFO nova.openstack.common.service [-] Started 
child 34507
2014-05-15 12:02:25.266 34494 INFO nova.openstack.common.service [-] Started 
child 34508
2014-05-15 12:02:25.267 34507 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
2014-05-15 12:02:25.268 34506 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
2014-05-15 12:02:25.271 34508 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
/usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/session.py:379:
 DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  match = pattern.match(integrity_error.message)
/usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/session.py:379:
 DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  match = pattern.match(integrity_error.message)
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 346, in 
fire_timers
timer()
  File /usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py, line 56, in 
__call__
cb(*args, **kw)
  File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in 
main
2014-05-15 12:02:25.862 34502 ERROR oslo.messaging._drivers.impl_rabbit [-] 
AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying 
again in 1 seconds.
result = function(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, 
line 480, in run_service
service.start()
  File /usr/lib/python2.7/dist-packages/nova/service.py, line 172, in start
self.service_ref = self._create_service_ref(ctxt)
  File /usr/lib/python2.7/dist-packages/nova/service.py, line 224, in 
_create_service_ref
service = self.conductor_api.service_create(context, svc_values)
  File /usr/lib/python2.7/dist-packages/nova/conductor/api.py, line 202, in 
service_create
return self._manager.service_create(context, values)
  File /usr/lib/python2.7/dist-packages/nova/utils.py, line 966, in wrapper
return func(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/nova/conductor/manager.py, line 461, 
in service_create
svc = self.db.service_create(context, values)
  File /usr/lib/python2.7/dist-packages/nova/db/api.py, line 139, in 
service_create
return IMPL.service_create(context, values)
  File /usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 146, 
in wrapper
return f(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 521, 
in service_create
binary=values.get('binary'))
ServiceBinaryExists: Service with host glover binary nova-conductor exists.
2014-05-15 12:02:25.864 34503 ERROR nova.openstack.common.threadgroup [-] 
Service with host glover binary nova-conductor exists.
2014-05-15 12:02:25.864 34503 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2014-05-15 12:02:25.864 34503 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
2014-05-15 12:02:25.864 34503 TRACE nova.openstack.common.threadgroup 
x.wait()
2014-05-15 12:02:25.864 34503 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
2014-05-15 12:02:25.864 34503 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
2014-05-15 12:02:25.864 34503 TRACE nova.openstack.common.threadgroup   

[Yahoo-eng-team] [Bug 1326901] Re: ServiceBinaryExists - binary for nova-conductor already exists

2014-06-05 Thread Corey Bryant
** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326901

Title:
  ServiceBinaryExists - binary for nova-conductor already exists

Status in OpenStack Compute (Nova):
  New
Status in Ubuntu:
  New

Bug description:
  We're hitting an intermittent issue where ServiceBinaryExists is
  raised for nova-conductor on deployment.

  From nova-conductor's upstart log (/var/log/upstart/nova-conductor.log):

  2014-05-15 12:02:25.206 34494 INFO nova.openstack.common.periodic_task [-] 
Skipping periodic task _periodic_update_dns because its interval is negative
  2014-05-15 12:02:25.241 34494 INFO nova.openstack.common.service [-] Starting 
8 workers
  2014-05-15 12:02:25.242 34494 INFO nova.openstack.common.service [-] Started 
child 34501
  2014-05-15 12:02:25.244 34494 INFO nova.openstack.common.service [-] Started 
child 34502
  2014-05-15 12:02:25.246 34494 INFO nova.openstack.common.service [-] Started 
child 34503
  2014-05-15 12:02:25.246 34501 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.247 34502 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.247 34494 INFO nova.openstack.common.service [-] Started 
child 34504
  2014-05-15 12:02:25.249 34503 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.251 34504 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.254 34505 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.250 34494 INFO nova.openstack.common.service [-] Started 
child 34505
  2014-05-15 12:02:25.261 34494 INFO nova.openstack.common.service [-] Started 
child 34506
  2014-05-15 12:02:25.263 34494 INFO nova.openstack.common.service [-] Started 
child 34507
  2014-05-15 12:02:25.266 34494 INFO nova.openstack.common.service [-] Started 
child 34508
  2014-05-15 12:02:25.267 34507 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.268 34506 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.271 34508 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  
/usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/session.py:379:
 DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
match = pattern.match(integrity_error.message)
  
/usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/session.py:379:
 DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
match = pattern.match(integrity_error.message)
  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 346, in 
fire_timers
  timer()
File /usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py, line 56, in 
__call__
  cb(*args, **kw)
File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, 
in main
  2014-05-15 12:02:25.862 34502 ERROR oslo.messaging._drivers.impl_rabbit [-] 
AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying 
again in 1 seconds.
  result = function(*args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, 
line 480, in run_service
  service.start()
File /usr/lib/python2.7/dist-packages/nova/service.py, line 172, in start
  self.service_ref = self._create_service_ref(ctxt)
File /usr/lib/python2.7/dist-packages/nova/service.py, line 224, in 
_create_service_ref
  service = self.conductor_api.service_create(context, svc_values)
File /usr/lib/python2.7/dist-packages/nova/conductor/api.py, line 202, in 
service_create
  return self._manager.service_create(context, values)
File /usr/lib/python2.7/dist-packages/nova/utils.py, line 966, in wrapper
  return func(*args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/conductor/manager.py, line 
461, in service_create
  svc = self.db.service_create(context, values)
File /usr/lib/python2.7/dist-packages/nova/db/api.py, line 139, in 
service_create
  return IMPL.service_create(context, values)
File /usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 
146, in wrapper
  return f(*args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 
521, in service_create
  binary=values.get('binary'))
  ServiceBinaryExists: Service with host glover binary nova-conductor exists.
  2014-05-15 12:02:25.864 34503 ERROR nova.openstack.common.threadgroup [-] 
Service with host glover binary nova-conductor exists.
  2014-05-15 12:02:25.864 34503 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-05-15 12:02:25.864 34503 TRACE nova.openstack.common.threadgroup   File 

[Yahoo-eng-team] [Bug 1326907] [NEW] neutron-openvswitch-agent on fedora 20: SystemError: Unable to determine kernel version for Open vSwitch with VXLAN support.

2014-06-05 Thread Lars Kellogg-Stedman
Public bug reported:

On Fedora 20, neutron-openvswitch-agent fails to start with the
following error:

2014-06-05 16:15:55.379 5266 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent SystemError: Unable
to determine kernel version for Open vSwitch with VXLAN support. To use
VXLAN tunnels with OVS, please ensure that the version is 1.10 or newer!

In neutron/agent/linux/ovs_lib.py, the code is attempting to determine
the installed version of the openvswitch module by calling:

  modinfo openvswitch

It's looking for a line containing "version:", but modinfo produces
no such line; the output looks like this:

filename:       /lib/modules/3.11.10-301.fc20.x86_64/kernel/net/openvswitch/openvswitch.ko
license:        GPL
description:    Open vSwitch switching datapath
depends:        gre
intree:         Y
vermagic:       3.11.10-301.fc20.x86_64 SMP mod_unload
signer:         Fedora kernel signing key
sig_key:        03:59:1D:C5:7A:69:07:41:40:1A:1C:20:2E:2B:3D:9F:4F:ED:2A:0E
sig_hashalgo:   sha256

We can fix this by also looking at the "vermagic: " line, but I question
whether this test should even be here. The output of modinfo wasn't
designed to be machine-parseable, and it is demonstrably not stable.
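
A minimal sketch of the fallback parsing described above (a hypothetical
helper, not the actual ovs_lib code; assumes modinfo is on $PATH):

    import subprocess

    def openvswitch_module_version():
        # Prefer an explicit "version:" field; fall back to "vermagic:",
        # whose first token is the kernel release the module was built for.
        output = subprocess.check_output(["modinfo", "openvswitch"]).decode()
        for key in ("version:", "vermagic:"):
            for line in output.splitlines():
                if line.startswith(key):
                    value = line[len(key):].strip()
                    if value:
                        return value.split()[0]
        return None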

** Affects: neutron
 Importance: Undecided
 Assignee: Lars Kellogg-Stedman (larsks)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326907

Title:
  neutron-openvswitch-agent on fedora 20: SystemError: Unable to
  determine kernel version for Open vSwitch with VXLAN support.

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  On Fedora 20, neutron-openvswitch-agent fails to start with the
  following error:

  2014-06-05 16:15:55.379 5266 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent SystemError: Unable to determine kernel version for Open vSwitch with VXLAN support. To use VXLAN tunnels with OVS, please ensure that the version is 1.10 or newer!

  In neutron/agent/linux/ovs_lib.py, the code is attempting to determine
  the installed version of the openvswitch module by calling:

modinfo openvswitch

  It's looking for a line containing "version: ", but modinfo produces
  no such line; the output looks like this:

  filename:       /lib/modules/3.11.10-301.fc20.x86_64/kernel/net/openvswitch/openvswitch.ko
  license:        GPL
  description:    Open vSwitch switching datapath
  depends:        gre
  intree:         Y
  vermagic:       3.11.10-301.fc20.x86_64 SMP mod_unload
  signer:         Fedora kernel signing key
  sig_key:        03:59:1D:C5:7A:69:07:41:40:1A:1C:20:2E:2B:3D:9F:4F:ED:2A:0E
  sig_hashalgo:   sha256

  We can fix this by also looking at the "vermagic: " line, but I
  question whether this test should even be here. The output of
  modinfo wasn't designed to be machine-parseable, and it is
  demonstrably not stable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1326907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326607] Re: TestLoadBalancerBasic fails in Tempest

2014-06-05 Thread Eugene Nikanorov
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326607

Title:
  TestLoadBalancerBasic fails in Tempest

Status in Tempest:
  In Progress

Bug description:
  2014-06-05 01:28:56.830 | {0} tempest.scenario.test_load_balancer_basic.TestLoadBalancerBasic.test_load_balancer_basic [48.722497s] ... FAILED
  2014-06-05 01:28:56.831 |
  2014-06-05 01:28:56.831 | Captured traceback:
  2014-06-05 01:28:56.831 | ~~~
  2014-06-05 01:28:56.831 |     Traceback (most recent call last):
  2014-06-05 01:28:56.831 |       File "tempest/test.py", line 126, in wrapper
  2014-06-05 01:28:56.831 |         return f(self, *func_args, **func_kwargs)
  2014-06-05 01:28:56.831 |       File "tempest/scenario/test_load_balancer_basic.py", line 305, in test_load_balancer_basic
  2014-06-05 01:28:56.831 |         self._check_load_balancing()
  2014-06-05 01:28:56.832 |       File "tempest/scenario/test_load_balancer_basic.py", line 279, in _check_load_balancing
  2014-06-05 01:28:56.832 |         self._check_connection(self.vip_ip)
  2014-06-05 01:28:56.832 |       File "tempest/scenario/test_load_balancer_basic.py", line 196, in _check_connection
  2014-06-05 01:28:56.832 |         while not try_connect(check_ip, port):
  2014-06-05 01:28:56.832 |       File "tempest/scenario/test_load_balancer_basic.py", line 188, in try_connect
  2014-06-05 01:28:56.832 |         resp = urllib2.urlopen("http://{0}:{1}/".format(ip, port))
  2014-06-05 01:28:56.832 |       File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
  2014-06-05 01:28:56.833 |         return _opener.open(url, data, timeout)
  2014-06-05 01:28:56.833 |       File "/usr/lib/python2.7/urllib2.py", line 400, in open
  2014-06-05 01:28:56.833 |         response = self._open(req, data)
  2014-06-05 01:28:56.833 |       File "/usr/lib/python2.7/urllib2.py", line 418, in _open
  2014-06-05 01:28:56.833 |         '_open', req)
  2014-06-05 01:28:56.833 |       File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
  2014-06-05 01:28:56.834 |         result = func(*args)
  2014-06-05 01:28:56.834 |       File "/usr/lib/python2.7/urllib2.py", line 1207, in http_open
  2014-06-05 01:28:56.834 |         return self.do_open(httplib.HTTPConnection, req)
  2014-06-05 01:28:56.834 |       File "/usr/lib/python2.7/urllib2.py", line 1180, in do_open
  2014-06-05 01:28:56.834 |         r = h.getresponse(buffering=True)
  2014-06-05 01:28:56.834 |       File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
  2014-06-05 01:28:56.835 |         response.begin()
  2014-06-05 01:28:56.835 |       File "/usr/lib/python2.7/httplib.py", line 407, in begin
  2014-06-05 01:28:56.835 |         version, status, reason = self._read_status()
  2014-06-05 01:28:56.835 |       File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
  2014-06-05 01:28:56.835 |         raise BadStatusLine(line)
  2014-06-05 01:28:56.835 | BadStatusLine: ''

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1326607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251520] Re: Nova compute's auto re-enable logic breaks the manual control of the service status

2014-06-05 Thread Vladik Romanovsky
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251520

Title:
  Nova compute's auto re-enable logic breaks the manual control of the
  service status

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Automatically re-enabling the nova-compute service on libvirt connection
  errors prevents the administrator from disabling the service manually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231364] Re: firewalls not shown in network topology

2014-06-05 Thread Navneet Grewal
** Also affects: openstack-dashboard (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1231364

Title:
  firewalls not shown in network topology

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in “openstack-dashboard” package in Ubuntu:
  New

Bug description:
  It would be nice to also see defined firewalls in the graphic of the
  network topology.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1231364/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326811] Re: Client failing with six >=1.6 error

2014-06-05 Thread Denis M.
oslo.config 1.2.0 pins six to a bounded version:
https://github.com/openstack/oslo.config/commit/a55037577a69b6c3c7e425f1da7bea1575a93a8f
All projects are using oslo.config 1.2.0.

As I said, we should probably update the version of oslo.config in the
global requirements and then sync it into all the other projects.

** Also affects: keystone
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1326811

Title:
  Client failing with six >=1.6 error

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Command Line Client:
  New
Status in Openstack Database (Trove):
  New

Bug description:

  13:20:45 + screen -S stack -p key -X stuff 'cd /opt/stack/keystone && /opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --debug & echo $! >/opt/stack/status/stack/key.pid; fg || echo "key failed to start" | tee "/opt/stack/status/stack/key.failure"
  '
  13:20:45 Waiting for keystone to start...
  13:20:45 + echo 'Waiting for keystone to start...'
  13:20:45 + timeout 60 sh -c 'while ! curl --noproxy '\''*'\'' -k -s http://10.5.141.237:5000/v2.0/ >/dev/null; do sleep 1; done'
  13:20:46 + is_service_enabled tls-proxy
  13:20:46 ++ set +o
  13:20:46 ++ grep xtrace
  13:20:46 + local 'xtrace=set -o xtrace'
  13:20:46 + set +o xtrace
  13:20:46 + return 1
  13:20:46 + SERVICE_ENDPOINT=http://10.5.141.237:35357/v2.0
  13:20:46 + is_service_enabled tls-proxy
  13:20:46 ++ set +o
  13:20:46 ++ grep xtrace
  13:20:46 + local 'xtrace=set -o xtrace'
  13:20:46 + set +o xtrace
  13:20:46 + return 1
  13:20:46 + export OS_TOKEN=be19c524ddc92109a224
  13:20:46 + OS_TOKEN=be19c524ddc92109a224
  13:20:46 + export OS_URL=http://10.5.141.237:35357/v2.0
  13:20:46 + OS_URL=http://10.5.141.237:35357/v2.0
  13:20:46 + create_keystone_accounts
  13:20:46 ++ openstack project create admin
  13:20:46 ++ grep ' id '
  13:20:46 ++ get_field 2
  13:20:46 ++ read data
  13:20:46 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:46 + ADMIN_TENANT=
  13:20:46 ++ openstack user create admin --project '' --email ad...@example.com --password 3de4922d8b6ac5a1aad9
  13:20:46 ++ grep ' id '
  13:20:46 ++ get_field 2
  13:20:46 ++ read data
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + ADMIN_USER=
  13:20:47 ++ openstack role create admin
  13:20:47 ++ grep ' id '
  13:20:47 ++ get_field 2
  13:20:47 ++ read data
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + ADMIN_ROLE=
  13:20:47 + openstack role add --project --user
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + exit_trap
  13:20:47 + local r=1
  13:20:47 ++ jobs -p
  13:20:47 + jobs=
  13:20:47 + [[ -n '' ]]
  13:20:47 + kill_spinner
  13:20:47 + '[' '!' -z '' ']'
  13:20:47 + exit 1

  https://rdjenkins.dyndns.org/job/Trove-Gate/3974/console

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1326811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326926] [NEW] Conductor passing GlanceImageService instead of nova.image.api.API object to compute_utils.get_image_metadata()

2014-06-05 Thread Leandro Ignacio Costantino
Public bug reported:

After the image-api partial refactor, some calls still use a glance service
instance to call compute_utils.get_image_metadata, which expects the
object to have a 'get()' method.

Since that method is not present in GlanceImageService, an exception is
thrown and the image metadata cannot be retrieved.

Sample log when calling _cold_migration(..):
2014-06-05 15:45:13.138 WARNING nova.compute.utils [req-7a86365f-f01a-4d49-b1c3-595e8dc9bd24 admin admin] [instance: 290d3587-b69a-48d8-b5c0-307259e2f590] Can't access image 40c33532-0aed-4acc-8d7a-2a45698e1f2d: 'GlanceImageService' object has no attribute 'get'
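
A self-contained sketch of the mismatch (stand-in classes, not the real
nova/glance objects):

    class ImageAPI(object):
        """Stand-in for nova.image.api.API: exposes get()."""
        def get(self, context, image_id):
            return {'id': image_id, 'properties': {}}

    class GlanceImageService(object):
        """Stand-in for the glance service: exposes show(), not get()."""
        def show(self, context, image_id):
            return {'id': image_id, 'properties': {}}

    def get_image_metadata(context, image_api, image_id):
        # Mirrors compute_utils' expectation of a get() method.
        return image_api.get(context, image_id)

    get_image_metadata(None, ImageAPI(), '40c33532')  # works
    try:
        get_image_metadata(None, GlanceImageService(), '40c33532')
    except AttributeError as e:
        print(e)  # 'GlanceImageService' object has no attribute 'get'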

** Affects: nova
 Importance: Undecided
 Assignee: Jay Pipes (jaypipes)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326926

Title:
  Conductor passing GlanceImageService instead of nova.image.api.API
  object to compute_utils.get_image_metadata()

Status in OpenStack Compute (Nova):
  New

Bug description:
  After the image-api partial refactor, some calls still use a glance
  service instance to call compute_utils.get_image_metadata, which
  expects the object to have a 'get()' method.

  Since that method is not present in GlanceImageService, an exception
  is thrown and the image metadata cannot be retrieved.

  Sample log when calling _cold_migration(..):
  2014-06-05 15:45:13.138 WARNING nova.compute.utils [req-7a86365f-f01a-4d49-b1c3-595e8dc9bd24 admin admin] [instance: 290d3587-b69a-48d8-b5c0-307259e2f590] Can't access image 40c33532-0aed-4acc-8d7a-2a45698e1f2d: 'GlanceImageService' object has no attribute 'get'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302163] Re: nova boot VIF creation fails

2014-06-05 Thread Chris Ricker
** Changed in: openstack-cisco
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302163

Title:
  nova boot VIF creation fails

Status in OpenStack Compute (Nova):
  Invalid
Status in Openstack @ Cisco:
  Fix Released

Bug description:
  aio, cisco, including https://review.openstack.org/#/c/85121/

  nova boots time out with:

  2014-04-03 20:21:44.973 1816 WARNING nova.virt.libvirt.driver [req-bff726de-c4bf-4a46-a1a1-8793499f3380 f94b73f2d0c54b6e8fe7bc4525db9a32 59839cbf923e46eea50ff7185ecae8b7] Timeout waiting for vif plugging callback for instance 74574690-c46a-423a-b327-24a141c84137
  2014-04-03 20:21:45.928 1816 ERROR nova.compute.manager [req-bff726de-c4bf-4a46-a1a1-8793499f3380 f94b73f2d0c54b6e8fe7bc4525db9a32 59839cbf923e46eea50ff7185ecae8b7] [instance: 74574690-c46a-423a-b327-24a141c84137] Instance failed to spawn
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137] Traceback (most recent call last):
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1718, in _spawn
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137]     block_device_info)
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2251, in spawn
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137]     block_device_info)
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3654, in _create_domain_and_network
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137]     raise exception.VirtualInterfaceCreateException()
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137] VirtualInterfaceCreateException: Virtual Interface creation failed
  2014-04-03 20:21:45.928 1816 TRACE nova.compute.manager [instance: 74574690-c46a-423a-b327-24a141c84137]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326937] [NEW] brocade ml2 mechanism driver depends on templates of brocade plugin

2014-06-05 Thread Shiv Haris
Public bug reported:

The Brocade ML2 mechanism driver includes templates from the Brocade plugin
directory.

If the Brocade plugin is not installed on a system, this file
inclusion will fail for the mechanism driver.

This was an unfortunate typo; the fix is:

--- INDEX:/neutron/plugins/ml2/drivers/brocade/nos/nosdriver.py
+++ WORKDIR:/neutron/plugins/ml2/drivers/brocade/nos/nosdriver.py
@@ -26,7 +26,7 @@ from ncclient import manager
 
 from neutron.openstack.common import excutils
 from neutron.openstack.common import log as logging
-from neutron.plugins.brocade.nos import nctemplates as template
+from neutron.plugins.ml2.drivers.brocade.nos import nctemplates as template

** Affects: neutron
 Importance: Undecided
 Assignee: Shiv Haris (shh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Shiv Haris (shh)

** Changed in: neutron
Milestone: None => juno-1

** Changed in: neutron
Milestone: juno-1 => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326937

Title:
  brocade ml2 mechanism driver depends on templates of brocade plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Brocade ML2 mechanism driver includes templates from the Brocade
  plugin directory.

  If the Brocade plugin is not installed on a system, this file
  inclusion will fail for the mechanism driver.

  This was an unfortunate typo; the fix is:

  --- INDEX:/neutron/plugins/ml2/drivers/brocade/nos/nosdriver.py
  +++ WORKDIR:/neutron/plugins/ml2/drivers/brocade/nos/nosdriver.py
  @@ -26,7 +26,7 @@ from ncclient import manager
   
   from neutron.openstack.common import excutils
   from neutron.openstack.common import log as logging
  -from neutron.plugins.brocade.nos import nctemplates as template
  +from neutron.plugins.ml2.drivers.brocade.nos import nctemplates as template

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1326937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310931] Re: nova migration-list fails with unicode error

2014-06-05 Thread Justin Shepherd
Marking as fix released as the original poster confirmed that the
issue has been resolved.

** Changed in: nova
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310931

Title:
  nova migration-list fails with unicode error

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Doing a "nova migration-list" I get the following error in the nova-api
  logs:

  ERROR Exception handling resource: 'unicode' object does not support item deletion
  TRACE nova.api.openstack.wsgi Traceback (most recent call last):
  TRACE nova.api.openstack.wsgi   File "/opt/nova/nova/api/openstack/wsgi.py", line 983, in _process_stack
  TRACE nova.api.openstack.wsgi     action_result = self.dispatch(meth, request, action_args)
  TRACE nova.api.openstack.wsgi   File "/opt/nova/nova/api/openstack/wsgi.py", line 1070, in dispatch
  TRACE nova.api.openstack.wsgi     return method(req=request, **action_args)
  TRACE nova.api.openstack.wsgi   File "/opt/nova/nova/api/openstack/compute/contrib/migrations.py", line 74, in index
  TRACE nova.api.openstack.wsgi     return {'migrations': output(migrations)}
  TRACE nova.api.openstack.wsgi   File "/opt/nova/nova/api/openstack/compute/contrib/migrations.py", line 37, in output
  TRACE nova.api.openstack.wsgi     del obj['deleted']
  TRACE nova.api.openstack.wsgi TypeError: 'unicode' object does not support item deletion
  Returning 400 to user: The server could not comply with the request since it is either malformed or otherwise incorrect. __call__ /opt/nova/nova/api/openstack/wsgi.py:1215

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326955] [NEW] v1 API GET on image member not implemented

2014-06-05 Thread Scott Devoid
Public bug reported:

Despite having a client call `glanceclient.image_members.get(image_id,
member_id)` [1], the GET call on /v1/image/<uuid>/members/<id> is not
implemented in the v1 API and returns a "405: Method Not Allowed" error.

I suspect that this was an unintentional omission. The method is listed
in the router, but in image_members the comment indicates that a 405
response is intentional. [2,3] It shouldn't be hard for me to implement
the fix, but I want to make sure that there wasn't an intentional reason
for leaving the API call out?

[1] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/image_members.py#L34
[2] https://github.com/openstack/glance/blob/master/glance/api/v1/router.py#L71
[3] https://github.com/openstack/glance/blob/master/glance/api/v1/members.py#L105
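
A hypothetical reproduction with python-glanceclient (the endpoint, token,
and IDs below are placeholders); today the last call fails with HTTP 405:

    from glanceclient import Client

    glance = Client('1', 'http://glance.example.com:9292', token='ADMIN_TOKEN')
    # Issues GET /v1/image/<uuid>/members/<id>; the route is mapped in the
    # router [2] but the controller answers 405 Method Not Allowed [3].
    member = glance.image_members.get('IMAGE_UUID', 'MEMBER_TENANT_ID')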

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1326955

Title:
  v1 API GET on image member not implemented

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Despite having a client call `glanceclient.image_members.get(image_id,
  member_id)` [1], the GET call on /v1/image/<uuid>/members/<id> is not
  implemented in the v1 API and returns a "405: Method Not Allowed" error.

  I suspect that this was an unintentional omission. The method is
  listed in the router, but in image_members the comment indicates that
  a 405 response is intentional. [2,3] It shouldn't be hard for me to
  implement the fix, but I want to make sure that there wasn't an
  intentional reason for leaving the API call out?

  [1] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/image_members.py#L34
  [2] https://github.com/openstack/glance/blob/master/glance/api/v1/router.py#L71
  [3] https://github.com/openstack/glance/blob/master/glance/api/v1/members.py#L105

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1326955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326958] [NEW] default security groups listing doesn't work when neutron is managing security groups

2014-06-05 Thread Matt Fischer
Public bug reported:

Neutron does not seem to implement the default security groups calls, so
when neutron is managing security groups, nova tries to pass the call
off to it (I think) and fails. I think this bug is really against
neutron and nova, but I'm not sure where to start. I'm not sure if
anyone else is trying to use this call or not and maybe it should just
be dropped. The API doesn't support it and the docs on it are wrong.

http://docs.openstack.org/api/openstack-compute/2/content/ext-os-security-group-default-rules.html
(note that the example URLs in that doc are missing the word "default")

curl -i 'http://1.2.3.4:8774/v2/f5ad8f41cd8540ca83b6998b83bf9bba/os-security-group-default-rules' -X GET -H "X-Auth-Project-Id: admin" -H "Accept: application/json" -H "X-Auth-Token: 487b898af056401b806786623e3c2656"

2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack Traceback (most recent call last):
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 125, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     return req.get_response(self.application)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     application, catch_exc_info=False)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in call_application
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     return resp(environ, start_response)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", line 582, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     return self.app(env, start_response)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     return resp(environ, start_response)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     return resp(environ, start_response)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     response = self.app(environ, start_response)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     return resp(environ, start_response)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 917, in __call__
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     content_type, body, accept)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 983, in _process_stack
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     action_result = self.dispatch(meth, request, action_args)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 1070, in dispatch
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     return method(req=request, **action_args)
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_group_default_rules.py", line 181, in index
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack     for rule in self.security_group_api.get_all_default_rules(context):
2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack AttributeError: 'NativeNeutronSecurityGroupAPI' object has no attribute 'get_all_default_rules'

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Neutron does not seem to implement the default security groups calls, so
  when neutron is managing security groups, nova tries to pass the call
  off to it (I think) 

[Yahoo-eng-team] [Bug 1326965] [NEW] instance doesn't work after a failed migration

2014-06-05 Thread Stephen Gordon
Public bug reported:

Description of problem:
While trying to migrate an instance, the migration failed because of
incorrect ssh key authentication between the nova compute nodes.
After the failed migration:
1. The instance failed to run.
2. It couldn't be deleted.
3. Its volumes couldn't be detached, snapshotted, or copied from.

Thus none of the instance's consistent data was available.

Version-Release number of selected component (if applicable):
python-nova-2014.1-0.11.b2.fc21.noarch
openstack-nova-api-2014.1-0.11.b2.fc21.noarch
openstack-nova-cert-2014.1-0.11.b2.fc21.noarch
openstack-nova-conductor-2014.1-0.11.b2.fc21.noarch
openstack-nova-scheduler-2014.1-0.11.b2.fc21.noarch
openstack-nova-compute-2014.1-0.11.b2.fc21.noarch
python-novaclient-2.15.0-1.fc20.noarch
openstack-nova-common-2014.1-0.11.b2.fc21.noarch
openstack-nova-console-2014.1-0.11.b2.fc21.noarch
openstack-nova-novncproxy-2014.1-0.11.b2.fc21.noarch

How reproducible:
100%

Steps to Reproduce:
1. launch an instance
2. try to migrate it. 
3. see instance status.

Actual results:
the instance is not responding to any action.

Expected results:
if the migration fails before doing anything to the instance's files, the
instance should be up and running, and an error about the failed migration
should appear.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326965

Title:
  instance doesn't work after a failed migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  While trying to migrate an instance, the migration failed because of
  incorrect ssh key authentication between the nova compute nodes.
  After the failed migration:
  1. The instance failed to run.
  2. It couldn't be deleted.
  3. Its volumes couldn't be detached, snapshotted, or copied from.

  Thus none of the instance's consistent data was available.

  Version-Release number of selected component (if applicable):
  python-nova-2014.1-0.11.b2.fc21.noarch
  openstack-nova-api-2014.1-0.11.b2.fc21.noarch
  openstack-nova-cert-2014.1-0.11.b2.fc21.noarch
  openstack-nova-conductor-2014.1-0.11.b2.fc21.noarch
  openstack-nova-scheduler-2014.1-0.11.b2.fc21.noarch
  openstack-nova-compute-2014.1-0.11.b2.fc21.noarch
  python-novaclient-2.15.0-1.fc20.noarch
  openstack-nova-common-2014.1-0.11.b2.fc21.noarch
  openstack-nova-console-2014.1-0.11.b2.fc21.noarch
  openstack-nova-novncproxy-2014.1-0.11.b2.fc21.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. launch an instance
  2. try to migrate it. 
  3. see instance status.

  Actual results:
  the instance is not responding to any action.

  Expected results:
  if the migration fails before doing anything to the instance's files, the
  instance should be up and running, and an error about the failed migration
  should appear.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326974] [NEW] nova-clear-rabbit-queues removed from Juno (master branch of nova). Make appropriate changes to nova-spec file

2014-06-05 Thread Shraddha Pandhe
Public bug reported:

Current nova spec file expects the file nova-clear-rabbit-queues in
the bindir.

https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-nova.spec#L562

Until icehouse, nova was generating that file:
https://github.com/openstack/nova/blob/stable/icehouse/setup.cfg#L42

But the file is gone in Juno:
https://github.com/openstack/nova/blob/master/setup.cfg

Need to update the Anvil spec file to something like:

#if $older_than('2014.1')
%{_bindir}/nova-clear-rabbit-queues
#end if

** Affects: anvil
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New

** Changed in: anvil
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1326974

Title:
  nova-clear-rabbit-queues removed from Juno (master branch of nova).
  Make appropriate changes to nova-spec file

Status in ANVIL for forging OpenStack.:
  New

Bug description:
  Current nova spec file expects the file nova-clear-rabbit-queues in
  the bindir.

  https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-nova.spec#L562

  Until icehouse, nova was generating that file:
  https://github.com/openstack/nova/blob/stable/icehouse/setup.cfg#L42

  But the file is gone in Juno:
  https://github.com/openstack/nova/blob/master/setup.cfg

  Need to update the Anvil spec file to something like:

  #if $older_than('2014.1')
  %{_bindir}/nova-clear-rabbit-queues
  #end if

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1326974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326811] Re: Client failing with six >=1.6 error

2014-06-05 Thread Dolph Mathews
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326811

Title:
  Client failing with six >=1.6 error

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Command Line Client:
  New
Status in Openstack Database (Trove):
  New

Bug description:

  13:20:45 + screen -S stack -p key -X stuff 'cd /opt/stack/keystone && /opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --debug & echo $! >/opt/stack/status/stack/key.pid; fg || echo "key failed to start" | tee "/opt/stack/status/stack/key.failure"
  '
  13:20:45 Waiting for keystone to start...
  13:20:45 + echo 'Waiting for keystone to start...'
  13:20:45 + timeout 60 sh -c 'while ! curl --noproxy '\''*'\'' -k -s http://10.5.141.237:5000/v2.0/ >/dev/null; do sleep 1; done'
  13:20:46 + is_service_enabled tls-proxy
  13:20:46 ++ set +o
  13:20:46 ++ grep xtrace
  13:20:46 + local 'xtrace=set -o xtrace'
  13:20:46 + set +o xtrace
  13:20:46 + return 1
  13:20:46 + SERVICE_ENDPOINT=http://10.5.141.237:35357/v2.0
  13:20:46 + is_service_enabled tls-proxy
  13:20:46 ++ set +o
  13:20:46 ++ grep xtrace
  13:20:46 + local 'xtrace=set -o xtrace'
  13:20:46 + set +o xtrace
  13:20:46 + return 1
  13:20:46 + export OS_TOKEN=be19c524ddc92109a224
  13:20:46 + OS_TOKEN=be19c524ddc92109a224
  13:20:46 + export OS_URL=http://10.5.141.237:35357/v2.0
  13:20:46 + OS_URL=http://10.5.141.237:35357/v2.0
  13:20:46 + create_keystone_accounts
  13:20:46 ++ openstack project create admin
  13:20:46 ++ grep ' id '
  13:20:46 ++ get_field 2
  13:20:46 ++ read data
  13:20:46 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:46 + ADMIN_TENANT=
  13:20:46 ++ openstack user create admin --project '' --email ad...@example.com --password 3de4922d8b6ac5a1aad9
  13:20:46 ++ grep ' id '
  13:20:46 ++ get_field 2
  13:20:46 ++ read data
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + ADMIN_USER=
  13:20:47 ++ openstack role create admin
  13:20:47 ++ grep ' id '
  13:20:47 ++ get_field 2
  13:20:47 ++ read data
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + ADMIN_ROLE=
  13:20:47 + openstack role add --project --user
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + exit_trap
  13:20:47 + local r=1
  13:20:47 ++ jobs -p
  13:20:47 + jobs=
  13:20:47 + [[ -n '' ]]
  13:20:47 + kill_spinner
  13:20:47 + '[' '!' -z '' ']'
  13:20:47 + exit 1

  https://rdjenkins.dyndns.org/job/Trove-Gate/3974/console

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1326811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326811] Re: Client failing with six >=1.6 error

2014-06-05 Thread Nikhil Manchanda
** Changed in: trove
   Status: New => Invalid

** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326811

Title:
  Client failing with six >=1.6 error

Status in devstack - openstack dev environments:
  New
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Command Line Client:
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:

  13:20:45 + screen -S stack -p key -X stuff 'cd /opt/stack/keystone && /opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --debug & echo $! >/opt/stack/status/stack/key.pid; fg || echo "key failed to start" | tee "/opt/stack/status/stack/key.failure"
  '
  13:20:45 Waiting for keystone to start...
  13:20:45 + echo 'Waiting for keystone to start...'
  13:20:45 + timeout 60 sh -c 'while ! curl --noproxy '\''*'\'' -k -s http://10.5.141.237:5000/v2.0/ >/dev/null; do sleep 1; done'
  13:20:46 + is_service_enabled tls-proxy
  13:20:46 ++ set +o
  13:20:46 ++ grep xtrace
  13:20:46 + local 'xtrace=set -o xtrace'
  13:20:46 + set +o xtrace
  13:20:46 + return 1
  13:20:46 + SERVICE_ENDPOINT=http://10.5.141.237:35357/v2.0
  13:20:46 + is_service_enabled tls-proxy
  13:20:46 ++ set +o
  13:20:46 ++ grep xtrace
  13:20:46 + local 'xtrace=set -o xtrace'
  13:20:46 + set +o xtrace
  13:20:46 + return 1
  13:20:46 + export OS_TOKEN=be19c524ddc92109a224
  13:20:46 + OS_TOKEN=be19c524ddc92109a224
  13:20:46 + export OS_URL=http://10.5.141.237:35357/v2.0
  13:20:46 + OS_URL=http://10.5.141.237:35357/v2.0
  13:20:46 + create_keystone_accounts
  13:20:46 ++ openstack project create admin
  13:20:46 ++ grep ' id '
  13:20:46 ++ get_field 2
  13:20:46 ++ read data
  13:20:46 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:46 + ADMIN_TENANT=
  13:20:46 ++ openstack user create admin --project '' --email ad...@example.com --password 3de4922d8b6ac5a1aad9
  13:20:46 ++ grep ' id '
  13:20:46 ++ get_field 2
  13:20:46 ++ read data
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + ADMIN_USER=
  13:20:47 ++ openstack role create admin
  13:20:47 ++ grep ' id '
  13:20:47 ++ get_field 2
  13:20:47 ++ read data
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + ADMIN_ROLE=
  13:20:47 + openstack role add --project --user
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + exit_trap
  13:20:47 + local r=1
  13:20:47 ++ jobs -p
  13:20:47 + jobs=
  13:20:47 + [[ -n '' ]]
  13:20:47 + kill_spinner
  13:20:47 + '[' '!' -z '' ']'
  13:20:47 + exit 1

  https://rdjenkins.dyndns.org/job/Trove-Gate/3974/console

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1326811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308715] Re: Deadlock on quota_usages

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308715

Title:
  Deadlock on quota_usages

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  We are getting deadlocks for concurrent quota reservations that we did
  not see in grizzly:

  see https://bugs.launchpad.net/nova/+bug/1283987

  The deadlock handling needs to be fixed as per above, but we shouldn't
  be deadlocking, here. It seems this is due to bad indexes in the
  database:

  mysql> show index from quota_usages;
  +--------------+------------+---------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
  | Table        | Non_unique | Key_name                        | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
  +--------------+------------+---------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
  | quota_usages |          0 | PRIMARY                         |            1 | id          | A         |           8 |     NULL | NULL   |      | BTREE      |         |               |
  | quota_usages |          1 | ix_quota_usages_project_id      |            1 | project_id  | A         |           8 |     NULL | NULL   | YES  | BTREE      |         |               |
  | quota_usages |          1 | ix_quota_usages_user_id_deleted |            1 | user_id     | A         |           8 |     NULL | NULL   | YES  | BTREE      |         |               |
  | quota_usages |          1 | ix_quota_usages_user_id_deleted |            2 | deleted     | A         |           8 |     NULL | NULL   | YES  | BTREE      |         |               |
  +--------------+------------+---------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
  4 rows in set (0.01 sec)

  mysql> explain select * from quota_usages where project_id='foo' and user_id='bar' and deleted=0;
  +----+-------------+--------------+------+------------------------------------------------------------+----------------------------+---------+-------+------+------------------------------------+
  | id | select_type | table        | type | possible_keys                                              | key                        | key_len | ref   | rows | Extra                              |
  +----+-------------+--------------+------+------------------------------------------------------------+----------------------------+---------+-------+------+------------------------------------+
  |  1 | SIMPLE      | quota_usages | ref  | ix_quota_usages_project_id,ix_quota_usages_user_id_deleted | ix_quota_usages_project_id | 768     | const |    1 | Using index condition; Using where |
  +----+-------------+--------------+------+------------------------------------------------------------+----------------------------+---------+-------+------+------------------------------------+
  1 row in set (0.00 sec)

  We should have an index on project_id/deleted and
  project_id/user_id/deleted instead of the current values.
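
  A sketch of a sqlalchemy-migrate style migration adding those indexes
  (hypothetical index names, not the actual nova migration):

      from sqlalchemy import Index, MetaData, Table

      def upgrade(migrate_engine):
          meta = MetaData(bind=migrate_engine)
          quota_usages = Table('quota_usages', meta, autoload=True)
          Index('ix_quota_usages_project_id_deleted',
                quota_usages.c.project_id, quota_usages.c.deleted).create()
          Index('ix_quota_usages_project_id_user_id_deleted',
                quota_usages.c.project_id, quota_usages.c.user_id,
                quota_usages.c.deleted).create()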

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321872] Re: Resize/migrate stopped instance fails with Neutron event timeout

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321872

Title:
  Resize/migrate stopped instance fails with Neutron event timeout

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  I originally thought this was bug 1306342 but that's a different
  issue, that was when not having neutron configured properly for
  calling back to nova and nova would timeout on spawn waiting for a
  notification from neutron that networking was setup.

  This is a different issue where resize/migrate fails if you started
  from a stopped instance and using neutron.  In this case, the
  _create_domain_and_network method in the libvirt driver passes in
  power_on=False since the instance was stopped before the
  resize/migration.  The virtual interface isn't plugged in that case so
  we're waiting on a neutron event that's not going to happen, and we
  hit the eventlet timeout which then tries to destroy the non-running
  domain, and that fails with a libvirtError telling you that the domain
  isn't running in the first place.

  The fix is to check the power_on flag in _create_domain_and_network
  and if it's False, don't wait for neutron events, same as if
  vifs_already_plugged=False is passed in.
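
  A minimal, self-contained sketch of that guard, with assumed names (not
  the actual nova.virt.libvirt.driver code):

      def events_to_wait_for(network_info, power_on=True,
                             vifs_already_plugged=False,
                             vif_plugging_timeout=300):
          # A domain that will not be powered on never plugs its VIFs, so
          # waiting for network-vif-plugged events would always time out.
          if not power_on or vifs_already_plugged or not vif_plugging_timeout:
              return []
          return [('network-vif-plugged', vif['id']) for vif in network_info]

      # Resize/migrate of a stopped instance: no events expected.
      assert events_to_wait_for([{'id': 'vif-1'}], power_on=False) == []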

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1307408] Re: VMWare - Destroy fails when Claim is not successful

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1307408

Title:
  VMWare - Destroy fails when Claim is not successful

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  If a claim is not successful, the compute manager triggers a call to
  destroy the instance.
  The destroy fails since the compute node (cluster) is set only after the
  claim is successful.

  This issue occurs when multiple parallel nova boot operations are
  triggered simultaneously.

  Snippet from nova-compute.log
  2014-04-06 22:48:52.454 DEBUG nova.compute.utils [req-0663cdf1-9969-446a-af08-299f18366394 demo demo] [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572] Insufficient compute resources: Free memory 975.00 MB < requested 2000 MB. from (pid=9041) notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:336
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572] Traceback (most recent call last):
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]   File "/opt/stack/nova/nova/compute/manager.py", line 1289, in _build_instance
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]     with rt.instance_claim(context, instance, limits):
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]   File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 249, in inner
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]     return f(*args, **kwargs)
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 122, in instance_claim
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]     overhead=overhead, limits=limits)
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]   File "/opt/stack/nova/nova/compute/claims.py", line 95, in __init__
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]     self._claim_test(resources, limits)
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]   File "/opt/stack/nova/nova/compute/claims.py", line 148, in _claim_test
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]     "; ".join(reasons))
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572] ComputeResourcesUnavailable: Insufficient compute resources: Free memory 975.00 MB < requested 2000 MB.
  2014-04-06 22:48:52.454 TRACE nova.compute.utils [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572]
  2014-04-06 22:48:52.455 DEBUG nova.compute.manager [req-0663cdf1-9969-446a-af08-299f18366394 demo demo] [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572] Clean up resource before rescheduling. from (pid=9041) _reschedule_or_error /opt/stack/nova/nova/compute/manager.py:1401
  2014-04-06 22:48:52.455 AUDIT nova.compute.manager [req-0663cdf1-9969-446a-af08-299f18366394 demo demo] [instance: b22186ec-9f05-4f7d-a0d6-2276baeb6572] Terminating instance
  2014-04-06 22:48:52.544 DEBUG nova.network.api [req-8cf2f302-42af-46e2-b745-fa30902c3319 demo demo] Updating cache with info: [] from (pid=9041) update_instance_cache_with_nw_info /opt/stack/nova/nova/network/api.py:74
  2014-04-06 22:48:52.555 DEBUG nova.objects.instance [req-0663cdf1-9969-446a-af08-299f18366394 demo demo] Lazy-loading `system_metadata' on Instance uuid b22186ec-9f05-4f7d-a0d6-2276baeb6572 from (pid=9041) obj_load_attr /opt/stack/nova/nova/objects/instance.py:519
  2014-04-06 22:48:52.563 DEBUG nova.compute.manager [req-8cf2f302-42af-46e2-b745-fa30902c3319 demo demo] [instance: 1deeb6c0-ed7f-4f5a-bdcc-97765803d18b] Deallocating network for instance from (pid=9041) _deallocate_network /opt/stack/nova/nova/compute/manager.py:1784
  2014-04-06

[Yahoo-eng-team] [Bug 1300380] Re: errors_out_migration decorator does not work with RPC calls

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300380

Title:
  errors_out_migration decorator does not work with RPC calls

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The errors_out_migration decorator in nova/compute/manager.py is
  attempting to use positional arguments to get the migration parameter.
  However, at run time, the decorated methods are called via RPC calls
  which specify all of their arguments as keyword arguments.  The
  decorator needs to be fixed so that it works with RPC calls.
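
  A minimal sketch of a kwarg-aware version of the decorator (assumed
  names, not the actual nova code):

      import functools

      def errors_out_migration(function):
          @functools.wraps(function)
          def decorated(self, context, *args, **kwargs):
              try:
                  return function(self, context, *args, **kwargs)
              except Exception:
                  # RPC dispatch passes every argument by keyword, so the
                  # migration cannot be read from a fixed positional slot.
                  migration = kwargs.get('migration')
                  if migration is not None:
                      migration.status = 'error'
                      migration.save()  # persist however the real object does
                  raise
          return decorated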

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297485] Re: Libvirt driver lifecycle event not recorded by nova in timely manner

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297485

Title:
  Libvirt driver lifecycle event not recorded by nova in timely manner

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  I am using a nova instance to run an automated install of Fedora,
  RHEL, Ubuntu, and Windows.  After the installers finish, the instance
  is shut down.  However, Nova continues to report that the instance is
  still active for up to 5 minutes after it has been shut down.  "virsh
  list" shows no instances running while nova reports one being active.

  This was a bug back in Folsom, but was fixed in the last release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294210] Re: InvalidCPUInfo exception not caught in conductor _live_migrate

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294210

Title:
  InvalidCPUInfo exception not caught in conductor _live_migrate

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Environment: 
  - fresh devstack multinode installation using nova-network.
  - one controller node with a compute node (node_1) 
  - one compute node (node_2)

  Both compute nodes has different cpu info features.

  When live migrating from node_1 to node_2, an InvalidCPUInfo exception is
  raised in the _live_migrate method in the conductor's manager.
  The exception is not caught and the rollback of the task_state is not
  performed.
  The instance stays in the 'migrating' state.
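
  A self-contained sketch of the missing rollback (stand-in objects, not
  the actual conductor code):

      class InvalidCPUInfo(Exception):
          pass

      class Instance(object):
          def __init__(self):
              self.task_state = 'migrating'

          def save(self):
              pass  # stands in for persisting to the DB

      def execute_live_migration(instance, compatible_cpu):
          if not compatible_cpu:
              raise InvalidCPUInfo('Unacceptable CPU info')

      def _live_migrate(instance, compatible_cpu):
          try:
              execute_live_migration(instance, compatible_cpu)
          except InvalidCPUInfo:
              instance.task_state = None  # roll back so the instance is usable
              instance.save()
              raise

      instance = Instance()
      try:
          _live_migrate(instance, compatible_cpu=False)
      except InvalidCPUInfo:
          assert instance.task_state is None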

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294210/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262566] Re: security group listing race

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262566

Title:
  security group listing race

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  In this grenade job
  http://logs.openstack.org/63/62463/3/gate/gate-grenade-dsvm/4a94d81/console.html

  one process tries to list ALL security groups:
  2013-12-19 06:42:11.519 15319 INFO tempest.common.rest_client [-] Request: GET "http://127.0.0.1:8774/v2/1fb77a9f8ccc417498161dbad4eeabda/os-security-groups?all_tenants=true"
  (pid=15319)

  http://logs.openstack.org/63/62463/3/gate/gate-grenade-dsvm/4a94d81/logs/tempest.txt.gz#_2013-12-19_06_42_11_519

  While another process deletes one:
  http://logs.openstack.org/63/62463/3/gate/gate-grenade-dsvm/4a94d81/logs/tempest.txt.gz#_2013-12-19_06_42_11_510
  2013-12-19 06:42:11.510 15315 INFO tempest.common.rest_client [-] Request: DELETE "http://127.0.0.1:8774/v2/c64e11f31d25473b91f7e1124f41f2a1/os-security-groups/27"
  (pid=15315)

  The test case failed on the list-ALL-security-groups request, which was
  issued closely in time to a security group deletion:
  '<itemNotFound><message>Security group 27 not found.</message></itemNotFound>'

  $ nova secgroup-list --all-tenants 1  # is the cli equivalent of the failing
  request; this call should not fail with 'Security group 27 not found.'
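
  A sketch of the expected tolerant behaviour (toy in-memory store, not the
  actual nova code): listing should skip groups deleted by a concurrent
  request instead of failing the whole call.

      def list_all_security_groups(store, group_ids):
          groups = []
          for gid in group_ids:
              try:
                  groups.append(store[gid])
              except KeyError:  # stands in for SecurityGroupNotFound
                  continue      # deleted by a concurrent request
          return groups

      store = {26: {'name': 'default'}, 27: {'name': 'doomed'}}
      ids = [26, 27]
      del store[27]  # concurrent DELETE between enumeration and fetch
      assert list_all_security_groups(store, ids) == [{'name': 'default'}]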

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269418] Re: [OSSA 2014-017] nova rescue doesn't put VM into RESCUE status on vmware (CVE-2014-2573)

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269418

Title:
  [OSSA 2014-017] nova rescue doesn't put VM into RESCUE status on
  vmware (CVE-2014-2573)

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  nova rescue of a VM on VMware will create an additional VM ($ORIGINAL_ID-
  rescue), but after that, the original VM has status ACTIVE. This leads
  to:

  [root@jhenner-node ~(keystone_admin)]# nova unrescue foo
  ERROR: Cannot 'unrescue' while instance is in vm_state stopped (HTTP 409) 
(Request-ID: req-792cabb2-2102-47c5-9b15-96c74a9a4819)

  The original can be deleted, which then causes leaking of the -rescue
  VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284709] Re: nova evacuate fails with neutron

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284709

Title:
  nova evacuate fails with neutron

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  When I deploy nova with shared storage and with
  neutron/ML2/linuxbridge, I get an error when I want to use nova
  evacuate.

  command:
  # nova evacuate 1fa486f3-259c-4e1e-ae82-8b52606f1efd devstack2 --on-shared-storage

  here are the logs on the compute node (devstack2) which will host the
  VM after the evacuation:

  2014-02-25 16:08:39.918 ERROR nova.compute.manager 
[req-8307b6e1-b6ee-423e-9baf-d05a0ac5e91d admin admin] [ins
  tance: 
  1fa486f3-259c-4e1e-ae82-8b52606f1efd] Setting instance vm_state to ERROR
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] Traceback (most recent call last):
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/nova/nova/compute/manager.py, line 5261, in 
_error_out_instance_on_exception
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] yield
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/nova/nova/compute/manager.py, line 2267, in rebuild_instance
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] extra_usage_info=extra_usage_info)
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/nova/nova/conductor/api.py, line 271, in notify_usage_exists
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] system_metadata, extra_usage_info)
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/nova/nova/conductor/rpcapi.py, line 428, in notify_usage_exists
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] extra_usage_info=extra_usage_info_p)
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/oslo.messaging/oslo/messaging/rpc/client.py, line 150, in call
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] wait_for_reply=True, timeout=timeout)
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/oslo.messaging/oslo/messaging/transport.py, line 87, in _send
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] timeout=timeout)
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqpdriver.py, line 393, in 
send
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] return self._send(target, ctxt, 
message, wait_for_reply, timeout)
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqpdriver.py, line 386, in 
_send
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] raise result
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] TypeError: 'NoneType' object is not 
iterable
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] Traceback (most recent call last):
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] 
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py, line 133, in 
_dispatch_and_reply
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] incoming.message))
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd] 
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   File 
/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py, line 176, in 
_dispatch
  2014-02-25 16:08:39.918 12410 TRACE nova.compute.manager [instance: 
1fa486f3-259c-4e1e-ae82-8b52606f1efd]   

[Yahoo-eng-team] [Bug 1321082] Re: libvirt driver detach_volume fails after migration failure

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321082

Title:
  libvirt driver detach_volume fails after migration failure

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  When a VM with an attached iSCSI disk fails to migrate, the rollback
  method does not detach the disk from the target host. What happens is
  that _lookup_by_name() fails, since the VM does not exist on the target
  host. In detach_volume(), this is supposed to log a warning when the
  expected error code is returned, instead of raising the exception.
  However, this is not happening, because _lookup_by_name() raises an
  InstanceNotFound exception rather than a libvirt.libvirtError
  exception. So we also need to catch the InstanceNotFound exception, so
  that detach_volume() can continue to execute as expected.
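
  A minimal sketch of the fix described above (helper names illustrative,
  not the exact committed change):

      from nova import exception
      from nova.openstack.common import log as logging

      LOG = logging.getLogger(__name__)

      def detach_volume(self, connection_info, instance_name, mountpoint):
          try:
              virt_dom = self._lookup_by_name(instance_name)
              self._detach_from_domain(virt_dom, connection_info, mountpoint)
          except exception.InstanceNotFound:
              # the domain never landed on this host (failed migration):
              # warn and fall through so the volume connection is still
              # torn down below
              LOG.warn("Detaching volume from unknown instance")
          self._disconnect_volume(connection_info, mountpoint)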

  Here's the exception log that I have:

  2014-05-16 16:30:22.328 41419 WARNING nova.compute.manager 
[req-3db28fed-c287-4b41-ac95-9a37a619c75c 0 4be9915c10c8426cbfe948940f7c8af1] 
[instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] Detaching volume from unknown 
instance
  2014-05-16 16:30:22.331 41419 ERROR nova.compute.manager 
[req-3db28fed-c287-4b41-ac95-9a37a619c75c 0 4be9915c10c8426cbfe948940f7c8af1] 
[instance: 3e1d0d56-3370-4d05-8210-0485fa31757c] Failed to detach volume 
98a940e5-051f-4d0f-a8c7-859a5079d95e from /dev/vdb
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c] Traceback (most recent call last):
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 4218, in 
_detach_volume
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c] encryption=encryption)
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 1356, in 
detach_volume
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c] virt_dom = 
self._lookup_by_name(instance_name)
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 3477, in 
_lookup_by_name
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c] raise 
exception.InstanceNotFound(instance_id=instance_name)
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c] InstanceNotFound: Instance 
rhel65_113-3e1d0d56-0002 could not be found.
  2014-05-16 16:30:22.331 41419 TRACE nova.compute.manager [instance: 
3e1d0d56-3370-4d05-8210-0485fa31757c]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317180] Re: Hyper-v fails to attach volumes when using v1 volume utilities

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317180

Title:
  Hyper-v fails to attach volumes when using v1 volume utilities

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The following patch
  
https://github.com/openstack/nova/commit/4c2f36bfe006cb0ef89ca7a706223f30488a182e
  #diff-5c6ee11140977e63b54542e2ff5763d3R22 caused a regression by
  replacing eventlet.subprocess.Popen with the builtin
  subprocess.Popen (by using the nova.utils execute method) without
  changing the way the args were parsed.

  In this module, the execution args were built as whitespace-separated
  strings, which the builtin subprocess.Popen does not split into
  separate arguments, causing a "not found" error. This error is
  returned, for example, when attaching a volume, at the point where the
  iscsicli tool is used to log in to the iSCSI target or portal.
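
  A minimal illustration of the pitfall, using only the standard library
  (the iscsicli arguments are illustrative): with shell=False, a single
  whitespace-joined string is not split into arguments, so the call fails
  with a "not found" style error.

      import subprocess

      # broken: the whole string is treated as the executable name
      # subprocess.Popen("iscsicli ListTargets")

      # correct: executable and each argument as separate list elements
      proc = subprocess.Popen(["iscsicli", "ListTargets"])
      proc.wait()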

  Trace:
  http://paste.openstack.org/show/79418/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1317180/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309228] Re: [OSSA 2014-015] User gets group auth if same id (CVE-2014-0204)

2014-06-05 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1309228

Title:
  [OSSA 2014-015] User gets group auth if same id (CVE-2014-0204)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  
  If a user has the same ID as a group and that group has roles granted to it, 
the user gets the roles (even if they're not in the group).

  Note that Keystone typically assigns IDs and with uuid4 you're not
  going to get a user with the same ID as a group, but some setups use
  LDAP so the IDs come from the LDAP entries.
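
  A minimal sketch of the mechanism (hypothetical data shapes, not
  Keystone's actual code): a role lookup keyed on actor_id alone cannot
  tell a user assignment from a group assignment when the IDs collide.

      def roles_for_user(assignments, user_id, user_group_ids):
          # buggy: matches any assignment whose actor_id equals the
          # user's ID, including *group* assignments sharing that ID
          return [a['role_id'] for a in assignments
                  if a['actor_id'] == user_id
                  or a['actor_id'] in user_group_ids]

      assignments = [{'actor_id': 'suspectid', 'type': 'GroupProject',
                      'target_id': 'demo', 'role_id': 'admin'}]
      # the user is in no groups, yet inherits the group's admin role
      print(roles_for_user(assignments, 'suspectid', []))  # ['admin']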

  Here are instructions on how to recreate:

  1) Start with LDAP system (set up with devstack)

  2) Create a user with an id of suspectid

  $ ldapadd -D cn=Manager,dc=openstack,dc=org -w ofs5dac

  dn: cn=suspectid,ou=Users,dc=openstack,dc=org
  objectclass: inetorgperson
  sn: suspect
  userPassword: blkpwd

  3) Create a group with an id of suspectid

  $ ldapadd -D cn=Manager,dc=openstack,dc=org -w ofs5dac

  dn: cn=suspectid,ou=UserGroups,dc=openstack,dc=org
  objectclass: groupOfNames
  ou: suspect
  member: cn=dumb,dc=nonexistent

  $ openstack --os-identity-api=3 --os-auth-url http://localhost:5000/v3
  group list

  4) Grant a role to the group on a project

  $ openstack --os-identity-api=3 --os-auth-url http://localhost:5000/v3
  role add --group suspect --project demo admin

  5) Get a token as the user, notice that the user has the group's
  access.

  
  $ curl -s \
    -H "Content-Type: application/json" \
    -d '
  { "auth": {
      "passwordCredentials": {
        "username": "suspect",
        "password": "blkpwd"
      },
      "tenantName": "demo"
    }
  }' \
    http://localhost:35357/v2.0/tokens | python -m json.tool

  ---

  "roles": [
      {
          "name": "admin"
      }
  ],

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1309228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309239] Re: _heal_instance_info_cache should include info_cache as an expected_attrs

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309239

Title:
  _heal_instance_info_cache should include info_cache as an expected_attrs

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The _heal_instance_info_cache() periodic task is used to heal the cache
  accessed via instance['info_cache'], so we should always load the
  info_cache and avoid the extra query to get it when accessed.
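
  A minimal sketch, assuming the Instance object API of this era (the
  call site is illustrative): loading info_cache up front avoids a second
  DB round trip the first time instance.info_cache is touched.

      from nova.objects import instance as instance_obj

      instance = instance_obj.Instance.get_by_uuid(
          context, instance_uuid, expected_attrs=['info_cache'])
      network_info = instance.info_cache.network_info  # no lazy-load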

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1309239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309067] Re: Compute _init_instance changes instance from object to dict causing an AttributeError

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309067

Title:
  Compute _init_instance changes instance from object to dict causing an
  AttributeError

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  On a compute restart sometimes the following error occurs and the
  compute fails to start:

  2014-04-17 03:36:09.527 26115 ERROR nova.openstack.common.threadgroup [-] 
'dict' object has no attribute 'task_state'
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
 line 117, in wait
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
 line 49, in wait
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 168, in wait
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/eventlet/event.py, line 
116, in wait
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/eventlet/hubs/hub.py, 
line 187, in switch
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/eventlet/greenthread.py,
 line 194, in main
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/nova/openstack/common/service.py,
 line 480, in run_service
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/nova/service.py, line 
177, in start
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/nova/compute/manager.py,
 line 871, in init_host
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
self._init_instance(context, instance)
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup   File 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/nova/compute/manager.py,
 line 731, in _init_instance
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup if 
instance.task_state == task_states.DELETING:
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup 
AttributeError: 'dict' object has no attribute 'task_state'
  2014-04-17 03:36:09.527 26115 TRACE nova.openstack.common.threadgroup
  2014-04-17 04:01:15.145 27377 DEBUG nova.servicegroup.api [-] ServiceGroup 
driver defined as an instance of db __new__ 
/opt/rackstack/615.9/nova/lib/python2.6/site-packages/nova/servicegroup/api.py:62

  The issue appears to lie with the line immediately above, which
  performs an instance update and overwrites the instance object with a
  dict:
  instance = self._instance_update(context, instance.uuid, task_state=None)
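
  A minimal sketch of the usual remedy (illustrative, not the exact
  committed fix): mutate and save the object instead of replacing it
  with the dict returned by _instance_update().

      # before (buggy): the dict clobbers the Instance object
      # instance = self._instance_update(context, instance.uuid,
      #                                  task_state=None)

      # after: keep the object so instance.task_state keeps working
      instance.task_state = None
      instance.save()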

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1309067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315138] Re: stable backports failing with sub_unit.log was > 50 MB of uncompressed data!!!

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1315138

Title:
  stable backports failing with sub_unit.log was > 50 MB of
  uncompressed data!!!

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Core Infrastructure:
  Invalid

Bug description:
  Since this merged today: https://review.openstack.org/#/c/85797/2

  We have jobs failing in the stable branches which are backports:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiKyBlY2hvICdzdWJfdW5pdC5sb2cgd2FzID4gNTAgTUIgb2YgdW5jb21wcmVzc2VkIGRhdGEhISEnXCIgQU5EIHRhZ3M6Y29uc29sZSIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5ODk3NTA0Njc4MH0=

  Seems this should only be enforced on master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1315138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298690] Re: sqlite regexp() function doesn't behave like mysql

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298690

Title:
  sqlite regexp() function doesn't behave like mysql

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  In bug 1298494 I recently saw a case where the unit tests (using
  sqlite) behaved differently than devstack with mysql.

  The issue seems to be when we do

  filters = {'uuid': group.members, 'deleted_at': None}
  instances = instance_obj.InstanceList.get_by_filters(
      context, filters=filters)

  
  Eventually down in db/sqlalchemy/api.py we end up calling

  query = query.filter(column_attr.op(db_regexp_op)(
      str(filters[filter_name])))

  where str(filters[filter_name]) is the string 'None'.

  When using mysql, a regexp comparison of the string 'None' against a
  NULL field fails to match.

  Since sqlite doesn't have its own regexp function we provide one in
  openstack/common/db/sqlalchemy/session.py.  In the buggy case we end
  up calling it as regexp('None', None), where the types are unicode
  and NoneType.  However, we end up converting the second arg to text
  type before calling reg.search() on it, so it matches.

  This is a bug, we want the unit tests to behave like the real system.
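
  A minimal sketch of a MySQL-compatible replacement, assuming it is
  registered on the sqlite connection as shown (names illustrative):

      import re
      import sqlite3

      def regexp(pattern, value):
          # MySQL's 'expr REGEXP pattern' yields NULL (no match) when
          # either operand is NULL, so mirror that instead of coercing
          # None to text before matching
          if pattern is None or value is None:
              return False
          return re.search(str(pattern), str(value)) is not None

      conn = sqlite3.connect(":memory:")
      conn.create_function("regexp", 2, regexp)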

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298494] Re: nova server-group-list doesn't show members of the group

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298494

Title:
  nova server-group-list doesn't show members of the group

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  With current devstack I ensured I had GroupAntiAffinityFilter in
  scheduler_default_filters in /etc/nova/nova.conf, restarted nova-
  scheduler, then ran:

  
  nova server-group-create --policy anti-affinity antiaffinitygroup

  
  nova server-group-list
  
  +--------------------------------------+-------------------+--------------------+---------+----------+
  | Id                                   | Name              | Policies           | Members | Metadata |
  +--------------------------------------+-------------------+--------------------+---------+----------+
  | 5d639349-1b77-43df-b13f-ed586e73b3ac | antiaffinitygroup | [u'anti-affinity'] | []      | {}       |
  +--------------------------------------+-------------------+--------------------+---------+----------+

  
  nova boot --flavor=1 --image=cirros-0.3.1-x86_64-uec --hint 
group=5d639349-1b77-43df-b13f-ed586e73b3ac cirros0

  nova list
  
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | ID                                   | Name    | Status | Task State | Power State | Networks           |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | a7a3ec40-85d9-4b72-a522-d1c0684f3ada | cirros0 | ACTIVE | -          | Running     | private=10.4.128.2 |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+


  Then I tried listing the groups, and it didn't print the newly-booted
  instance as a member:

  nova server-group-list
  
  +--------------------------------------+-------------------+--------------------+---------+----------+
  | Id                                   | Name              | Policies           | Members | Metadata |
  +--------------------------------------+-------------------+--------------------+---------+----------+
  | 5d639349-1b77-43df-b13f-ed586e73b3ac | antiaffinitygroup | [u'anti-affinity'] | []      | {}       |
  +--------------------------------------+-------------------+--------------------+---------+----------+

  
  Rerunning the nova command with --debug we see that the problem is in nova, 
not novaclient:

  RESP BODY: {"server_groups": [{"members": [], "metadata": {}, "id":
  "5d639349-1b77-43df-b13f-ed586e73b3ac", "policies": ["anti-affinity"],
  "name": "antiaffinitygroup"}]}

  
  Looking at the database, we see that the instance is actually tracked as a 
member of the list (along with two other instances that haven't been marked as 
deleted yet, which is also a bug I think).

  mysql> select * from instance_group_member;
  +---------------------+------------+------------+---------+----+--------------------------------------+----------+
  | created_at          | updated_at | deleted_at | deleted | id | instance_id                          | group_id |
  +---------------------+------------+------------+---------+----+--------------------------------------+----------+
  | 2014-03-26 20:19:14 | NULL       | NULL       |       0 |  1 | d289502b-57fc-46f6-b39d-66a1db3a9ebc |        1 |
  | 2014-03-26 20:25:04 | NULL       | NULL       |       0 |  2 | e07f1f15-4e93-4845-9203-bf928c196a78 |        1 |
  | 2014-03-26 20:35:11 | NULL       | NULL       |       0 |  3 | a7a3ec40-85d9-4b72-a522-d1c0684f3ada |        1 |
  +---------------------+------------+------------+---------+----+--------------------------------------+----------+
  3 rows in set (0.00 sec)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300810] Re: test_availability_zone_detail v3 API sample test is racey

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300810

Title:
  test_availability_zone_detail v3 API sample test is racey

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The failure occurs when tests are run on a slower machine.  Or it can
  be reproduced by introducing a time.sleep(5) at the start of the
  
nova/tests/integrated/v3/test_availability_zone.py:test_availability_zone_detail
  method.

  11:48:26 raise NoMatch('\n'.join(error))
  11:48:26 NoMatch: Extra list items in template:
  11:48:26 {u'zone_name': u'internal', u'hosts': {u'network': {u'nova-network': 
{u'active': True, u'available': True, u'updated_at': None}}, u'conductor': 
{u'nova-conductor': {u'active': True, u'available': True, u'updated_at': 
None}}, u'cells': {u'nova-cells': {u'active': True, u'available': True, 
u'updated_at': None}}, u'cert': {u'nova-cert': {u'active': True, u'available': 
True, u'updated_at': None}}, u'scheduler': {u'nova-scheduler': {u'active': 
True, u'available': True, u'updated_at': None}}, u'consoleauth': 
{u'nova-consoleauth': {u'active': True, u'available': True, u'updated_at': 
None}}}, u'zone_state': {u'available': True}}
  11:48:26 {u'zone_name': u'nova', u'hosts': {u'compute': {u'nova-compute': 
{u'active': True, u'available': True, u'updated_at': None}}}, u'zone_state': 
{u'available': True}}
  11:48:26 Extra list items in Response:
  11:48:26 {u'zone_name': u'internal', u'hosts': {u'network': {u'nova-network': 
{u'active': True, u'available': True, u'updated_at': 
u'2014-03-31T16:01:14.130232'}}, u'conductor': {u'nova-conductor': {u'active': 
True, u'available': True, u'updated_at': u'2014-03-31T16:01:13.843489'}}, 
u'cells': {u'nova-cells': {u'active': True, u'available': True, u'updated_at': 
None}}, u'cert': {u'nova-cert': {u'active': True, u'available': True, 
u'updated_at': u'2014-03-31T16:01:13.868539'}}, u'scheduler': 
{u'nova-scheduler': {u'active': True, u'available': True, u'updated_at': 
u'2014-03-31T16:01:14.397896'}}, u'consoleauth': {u'nova-consoleauth': 
{u'active': True, u'available': True, u'updated_at': 
u'2014-03-31T16:01:13.872836'}}}, u'zone_state': {u'available': True}}
  11:48:26 {u'zone_name': u'nova', u'hosts': {u'compute': {u'nova-compute': 
{u'active': True, u'available': True, u'updated_at': 
u'2014-03-31T16:01:13.863846'}}}, u'zone_state': {u'available': True}}

  Once the services are started they will eventually update their
  updated_at time, but it doesn't happen immediately.  So the sample
  check fails because it's expecting updated_at to be None but it could
  have a timestamp value.
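
  A minimal sketch of a race-tolerant template entry (hypothetical
  substitution hook in the sample test base class): accept either null
  or a timestamp for updated_at instead of pinning it to None.

      # updated_at may be null or already stamped when compared
      subs = {'updated_at':
              r'(null|"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+")'}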

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289627] Re: VMware NoPermission faults do not log what permission was missing

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289627

Title:
  VMware NoPermission faults do not log what permission was missing

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in Oslo VMware library for OpenStack projects:
  Fix Committed

Bug description:
  NoPermission object has a privilegeId that tells us which permission
  the user did not have. Presently the VMware nova driver does not log
  this data. This is very useful for debugging user permissions problems
  on vCenter or ESX.

  
http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.fault.NoPermission.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1289627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280532] Re: Detach volume fails with Unexpected KeyError in EC2 interface.

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280532

Title:
  Detach volume fails with Unexpected KeyError in EC2 interface.

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Detach volume fails with an unexpected KeyError in the EC2 interface
  when I detach a volume in 'attaching' status.

  A volume in 'attaching' status doesn't contain an instance_uuid
  property, so a KeyError will be raised in the following
  function.

  def _get_instance_from_volume(self, context, volume):
      if volume['instance_uuid']:
          ..

  Attaching volume dict:

  {
'status': u'attaching',
'volume_type_id': u'None',
'display_name': None,
'attach_time': '',
'availability_zone': u'nova',
'created_at': u'2014-02-13T16: 50: 53.620080',
'attach_status': 'detached',
'display_description': None,
'volume_metadata': {

},
'snapshot_id': None,
'mountpoint': '',
'id': u'99d118ee-3666-4983-8825-f8c096bccbd1',
'size': 1
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1280532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279199] Re: vmwareapi: unrescue instance failed due to can't detach disk from running instance

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279199

Title:
  vmwareapi: unrescue instance failed due to can't detach disk from
  running instance

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  I use an OpenStack Icehouse build. When unrescuing a rescued instance
  on the vCenter driver, the instance goes to ERROR status, and nova show
  reports the error below:
  {u'message': u'The attempted operation cannot be performed in the current
  state (Powered on).', u'code': 500, u'details': u'
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 258, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2797, in unrescue_instance
      network_info)
    File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 675, in unrescue
      _vmops.unrescue(instance)
    File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 1132, in unrescue
      self._volumeops.detach_disk_from_vm(vm_rescue_ref, r_instance, device)
    File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/volumeops.py", line 129, in detach_disk_from_vm
      self._session._wait_for_task(instance_uuid, reconfig_task)
    File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 890, in _wait_for_task
      ret_val = done.wait()
    File "/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in wait
      return hubs.get_hub().switch()
    File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 187, in switch
      return self.greenlet.switch()'}
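
  A minimal sketch of the usual remedy (helper names illustrative):
  vSphere refuses to detach a disk from a powered-on VM, so the rescue
  VM must be powered off before the reconfigure task is issued.

      def unrescue(self, vm_rescue_ref, r_instance, device):
          # power the -rescue VM off first; detaching from a running VM
          # fails with "cannot be performed in the current state
          # (Powered on)"
          self._power_off_vm(vm_rescue_ref)
          self._volumeops.detach_disk_from_vm(vm_rescue_ref,
                                              r_instance, device)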

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1249519] Re: VMware: deleting instance snapshot too soon leaves instance in Image Uploading state

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249519

Title:
  VMware: deleting instance snapshot too soon leaves instance in Image
  Uploading state

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  branch: stable/havana
  driver: VMwareVCDriver

  When using the nova VMwareVCDriver, the following scenario will cause
  an instance to be stuck in Image Uploading state:

  1. Create an instance
  2. Snapshot the instance
  3. While the image is in Queuing state, immediately delete the image
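
  A minimal defensive sketch of the snapshot upload path (hypothetical
  helper names, not the committed fix): if the Glance image disappears
  mid-upload, reset the instance's task state instead of leaving it
  stuck in Image Uploading.

      from nova import exception

      def snapshot(self, context, instance, image_id):
          try:
              self._upload_vmdk_to_glance(context, instance, image_id)
          except exception.ImageNotAuthorized:
              # the image was deleted while still queued; clear the
              # task state so the instance is not stuck uploading
              instance.task_state = None
              instance.save()
              raise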

  The screen-n-cpu.log shows:

  2013-11-08 14:45:29.334 DEBUG glanceclient.common.http [-] curl -i -X PUT -H 
'X-Service-Catalog: [{endpoints: [{adminURL: 
http://172.30.0.3:8776/v1/61df65834f494153af76939ffbf5e1a0;, region: 
RegionOne, internalURL: 
http://172.30.0.3:8776/v1/61df65834f494153af76939ffbf5e1a0;, id: 
536fa98e69a544bca1086f07acdf7663, publicURL: 
http://172.30.0.3:8776/v1/61df65834f494153af76939ffbf5e1a0}], 
endpoints_links: [], type: volume, name: cinder}]' -H 
'X-Identity-Status: Confirmed' -H 'X-Auth-Token: 
bd579bd82586cca09e43c975944ef24d' -H 'x-image-meta-property-owner_id: 
61df65834f494153af76939ffbf5e1a0' -H 'x-image-meta-container_format: bare' -H 
'Transfer-Encoding: chunked' -H 'x-glance-registry-purge-props: true' -H 
'X-Tenant-Id: 61df65834f494153af76939ffbf5e1a0' -H 'User-Agent: 
python-glanceclient' -H 'x-image-meta-property-vmware_image_version: 1' -H 
'x-image-meta-property-vmware_adaptertype: lsiLogic' -H 'X-Roles: admin' -H 
'X-User-Id: 65aedf8343994f329508a502518a
 7a0f' -H 'x-image-meta-is_public: false' -H 
'x-image-meta-property-vmware_ostype: otherGuest' -H 'x-image-meta-size: 
41125888' -H 'Content-Type: application/octet-stream' -H 
'x-image-meta-disk_format: vmdk' -H 'x-image-meta-name: ax1_snap' -d 
'ThreadSafePipe maxsize=10' 
http://172.30.0.3:9292/v1/images/6cf638ec-844e-41f1-8597-c696a2d946da from 
(pid=15600) log_curl_request 
/opt/stack/python-glanceclient/glanceclient/common/http.py:142
  2013-11-08 14:45:30.138 DEBUG nova.openstack.common.rpc.amqp [-] Making 
synchronous call on conductor ... from (pid=15600) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
  2013-11-08 14:45:30.138 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
354436bd11ce4e07af07157381ff6147 from (pid=15600) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
  2013-11-08 14:45:30.139 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is 
940f2aa02d024f5abecaefec16d44fc5. from (pid=15600) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
  2013-11-08 14:45:30.141 DEBUG amqp [-] Closed channel #1 from (pid=15600) 
_do_close /usr/local/lib/python2.7/dist-packages/amqp/channel.py:95
  2013-11-08 14:45:30.141 DEBUG amqp [-] using channel_id: 1 from (pid=15600) 
__init__ /usr/local/lib/python2.7/dist-packages/amqp/channel.py:71
  2013-11-08 14:45:30.142 DEBUG amqp [-] Channel open from (pid=15600) _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:429
  2013-11-08 14:45:39.350 DEBUG glanceclient.common.http [-] 
  HTTP/1.1 403 Forbidden
  date: Fri, 08 Nov 2013 22:45:29 GMT
  content-length: 54
  content-type: text/plain; charset=UTF-8
  x-openstack-request-id: req-7d82c3c6-4d70-4506-8380-fe9e58b34801

  403 Forbidden

  Forbidden to update deleted image.

 
   from (pid=15600) log_http_response 
/opt/stack/python-glanceclient/glanceclient/common/http.py:152
  2013-11-08 14:45:39.351 ERROR glanceclient.common.http [-] Request returned 
failure status.
  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/eventlet/queue.py, line 107, 
in switch
  self.greenlet.switch(value)
File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 
194, in main
  result = function(*args, **kwargs)
File /opt/stack/nova/nova/virt/vmwareapi/io_util.py, line 106, in _inner
  data=self.input)
File /opt/stack/nova/nova/image/glance.py, line 395, in update
  _reraise_translated_image_exception(image_id)
File /opt/stack/nova/nova/image/glance.py, line 393, in update
  image_id, **image_meta)
File /opt/stack/nova/nova/image/glance.py, line 212, in call
  return getattr(client.images, method)(*args, **kwargs)
File /opt/stack/python-glanceclient/glanceclient/v1/images.py, line 291, 
in update
  'PUT', url, headers=hdrs, body=image_data)
File /opt/stack/python-glanceclient/glanceclient/common/http.py, line 
288, in raw_request
  return self._http_request(url, method, **kwargs)
File /opt/stack/python-glanceclient/glanceclient/common/http.py, line 
248, in _http_request
  raise exc.from_response(resp, body_str)
  ImageNotAuthorized: Not authorized for image 

[Yahoo-eng-team] [Bug 1320855] Re: sql: migration from 37 to 38 version fails

2014-06-05 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1320855

Title:
  sql: migration from 37 to 38 version fails

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released

Bug description:
  Migration from Havana to Icehouse fails with a db_sync error when
  migrating from SQL schema version 37 to 38:

  CRITICAL keystone [-] OperationalError: (OperationalError) (1005,
  "Can't create table 'keystone.assignment' (errno: 150)")
  CREATE TABLE assignment (
      type ENUM('UserProject','GroupProject','UserDomain','GroupDomain') NOT NULL,
      actor_id VARCHAR(64) NOT NULL,
      target_id VARCHAR(64) NOT NULL,
      role_id VARCHAR(64) NOT NULL,
      inherited BOOL NOT NULL,
      PRIMARY KEY (type, actor_id, target_id, role_id),
      FOREIGN KEY(role_id) REFERENCES role (id),
      CHECK (inherited IN (0, 1))
  ) ()
  2014-05-19 09:57:51.445 40373 TRACE keystone Traceback (most recent call last):
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
  2014-05-19 09:57:51.445 40373 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 191, in main
  2014-05-19 09:57:51.445 40373 TRACE keystone     CONF.command.cmd_class.main()
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 67, in main
  2014-05-19 09:57:51.445 40373 TRACE keystone     migration_helpers.sync_database_to_version(extension, version)
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration_helpers.py", line 139, in sync_database_to_version
  2014-05-19 09:57:51.445 40373 TRACE keystone     migration.db_sync(sql.get_engine(), abs_path, version=version)
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/migration.py", line 197, in db_sync
  2014-05-19 09:57:51.445 40373 TRACE keystone     return versioning_api.upgrade(engine, repository, version)
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, in upgrade
  2014-05-19 09:57:51.445 40373 TRACE keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1320855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314968] Re: Installing test-requirements fails because pysendfile.2.0.0.tar.gz cannot be found

2014-06-05 Thread Alan Pevec
** Changed in: glance/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1314968

Title:
  Installing test-requirements fails because pysendfile.2.0.0.tar.gz
  cannot be found

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance icehouse series:
  Fix Released

Bug description:
  2014-05-01 01:43:10.804 | Downloading/unpacking pysendfile==2.0.0 (from -r 
/home/jenkins/workspace/gate-glance-pep8/test-requirements.txt (line 23))
  2014-05-01 01:43:10.804 |   
http://pypi.openstack.org/openstack/pysendfile/2.0.0 uses an insecure transport 
scheme (http). Consider using https if pypi.openstack.org has it available
  2014-05-01 01:43:10.804 |   http://pypi.openstack.org/openstack/pysendfile/ 
uses an insecure transport scheme (http). Consider using https if 
pypi.openstack.org has it available
  2014-05-01 01:43:10.804 |   
http://pysendfile.googlecode.com/files/pysendfile-2.0.0.tar.gz uses an insecure 
transport scheme (http). Consider using https if pysendfile.googlecode.com has 
it available
  2014-05-01 01:43:10.804 |   HTTP error 404 while getting 
http://pysendfile.googlecode.com/files/pysendfile-2.0.0.tar.gz (from -f)
  2014-05-01 01:43:10.804 | Cleaning up...
  2014-05-01 01:43:10.804 | Exception:
  2014-05-01 01:43:10.804 | Traceback (most recent call last):
  2014-05-01 01:43:10.804 |   File 
/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py,
 line 122, in main
  2014-05-01 01:43:10.804 | status = self.run(options, args)
  2014-05-01 01:43:10.805 |   File 
/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/commands/install.py,
 line 278, in run
  2014-05-01 01:43:10.805 | requirement_set.prepare_files(finder, 
force_root_egg_info=self.bundle, bundle=self.bundle)
  2014-05-01 01:43:10.805 |   File 
/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py,
 line 1197, in prepare_files
  2014-05-01 01:43:10.805 | do_download,
  2014-05-01 01:43:10.805 |   File 
/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py,
 line 1375, in unpack_url
  2014-05-01 01:43:10.805 | self.session,
  2014-05-01 01:43:10.805 |   File 
/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/download.py,
 line 547, in unpack_http_url
  2014-05-01 01:43:10.805 | resp.raise_for_status()
  2014-05-01 01:43:10.805 |   File 
/home/jenkins/workspace/gate-glance-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/models.py,
 line 773, in raise_for_status
  2014-05-01 01:43:10.805 | raise HTTPError(http_error_msg, response=self)
  2014-05-01 01:43:10.805 | HTTPError: 404 Client Error: Not Found
  2014-05-01 01:43:10.805 | 
  2014-05-01 01:43:10.806 | Storing debug log for failure in 
/home/jenkins/.pip/pip.log
  2014-05-01 01:43:10.806 | 
  2014-05-01 01:43:10.806 | ERROR: could not install deps 
[-r/home/jenkins/workspace/gate-glance-pep8/requirements.txt, 
-r/home/jenkins/workspace/gate-glance-pep8/test-requirements.txt]

  The following fix would be submitted against this bug:
  index bef062d..986b853 100644
  --- a/test-requirements.txt
  +++ b/test-requirements.txt
  @@ -19,7 +19,6 @@ psutil>=1.1.1
   # Optional packages that should be installed when testing
   MySQL-python
   psycopg2
  --f http://pysendfile.googlecode.com/files/pysendfile-2.0.0.tar.gz
   pysendfile==2.0.0
   qpid-python
   xattr>=0.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1314968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308218] Re: keystone.tenant.list_users returns user multiple times

2014-06-05 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1308218

Title:
  keystone.tenant.list_users returns user multiple times

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released

Bug description:
  With the icehouse code base, if you call keystone v2's
  keystone.tenant.list_users(tenant_id), it returns each user repeated
  once for every role the user has in the project.

  My assignment table for my test server looks like this for one
  specific project_id:

  mysql> select * from assignment where target_id='0f031cca55704f87af9630d939c1ebd3'\G
  *** 1. row ***
   type: UserProject
   actor_id: 665cae4478fb47a1ab21eecf95ea200c
  target_id: 0f031cca55704f87af9630d939c1ebd3
role_id: dc6dbe0f687d4afb8f2634fb2a3a61c2
  inherited: 0
  *** 2. row ***
   type: UserProject
   actor_id: 665cae4478fb47a1ab21eecf95ea200c
  target_id: 0f031cca55704f87af9630d939c1ebd3
role_id: bd089cb8a31c47af9aff36e40fe8e99e
  inherited: 0
  *** 3. row ***
   type: UserProject
   actor_id: 665cae4478fb47a1ab21eecf95ea200c
  target_id: 0f031cca55704f87af9630d939c1ebd3
role_id: 9ee0b22736dd4fc480432929dfa1e899
  inherited: 0
  *** 4. row ***
   type: UserProject
   actor_id: 665cae4478fb47a1ab21eecf95ea200c
  target_id: 0f031cca55704f87af9630d939c1ebd3
role_id: 9fe2ff9ee4384b1894a90878d3e92bab
  inherited: 0
  *** 5. row ***
   type: UserProject
   actor_id: 665cae4478fb47a1ab21eecf95ea200c
  target_id: 0f031cca55704f87af9630d939c1ebd3
role_id: b804871ba2c543fdbc0e20bc0ebcd658
  inherited: 0
  5 rows in set (0.01 sec)

  So user '665cae4478fb47a1ab21eecf95ea200c' has 5 roles in project
  '0f031cca55704f87af9630d939c1ebd3'. With a keystone client connection
  to v2.0, I get the same user returned 5 times:

  tenants.list_users('0f031cca55704f87af9630d939c1ebd3')
  [<User {u'username': u'ctina', u'name': u'ctina', u'enabled': True, u'tenantId': u'0f031cca55704f87af9630d939c1ebd3', u'id': u'665cae4478fb47a1ab21eecf95ea200c', u'email': None}>,
   <User {u'username': u'ctina', u'name': u'ctina', u'enabled': True, u'tenantId': u'0f031cca55704f87af9630d939c1ebd3', u'id': u'665cae4478fb47a1ab21eecf95ea200c', u'email': None}>,
   <User {u'username': u'ctina', u'name': u'ctina', u'enabled': True, u'tenantId': u'0f031cca55704f87af9630d939c1ebd3', u'id': u'665cae4478fb47a1ab21eecf95ea200c', u'email': None}>,
   <User {u'username': u'ctina', u'name': u'ctina', u'enabled': True, u'tenantId': u'0f031cca55704f87af9630d939c1ebd3', u'id': u'665cae4478fb47a1ab21eecf95ea200c', u'email': None}>,
   <User {u'username': u'ctina', u'name': u'ctina', u'enabled': True, u'tenantId': u'0f031cca55704f87af9630d939c1ebd3', u'id': u'665cae4478fb47a1ab21eecf95ea200c', u'email': None}>]

  The Havana code calls the following:
  def list_user_ids_for_project(self, tenant_id):
      session = self.get_session()
      self.get_project(tenant_id)
      query = session.query(UserProjectGrant)
      query = query.filter(UserProjectGrant.project_id ==
                           tenant_id)
      project_refs = query.all()
      return [project_ref.user_id for project_ref in project_refs]

  class UserProjectGrant(sql.ModelBase, BaseGrant):
      __tablename__ = 'user_project_metadata'
      user_id = sql.Column(sql.String(64), primary_key=True)
      project_id = sql.Column(sql.String(64), sql.ForeignKey('project.id'),
                              primary_key=True)
      data = sql.Column(sql.JsonBlob())

  The user_project_metadata table has the roles listed as a dictionary
  inside of the 'data' column, so each user has only one entry. The
  Icehouse code calls the same list_user_ids_for_project, but it uses the
  assignment table, which has one entry for each user/project/role
  combination, leading a user to potentially have multiple entries
  per project.
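
  A minimal sketch of the usual fix against a hypothetical Assignment
  model mirroring the Icehouse table: de-duplicate actor IDs before
  returning, since a user appears once per role in a project.

      def list_user_ids_for_project(self, tenant_id):
          session = self.get_session()
          self.get_project(tenant_id)
          query = session.query(Assignment.actor_id).filter_by(
              target_id=tenant_id, type='UserProject').distinct()
          return [actor_id for (actor_id,) in query.all()]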

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1308218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293698] Re: Can't map user description using LDAP

2014-06-05 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1293698

Title:
  Can't map user description using LDAP

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released

Bug description:
  
  There's no way to set a mapping for the description attribute.

  First, there's no user_desc_attribute config option (there is a
  tenant_desc_attribute), although there doesn't need to be one, but

  Second, if you try to set
  user_additional_attribute_mapping=description:description the server
  ignores it. The log says:

    WARNING keystone.common.ldap.core [-] Invalid additional attribute
  mapping: "description:description". Value "description" must use one
  of "password", "enabled", "default_project_id", "name", "email".

  Why only allow the attributes that keystone knows about? Those
  attributes already have user_*_attribute config options anyways!

  Third, when keystone gets the users, it doesn't include the extra attr
  mapping attrs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1293698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257273] Re: Glance download fails when size is 0

2014-06-05 Thread Alan Pevec
** Changed in: glance/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1257273

Title:
  Glance download fails when size is 0

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance icehouse series:
  Fix Released

Bug description:
  Glance images are not being fetched by glance's API v1 when the size
  is 0. There are 2 things wrong with this behaviour:

  1) Active images should always be ready to be downloaded, regardless they're 
locally or remotely stored.
  2) The size shouldn't be the way to verify whether an image has some data or 
not.

  
https://git.openstack.org/cgit/openstack/glance/tree/glance/api/v1/images.py#n455

  This is happening in the API v1, but it doesn't seem to be true for
  v2.
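
  A minimal hedged sketch of the intended gate (names illustrative, not
  Glance's exact code): download readiness should key off the image
  status, not a non-zero size.

      class ImageNotReady(Exception):
          pass

      def get_image_data(image_meta, store):
          # an 'active' image must always be downloadable, even when
          # its recorded size is 0 or the data is stored remotely
          if image_meta['status'] != 'active':
              raise ImageNotReady(image_meta['id'])
          return store.get(image_meta['location'])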

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1257273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281216] Re: Keystone Havana Authentication Error using samAccountName in Active Directory

2014-06-05 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1281216

Title:
  Keystone Havana Authentication Error using samAccountName in Active
  Directory

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released

Bug description:
  When using Active Directory as the LDAP backend for Keystone, if I use
  the cn attribute for user_id_attribute and user_name_attribute,
  authentication works fine.  However, if I try to use samAccountName,
  authentication fails.  For example, keystone user-list returns the
  following error:

  Authorization Failed: An unexpected error prevented the server from
  fulfilling your request. 'name' (HTTP 500)

  and the login screen in Horizon shows: An error occurred
  authenticating. Please try again later.

  Also, the following trace is shown in the keystone.log:

  2014-02-17 06:48:37.472 8207 ERROR keystone.common.wsgi [-] 'name'
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 238, in 
__call__
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/controllers.py, line 127, in 
authenticate
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi auth_token_data, 
roles_ref=roles_ref, catalog_ref=catalog_ref)
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/manager.py, line 44, in 
_wrapper
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi return f(*args, 
**kw)
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/providers/uuid.py, line 362, 
in issue_v2_token
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi token_ref, 
roles_ref, catalog_ref)
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/providers/uuid.py, line 57, 
in format_token
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi 'name': 
user_ref['name'],
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi KeyError: 'name'
  2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi 
  2014-02-17 06:48:37.474 8207 INFO access [-] 192.168.1.128 - -
  [17/Feb/2014:11:48:37 +0000] "POST http://192.168.1.128:35357/v2.0/tokens
  HTTP/1.0" 500 150

  It appears that the user_ref has no 'name' property when I try to use
  samAccountName.  This seems to have worked in Grizzly but does not
  work in Havana.  Below are the applicable lines from the
  keystone.conf:

  [ldap]
  query_scope = sub
  url = LDAP://192.168.1.253
  user = CN=ldapuser,CN=Users,DC=mydomain,DC=net
  password = ldapuserpassword
  suffix = DC=mydomain,DC=net
  use_dumb_member = True
  dumb_member = CN=ldapuser,CN=Users,DC=mydomain,DC=net

  user_tree_dn = CN=Users,DC=mydomain,DC=net
  user_objectclass = organizationalPerson
  user_id_attribute = samAccountName
  user_name_attribute = samAccountName
  user_mail_attribute = mail
  user_enabled_attribute = userAccountControl
  user_enabled_mask = 2
  user_enabled_default = 512
  user_attribute_ignore = password,tenant_id,tenants
  user_allow_create = False
  user_allow_update = False
  user_allow_delete = False

  tenant_tree_dn = OU=Projects,OU=OpenStack,DC=mydomain,DC=net
  tenant_objectclass = organizationalUnit
  tenant_id_attribute = ou
  tenant_member_attribute = member
  tenant_name_attribute = ou
  tenant_desc_attribute = description
  tenant_enabled_attribute = extensionName
  tenant_attribute_ignore = description,businessCategory,extensionName
  tenant_allow_create = True
  tenant_allow_update = True
  tenant_allow_delete = True

  role_tree_dn = OU=Roles,OU=OpenStack,DC=mydomain,DC=net
  role_objectclass = organizationalRole
  role_id_attribute = cn
  role_name_attribute = cn
  role_member_attribute = roleOccupant
  role_allow_create = True
  role_allow_update = True
  role_allow_delete = True

  Again, if I change the user_id_attribute and the user_name_attribute
  to "cn" then everything works fine.  Please advise.  Thanks!
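
  For reference, a minimal reproduction (a hypothetical sketch based on
  the Havana-era format_token shown in the trace above, not the exact
  Keystone source) of why a missing mapped attribute surfaces as the bare
  KeyError: 'name' instead of a clearer error:

    def format_token(user_ref):
        # Indexing the dict directly blows up when the user entry built
        # from LDAP lacks the attribute mapped by user_name_attribute.
        return {'name': user_ref['name']}      # KeyError: 'name'

    def format_token_checked(user_ref):
        # A defensive variant would fail with an actionable message.
        if 'name' not in user_ref:
            raise ValueError('user entry has no name attribute; check '
                             'user_name_attribute in keystone.conf [ldap]')
        return {'name': user_ref['name']}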

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1281216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311047] Re: Cannot launch instances in French (using the modal)

2014-06-05 Thread Alan Pevec
** Changed in: horizon/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1311047

Title:
  Cannot launch instances in French (using the modal)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  I noticed this in Icehouse with Horizon translated into French: the
  Launch Instance form is broken due to a syntax error in the quota-
  handling JavaScript when strings contain a single quote.

  Patch on the way.
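
  For context, a minimal sketch (generic Python, not the actual Horizon
  patch) of the failure mode and the usual remedy: translated strings
  embedded in generated JavaScript must be escaped, for example via
  json.dumps, rather than interpolated raw:

    import json

    msg = "L'instance ne peut pas être lancée"   # translation with a quote

    broken = "alert('%s');" % msg     # alert('L'instance ...) -> JS syntax error
    safe = "alert(%s);" % json.dumps(msg)   # valid, escaped JS string literal
    print(safe)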

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1311047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315739] Re: local_settings misses policy references for services

2014-06-05 Thread Alan Pevec
** Changed in: horizon/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1315739

Title:
  local_settings misses policy references for services

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  local_settings.py example lists:
  #POLICY_FILES = {
  # 'identity': 'keystone_policy.json',
  # 'compute': 'nova_policy.json'
  #}

  
  whereas
  settings.py lists instead:
  POLICY_FILES = {
  'identity': 'keystone_policy.json',
  'compute': 'nova_policy.json',
  'volume': 'cinder_policy.json',
  'image': 'glance_policy.json',
  'orchestration': 'heat_policy.json',
  }

  I think we should hint users to customize volume, image, etc. as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1315739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308058] Re: Cannot create volume from glance image without checksum

2014-06-05 Thread Alan Pevec
** Changed in: cinder/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308058

Title:
  Cannot create volume from glance image without checksum

Status in Cinder:
  Fix Committed
Status in Cinder icehouse series:
  Fix Released
Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  It is no longer possible to create a volume from an image that does
  not have a checksum set.

  
https://github.com/openstack/cinder/commit/da13c6285bb0aee55cfbc93f55ce2e2b7d6a28f2
  - this patch removes the default of None from the getattr call.

  If this is intended it would be nice to see something more informative
  in the logs.

  2014-04-15 11:52:26.035 19000 ERROR cinder.api.middleware.fault 
[req-cf0f7b89-a9c1-4a10-b1ac-ddf415a28f24 c139cd16ac474d2184237ba837a04141 
83d5198d5f5a461798c6b843f57540d
  f - - -] Caught error: checksum
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault Traceback 
(most recent call last):
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/middleware/fault.py, line 75, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
req.get_response(self.application)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
application, catch_exc_info=False)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault app_iter 
= application(self.environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 615, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
self.app(env, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault response 
= self.app(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault resp = 
self.call_func(req, *args, **self.kwargs)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
self.func(req, *args, **kwargs)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/openstack/wsgi.py, line 895, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
content_type, body, accept)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/openstack/wsgi.py, line 943, in _process_stack
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
action_result = self.dispatch(meth, request, action_args)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/openstack/wsgi.py, line 1019, in dispatch
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
method(req=request, **action_args)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/v2/volumes.py, 

[Yahoo-eng-team] [Bug 856764] Re: RabbitMQ connections lack heartbeat or TCP keepalives

2014-06-05 Thread Alan Pevec
** Changed in: ceilometer/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/856764

Title:
  RabbitMQ connections lack heartbeat or TCP keepalives

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer icehouse series:
  Fix Released
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Triaged
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  There is currently no method built into Nova to keep connections from
  various components to RabbitMQ alive.  As a result, placing a
  stateful firewall (such as a Cisco ASA) between the endpoints can and
  does result in idle connections being terminated without either
  endpoint being aware.

  This issue can be mitigated a few different ways:

  1. Connections to RabbitMQ set socket options to enable TCP
  keepalives (see the sketch after this list).

  2. Rabbit has heartbeat functionality.  If the client requests
  heartbeats on connection, the rabbit server will regularly send
  messages to each connection with the expectation of a response.

  3. Other?
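
  For mitigation 1, a minimal sketch (generic Python socket code, not the
  actual kombu/oslo connection setup) of enabling TCP keepalives so a
  stateful firewall keeps seeing traffic on an otherwise idle AMQP socket:

    import socket

    def enable_tcp_keepalive(sock, idle=60, interval=10, count=5):
        # Ask the kernel to probe idle connections.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # The fine-grained knobs below are Linux-specific.
        if hasattr(socket, 'TCP_KEEPIDLE'):
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

    # Usage (hypothetical broker address):
    #   sock = socket.create_connection(('rabbit-host', 5672))
    #   enable_tcp_keepalive(sock)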

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/856764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218462] Re: Logging output shown when running tests

2014-06-05 Thread Alan Pevec
** Changed in: horizon/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1218462

Title:
  Logging output shown when running tests

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  When running ./run_tests.sh at the moment, logging information related
  to requests shows up in the test output:

  (Updated description)
  INFO:openstack_auth.views:Logging out user test_user
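
  A common remedy (a generic sketch, not necessarily the committed fix)
  is to raise the offending logger's level in the test settings so INFO
  lines stay out of the test output:

    import logging

    # Silence INFO chatter from openstack_auth while the suite runs.
    logging.getLogger('openstack_auth').setLevel(logging.WARNING)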

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1218462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279199] Re: vmwareapi: unrescue instance failed due to can't detach disk from running instance

2014-06-05 Thread Alan Pevec
** Changed in: nova/icehouse
   Importance: Undecided = High

** Changed in: nova/icehouse
 Assignee: (unassigned) = Gary Kotton (garyk)

** Tags removed: havana-backport-potential icehouse-backport-potential
in-stable-havana in-stable-icehouse

** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Status: New = Fix Committed

** Changed in: nova/havana
   Importance: Undecided = High

** Changed in: nova/havana
 Assignee: (unassigned) = Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279199

Title:
  vmwareapi: unrescue instance failed due to can't detach disk from
  running instance

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  I use an OpenStack Icehouse build. When unrescuing a rescued instance based
on the vCenter driver, the instance goes to ERROR status, and nova show hits
the error below:
  {u'message': u'The attempted operation cannot be performed in the current
state (Powered on).', u'code': 500, u'details': u'
    File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 258, in decorated_function
      return function(self, context, *args, **kwargs)
    File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2797, in unrescue_instance
      network_info)
    File /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 675, in unrescue
      _vmops.unrescue(instance)
    File /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py, line 1132, in unrescue
      self._volumeops.detach_disk_from_vm(vm_rescue_ref, r_instance, device)
    File /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/volumeops.py, line 129, in detach_disk_from_vm
      self._session._wait_for_task(instance_uuid, reconfig_task)
    File /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 890, in _wait_for_task
      ret_val = done.wait()
    File /usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in wait
      return hubs.get_hub().switch()
    File 

[Yahoo-eng-team] [Bug 1324755] Re: disk consumption report incorrect in host-describe and simple-tenant-usage

2014-06-05 Thread Joe Gordon
This is by design: in general, Nova tracks the amount of resources
allocated, not the amount actually being used.  This is in part to make
scheduling more repeatable. Furthermore, simple-tenant-usage is very
much a proof of concept and not something to really be used for anything
real.

What would you like to use the actual usage information for? If I
remember correctly there are plans in progress to support the model
where nova tracks actual usage and uses it for scheduling (as an
optional mode).

Overall this is more of a feature request than a bug per se.

** Changed in: nova
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324755

Title:
  disk consumption report incorrect in host-describe and simple-tenant-
  usage

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  simple-tenant-usage and host use

  resource['disk_gb'] += (instance['root_gb'] +
  instance['ephemeral_gb'])

  to report disk size; however, ephemeral_gb is the maximum value that
  can be allocated to an instance, not its current usage.
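
  A short illustration of the distinction (hypothetical values, not
  Nova's code): the first part is the allocation-based accounting the
  report quotes, while actual consumption would have to be measured on
  the filesystem:

    import os

    # Allocation-based accounting, as quoted from the report:
    resource = {'disk_gb': 0}
    instance = {'root_gb': 20, 'ephemeral_gb': 80}
    resource['disk_gb'] += (instance['root_gb'] + instance['ephemeral_gb'])

    # Measuring what is actually used needs a filesystem query, e.g. on
    # the filesystem holding the instances directory:
    st = os.statvfs('/')
    used_gb = ((st.f_blocks - st.f_bfree) * st.f_frsize) // 1024 ** 3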

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323222] Re: Nova fails display relevant error message when booting multiple instances exceeds quota

2014-06-05 Thread Joe Gordon
From what you are describing, this is a known issue that has to be fixed
with a feature.

** Changed in: nova
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323222

Title:
  Nova fails display relevant error message when booting multiple
  instances exceeds quota

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  1. Boot multiple instances at once (using the --num-instances option) with
an amount that exceeds the quota.
  2. Only the allowed amount is created.

  Actual result:
  Some instances fail to create due to quota limits, but there is no message
to inform users.

  Expected result:
  A. An error message describing the number of instances that failed to
create, or
  B. All instances fail with an appropriate message.

  More info:
  Using Horizon, the entire operation fails before creating any instances,
with the error message:
  "The requested 6 instances cannot be launched as you only have 4 of your
quota available."
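
  A minimal sketch (hypothetical names, neither Nova's nor Horizon's
  actual code) of the two behaviours contrasted above; option B is the
  all-or-nothing check Horizon already performs:

    class QuotaError(Exception):
        pass

    def check_instances_quota(requested, limit, in_use):
        available = limit - in_use
        if requested > available:
            # Option B: refuse the whole request with a clear message.
            raise QuotaError('The requested %d instances cannot be '
                             'launched as you only have %d of your '
                             'quota available.' % (requested, available))
        return requested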

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1323222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326998] [NEW] UnboundLocalError: local variable 'domain' referenced before assignment

2014-06-05 Thread Sam Morrison
Public bug reported:

I get this sometimes when an instance fails to build:


2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] Traceback (most recent call last):
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1311, in 
_build_instance
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] set_access_ip=set_access_ip)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 399, in 
decorated_function
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] return function(self, context, *args, 
**kwargs)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1723, in _spawn
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] six.reraise(self.type_, self.value, 
self.tb)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1720, in _spawn
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] block_device_info)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2253, in 
spawn
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] block_device_info)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 3651, in 
_create_domain_and_network
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] domain.destroy()
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] UnboundLocalError: local variable 
'domain' referenced before assignment
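
The pattern behind this trace is a name that is only assigned inside a try
block but referenced again in the exception path. A standalone reproduction
(a sketch, not Nova's actual _create_domain_and_network) is below:

    def create_domain_and_network(fail=True):
        try:
            if fail:
                raise RuntimeError('spawn failed before the domain was defined')
            domain = object()   # stands in for the libvirt domain handle
            return domain
        except Exception:
            # If the exception fired before 'domain' was bound, this
            # cleanup line itself raises UnboundLocalError and masks
            # the real error.
            domain.destroy()

    # The usual fix: bind 'domain = None' before the try block and guard
    # the cleanup with 'if domain is not None: domain.destroy()'.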

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326998

Title:
  UnboundLocalError: local variable 'domain' referenced before
  assignment

Status in OpenStack Compute (Nova):
  New

Bug description:
  I get this sometimes when an instance fails to build:


  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] Traceback (most recent call last):
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1311, in 
_build_instance
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] set_access_ip=set_access_ip)
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 399, in 
decorated_function
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] return function(self, context, *args, 
**kwargs)
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1723, in _spawn
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] six.reraise(self.type_, self.value, 
self.tb)
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 

[Yahoo-eng-team] [Bug 1327000] [NEW] Should not schedule dhcp-agent when dhcp port creation

2014-06-05 Thread Itsuro Oda
Public bug reported:

Intended operation:
---
1. Create a network and subnet:
  neutron net-create net
  neutron subnet-create --name sub net 20.0.0.0/24
2. Create a port for the DHCP server; the intent is to assign a specific ip address.
 neutron port-create --name dhcp --device-id reserved_dhcp_port --fixed-ip 
ip_address=20.0.0.10,subnet_id=sub net -- --device_owner network:dhcp
3. Then schedule the dhcp-agent manually:
 neutron dhcp-agent-network-add 275f7a3f-0251-485c-aea1-9913e173dd1e net
---

But currently, forced scheduling occurs at port creation (step 2), so it is
not possible to schedule manually (step 3).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327000

Title:
  Should not schedule dhcp-agent when dhcp port creation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Intended operation:
  ---
  1. Create a network and subnet:
    neutron net-create net
    neutron subnet-create --name sub net 20.0.0.0/24
  2. Create a port for the DHCP server; the intent is to assign a specific ip address.
   neutron port-create --name dhcp --device-id reserved_dhcp_port --fixed-ip 
ip_address=20.0.0.10,subnet_id=sub net -- --device_owner network:dhcp
  3. Then schedule the dhcp-agent manually:
   neutron dhcp-agent-network-add 275f7a3f-0251-485c-aea1-9913e173dd1e net
  ---

  But currently, forced scheduling occurs at port creation (step 2), so it is
  not possible to schedule manually (step 3).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1327000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326958] Re: default security groups listing doesn't work when neutron is managing security groups

2014-06-05 Thread Matt Fischer
This is a useful feature for us as an operator, so I'd like to see
option 2. I've added Neutron as an affected project. Depending on how
the discussion goes we can remove Nova as affected.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326958

Title:
  default security groups listing doesn't work when neutron is managing
  security groups

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Neutron does not seem to implement the default security groups calls,
  so when neutron is managing security groups, nova tries to pass the
  call off to it (I think) and fails. I think this bug is really against
  neutron and nova, but I'm not sure where to start. I'm not sure if
  anyone else is trying to use this call or not and maybe it should just
  be dropped. The API doesn't support it and the docs on it are wrong.

  http://docs.openstack.org/api/openstack-compute/2/content/ext-os-
  security-group-default-rules.html  (note that the example URLs in that
  doc are missing the word default)

  curl -i 'http://1.2.3.4:8774/v2/f5ad8f41cd8540ca83b6998b83bf9bba/os-
  security-group-default-rules' -X GET -H X-Auth-Project-Id: admin  -H
  Accept: application/json -H X-Auth-Token:
  487b898af056401b806786623e3c2656

  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py, line 125, in 
__call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, 
line 582, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
self.app(env, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 917, in 
__call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack content_type, body, 
accept)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 983, in 
_process_stack
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 1070, in 
dispatch
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-06-05 

[Yahoo-eng-team] [Bug 1166110] Re: Multiple nics on same network not allowed

2014-06-05 Thread Joe Gordon
This isn't in progress any more, and it is more of a new feature than a
bug per se.

** Changed in: nova
   Status: In Progress = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1166110

Title:
  Multiple nics on same network not allowed

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  I have a use case which requires a Nova VM to have multiple interfaces
  in the same Quantum network.

  The current Nova implementation does not allow that.
  This happens regardless of whether network or port is specified. 

  This behavior is enforced by the code in nova/network/quantumv2/api.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1166110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308846] Re: attach_volume doesn't work in cells when api is icehouse and compute is havana

2014-06-05 Thread Joe Gordon
Although I cannot validate this myself, I am not surprised to see this;
in our rolling-upgrade testing we didn't use cells. At this point I
don't think we can really retroactively fix this; I think the best we
can do is try to get better cells testing in the gate.

Sam, just to be clear: this breaks if you use cells?

** Changed in: nova
   Status: New = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308846

Title:
  attach_volume doesn't work in cells when api is icehouse and compute
  is havana

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  This affects Havana not Icehouse

  The method signature of attach_volume changed from Havana - Icehouse

  -def attach_volume(self, context, instance, volume_id, device=None):
  +def attach_volume(self, context, instance, volume_id, device=None,
  +  disk_bus=None, device_type=None):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327001] [NEW] brocade mechanism driver should be derived from ml2 plugin base class

2014-06-05 Thread Shiv Haris
Public bug reported:

The ml2 plugin base class functions need to be available to the mechanism
driver.

Mechanism drivers should be derived from the ml2 plugin base class.

** Affects: neutron
 Importance: Medium
 Assignee: Shiv Haris (shh)
 Status: Triaged

** Changed in: neutron
 Assignee: (unassigned) = Shiv Haris (shh)

** Changed in: neutron
   Status: New = Triaged

** Changed in: neutron
   Importance: Undecided = Medium

** Changed in: neutron
Milestone: None = juno-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327001

Title:
  brocade mechanism driver should be derived from ml2 plugin base class

Status in OpenStack Neutron (virtual network service):
  Triaged

Bug description:
  The ml2 plugin base class functions need to be available to the mechanism
  driver.

  Mechanism drivers should be derived from the ml2 plugin base class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1327001/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327005] [NEW] Need change host to host_name in host resources

2014-06-05 Thread tinytmy
Public bug reported:

Steps to reproduce, in a Python terminal:

  >>> from novaclient.v1_1 import client
  >>> ct = client.Client("admin", "password", "admin",
  ...                    "http://192.168.1.100:5000/v2.0")
  >>> ct.hosts.get("hostname")

Error:

  File "<stdin>", line 1, in <module>
  File "/opt/stack/python-novaclient/novaclient/v1_1/hosts.py", line 24, in __repr__
    return "<Host: %s>" % self.host_name
  File "/opt/stack/python-novaclient/novaclient/openstack/common/apiclient/base.py", line 464, in __getattr__
    raise AttributeError(k)
AttributeError: host_name
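
The failure is easy to reproduce generically: __repr__ references an
attribute that the API response did not populate, so merely printing the
object raises. A standalone sketch (not novaclient's actual classes):

    class Resource(object):
        def __init__(self, info):
            self.__dict__.update(info)

    class Host(Resource):
        def __repr__(self):
            # Mirrors novaclient's hosts.py line 24.
            return '<Host: %s>' % self.host_name

    print(Host({'host': 'compute-1'}))   # AttributeError: host_name

    # A tolerant variant, as the title suggests, would key off whichever
    # field the API actually returned, e.g.:
    #   getattr(self, 'host_name', getattr(self, 'host', '?'))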

** Affects: nova
 Importance: Undecided
 Assignee: tinytmy (tangmeiyan77)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = tinytmy (tangmeiyan77)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327005

Title:
  Need change host to host_name in host resources

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce, in a Python terminal:

    >>> from novaclient.v1_1 import client
    >>> ct = client.Client("admin", "password", "admin",
    ...                    "http://192.168.1.100:5000/v2.0")
    >>> ct.hosts.get("hostname")

  Error:

    File "<stdin>", line 1, in <module>
    File "/opt/stack/python-novaclient/novaclient/v1_1/hosts.py", line 24, in __repr__
      return "<Host: %s>" % self.host_name
    File "/opt/stack/python-novaclient/novaclient/openstack/common/apiclient/base.py", line 464, in __getattr__
      raise AttributeError(k)
  AttributeError: host_name

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326958] Re: default security groups listing doesn't work when neutron is managing security groups

2014-06-05 Thread Aaron Rosen
** No longer affects: neutron

** Changed in: nova
   Status: New = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326958

Title:
  default security groups listing doesn't work when neutron is managing
  security groups

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Neutron does not seem to implement the default security groups calls,
  so when neutron is managing security groups, nova tries to pass the
  call off to it (I think) and fails. I think this bug is really against
  neutron and nova, but I'm not sure where to start. I'm not sure if
  anyone else is trying to use this call or not and maybe it should just
  be dropped. The API doesn't support it and the docs on it are wrong.

  http://docs.openstack.org/api/openstack-compute/2/content/ext-os-
  security-group-default-rules.html  (note that the example URLs in that
  doc are missing the word default)

  curl -i 'http://1.2.3.4:8774/v2/f5ad8f41cd8540ca83b6998b83bf9bba/os-
  security-group-default-rules' -X GET -H X-Auth-Project-Id: admin  -H
  Accept: application/json -H X-Auth-Token:
  487b898af056401b806786623e3c2656

  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py, line 125, in 
__call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, 
line 582, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
self.app(env, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 917, in 
__call__
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack content_type, body, 
accept)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 983, in 
_process_stack
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 1070, in 
dispatch
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_group_default_rules.py,
 line 181, in index
  2014-06-05 20:31:24.643 9148 TRACE nova.api.openstack for 

[Yahoo-eng-team] [Bug 1327020] [NEW] FAIL: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern[compute, image, volume]

2014-06-05 Thread Anita Kuno
Public bug reported:

2014-06-05 23:26:45,370 Creating ssh connection to '172.24.4.2' as 'cirros' 
with public key authentication
2014-06-05 23:27:45,431 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 (timed out). Number attempts: 1. Retry after 2 seconds.
2014-06-05 23:27:50,932 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 2. Retry 
after 3 seconds.
2014-06-05 23:27:57,437 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 3. Retry 
after 4 seconds.
2014-06-05 23:28:04,940 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 4. Retry 
after 5 seconds.
2014-06-05 23:28:13,444 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 5. Retry 
after 6 seconds.
2014-06-05 23:28:22,952 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 6. Retry 
after 7 seconds.
2014-06-05 23:28:33,460 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 7. Retry 
after 8 seconds.
2014-06-05 23:28:44,972 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 8. Retry 
after 9 seconds.
2014-06-05 23:28:57,484 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 9. Retry 
after 10 seconds.
2014-06-05 23:29:10,996 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 10. Retry 
after 11 seconds.
2014-06-05 23:29:25,508 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 11. Retry 
after 12 seconds.
2014-06-05 23:29:41,020 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 12. Retry 
after 13 seconds.
2014-06-05 23:29:57,533 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 ([Errno 113] No route to host). Number attempts: 13. Retry 
after 14 seconds.
2014-06-05 23:30:15,044 Failed to establish authenticated ssh connection to 
cirros@172.24.4.2 after 13 attempts
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh Traceback (most recent 
call last):
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh   File 
tempest/common/ssh.py, line 75, in _get_ssh_connection
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh 
timeout=self.channel_timeout, pkey=self.pkey)
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/client.py, line 236, in 
connect
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh 
retry_on_signal(lambda: sock.connect(addr))
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/util.py, line 278, in 
retry_on_signal
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh return function()
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/client.py, line 236, in 
lambda
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh 
retry_on_signal(lambda: sock.connect(addr))
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh   File 
/usr/lib/python2.7/socket.py, line 224, in meth
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh return 
getattr(self._sock,name)(*args)
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh error: [Errno 113] No 
route to host
2014-06-05 23:30:15.044 1396 TRACE tempest.common.ssh

Traceback (most recent call last):
  File tempest/test.py, line 126, in wrapper
return f(self, *func_args, **func_kwargs)
  File tempest/scenario/test_volume_boot_pattern.py, line 160, in 
test_volume_boot_pattern
self._check_content_of_written_file(ssh_client_for_instance_2nd, text)
  File tempest/scenario/test_volume_boot_pattern.py, line 132, in 
_check_content_of_written_file
actual = self._get_content(ssh_client)
  File tempest/scenario/test_volume_boot_pattern.py, line 119, in _get_content
return ssh_client.exec_command('cat /tmp/text')
  File tempest/common/utils/linux/remote_client.py, line 47, in exec_command
return self.ssh_client.exec_command(cmd)
  File tempest/common/ssh.py, line 110, in exec_command
ssh = self._get_ssh_connection()
  File tempest/common/ssh.py, line 87, in _get_ssh_connection
password=self.password)
SSHTimeout: Connection to the 172.24.4.2 via SSH timed out.
User: cirros, Password: None

Full log here: http://logs.openstack.org/65/98265/1/check/check-grenade-
dsvm/666a92e/logs/testr_results.html.gz

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received 

[Yahoo-eng-team] [Bug 1327028] [NEW] add availability_zone for host show

2014-06-05 Thread tinytmy
Public bug reported:

When we get a host by hostname, the returned content does not contain
availability_zone.
I think it needs to be included.

** Affects: nova
 Importance: Undecided
 Assignee: tinytmy (tangmeiyan77)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = tinytmy (tangmeiyan77)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327028

Title:
  add availability_zone for host show

Status in OpenStack Compute (Nova):
  New

Bug description:
  When we get a host by hostname, the returned content does not contain
availability_zone.
  I think it needs to be included.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244457] Re: ServiceCatalogException: Invalid service catalog service: compute

2014-06-05 Thread Attila Fazekas
message:"ServiceCatalogException" AND tags:"horizon_error.txt" has 11 hits
in the last 7 days.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU2VydmljZUNhdGFsb2dFeGNlcHRpb25cIiBBTkQgdGFnczpcImhvcml6b25fZXJyb3IudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDIwMjgwOTM2ODUsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

** Changed in: horizon
   Status: Expired = New

** Also affects: grenade
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1244457

Title:
  ServiceCatalogException: Invalid service catalog service: compute

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the following review - https://review.openstack.org/#/c/53712/

  We failed the tempest tests on the dashboard scenario tests for the pg 
version of the job: 
  2013-10-24 21:26:00.445 | 
==
  2013-10-24 21:26:00.445 | FAIL: 
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]
  2013-10-24 21:26:00.445 | 
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]
  2013-10-24 21:26:00.445 | 
--
  2013-10-24 21:26:00.446 | _StringException: Empty attachments:
  2013-10-24 21:26:00.446 |   pythonlogging:''
  2013-10-24 21:26:00.446 |   stderr
  2013-10-24 21:26:00.446 |   stdout
  2013-10-24 21:26:00.446 | 
  2013-10-24 21:26:00.446 | Traceback (most recent call last):
  2013-10-24 21:26:00.446 |   File 
tempest/scenario/test_dashboard_basic_ops.py, line 73, in test_basic_scenario
  2013-10-24 21:26:00.447 | self.user_login()
  2013-10-24 21:26:00.447 |   File 
tempest/scenario/test_dashboard_basic_ops.py, line 64, in user_login
  2013-10-24 21:26:00.447 | self.opener.open(req, urllib.urlencode(params))
  2013-10-24 21:26:00.447 |   File /usr/lib/python2.7/urllib2.py, line 406, 
in open
  2013-10-24 21:26:00.447 | response = meth(req, response)
  2013-10-24 21:26:00.447 |   File /usr/lib/python2.7/urllib2.py, line 519, 
in http_response
  2013-10-24 21:26:00.447 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.448 |   File /usr/lib/python2.7/urllib2.py, line 438, 
in error
  2013-10-24 21:26:00.448 | result = self._call_chain(*args)
  2013-10-24 21:26:00.448 |   File /usr/lib/python2.7/urllib2.py, line 378, 
in _call_chain
  2013-10-24 21:26:00.448 | result = func(*args)
  2013-10-24 21:26:00.448 |   File /usr/lib/python2.7/urllib2.py, line 625, 
in http_error_302
  2013-10-24 21:26:00.448 | return self.parent.open(new, 
timeout=req.timeout)
  2013-10-24 21:26:00.448 |   File /usr/lib/python2.7/urllib2.py, line 406, 
in open
  2013-10-24 21:26:00.449 | response = meth(req, response)
  2013-10-24 21:26:00.449 |   File /usr/lib/python2.7/urllib2.py, line 519, 
in http_response
  2013-10-24 21:26:00.449 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.449 |   File /usr/lib/python2.7/urllib2.py, line 438, 
in error
  2013-10-24 21:26:00.449 | result = self._call_chain(*args)
  2013-10-24 21:26:00.449 |   File /usr/lib/python2.7/urllib2.py, line 378, 
in _call_chain
  2013-10-24 21:26:00.449 | result = func(*args)
  2013-10-24 21:26:00.450 |   File /usr/lib/python2.7/urllib2.py, line 625, 
in http_error_302
  2013-10-24 21:26:00.450 | return self.parent.open(new, 
timeout=req.timeout)
  2013-10-24 21:26:00.450 |   File /usr/lib/python2.7/urllib2.py, line 406, 
in open
  2013-10-24 21:26:00.450 | response = meth(req, response)
  2013-10-24 21:26:00.450 |   File /usr/lib/python2.7/urllib2.py, line 519, 
in http_response
  2013-10-24 21:26:00.450 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.450 |   File /usr/lib/python2.7/urllib2.py, line 444, 
in error
  2013-10-24 21:26:00.451 | return self._call_chain(*args)
  2013-10-24 21:26:00.451 |   File /usr/lib/python2.7/urllib2.py, line 378, 
in _call_chain
  2013-10-24 21:26:00.451 | result = func(*args)
  2013-10-24 21:26:00.451 |   File /usr/lib/python2.7/urllib2.py, line 527, 
in http_error_default
  2013-10-24 21:26:00.451 | raise HTTPError(req.get_full_url(), code, msg, 
hdrs, fp)
  2013-10-24 21:26:00.451 | HTTPError: HTTP Error 500: INTERNAL SERVER ERROR

  The horizon logs have the following error info:

  [Thu Oct 24 21:18:43 2013] [error] Internal Server Error: /project/
  [Thu Oct 24 21:18:43 2013] [error] Traceback (most recent call last):
  [Thu Oct 24 21:18:43 2013] [error]   File 
/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py, line 
115, in get_response
  [Thu Oct 24 21:18:43 2013] [error] 

[Yahoo-eng-team] [Bug 1210598] Re: Over quota errors originating from Neutron result in 500 error in Nova

2014-06-05 Thread jichenjc
** No longer affects: python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1210598

Title:
  Over quota errors originating from Neutron result in 500 error in Nova

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When using the floating IP and SecGroup extensions in Nova with
  Neutron, and a quota limit is exceeded in Neutron, the exception from
  the Neutron client results in Nova generating a 500 error.

  This is unhelpful to the user, and not consistent with nova-networking.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1210598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1201717] Re: replacing python-memcached with pylibmc in token.backends.memcache

2014-06-05 Thread Morgan Fainberg
This is no longer needed, as the dogpile KVS backend can support pylibmc
directly and BMemcached as well.

** Changed in: keystone
   Status: Triaged = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1201717

Title:
  replacing python-memcached with pylibmc in token.backends.memcache

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  I would suggest replacing python-memcached with pylibmc in
  token.backends.memcache. pylibmc is a lot more performant than
  python-memcached. Are there any points speaking against the usage of
  pylibmc?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1201717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1206972] Re: Remove Expired tokens upon token create

2014-06-05 Thread Morgan Fainberg
Let's focus on the Revocation Events instead.

** Changed in: keystone
   Status: Triaged = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1206972

Title:
  Remove Expired tokens upon token create

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  Unless the deployer schedules an external job, the expired tokens are
  never removed from the token table.  For simple deployments, Keystone
  needs a way to automatically clean up the token table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1206972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267724] Re: How to enable gre/vxlan/vlan/flat network at one cloud at the same time ?

2014-06-05 Thread li,chen
** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267724

Title:
  How to enable gre/vxlan/vlan/flat network at one cloud at the same
  time  ?

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hi all,

  I’m doing some functional testing based on neutron + the ml2 plugin.

  I want my cloud to support all kinds of networks, so I can do further
  comparison tests between different network types.

  So, I create 4 networks:

  neutron net-list
  +--------------------------------------+---------+------------------------------------------------------+
  | id                                   | name    | subnets                                              |
  +--------------------------------------+---------+------------------------------------------------------+
  | 1314f7bb-9b52-4db8-a677-a751e52aad0e | gre-1   | c0774200-7aff-44bd-b122-4264368947da 20.1.100.0/24   |
  | 4e7d06f0-3547-446d-98ca-3adac416e370 | flat-1  | 83df18e1-ab2e-4983-8892-66d7699c4e9a 192.168.13.0/24 |
  | c7e26ebc-078b-4375-b313-795a89a9d8bd | vlan-1  | 22789dfc-e41e-412c-a325-10a210f176c5 30.1.100.0/24   |
  | fcd5c1a8-34ab-4e0c-9e4d-d99d168aa300 | vxlan-3 | 534558b0-c0a4-4c7e-add5-1f0abcb91cc3 40.1.100.0/24   |
  +--------------------------------------+---------+------------------------------------------------------+

  Because my machine only have 1 NIC port can be used for instances data
  network, so I start two dhcp agents:

  neutron agent-list
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | id                                   | agent_type         | host        | alive | admin_state_up |
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | 05e23822-0966-4c7c-9b16-687484385383 | Open vSwitch agent | b-compute05 | :-)   | True           |
  | 1267a2c6-f7cb-49d9-b579-18e986139878 | Open vSwitch agent | b-compute06 | :-)   | True           |
  | 55f457bf-9ffe-417b-ad50-5878c8a71aab | DHCP agent         | b-compute05 | :-)   | True           |
  | 928495d3-fac0-4fbf-b958-36c3627d9b18 | Open vSwitch agent | b-compute01 | :-)   | True           |
  | 934c721b-8c7d-4605-8e03-400676665afc | Open vSwitch agent | b-network01 | :-)   | True           |
  | bd491c90-3597-45ea-b4a0-f37610f2ed9b | DHCP agent         | b-network01 | :-)   | True           |
  | e07c8133-a3f6-4864-adb2-318f2233fe63 | Linux bridge agent | b-compute02 | xxx   | True           |
  | e1070c1e-fcb6-43fc-b2a0-a81e688b814a | Open vSwitch agent | b-compute02 | :-)   | True           |
  +--------------------------------------+--------------------+-------------+-------+----------------+

  The DHCP agent started on b-compute05 is working for networks flat-1 and
vlan-1.
  The DHCP agent started on b-network01 is working for networks gre-1 and
vxlan-3.

  The Open vSwitch agents on b-compute05 and b-compute06 are configured to
work for flat and vlan.
  The Open vSwitch agents on b-compute01 and b-compute02 are configured to
work for vxlan and gre.

  Then I start to create new instances.

  Here comes the issues:

  1. Networks will not be auto-scheduled to the right DHCP agent.
  It just randomly chooses one of the active DHCP agents, ignoring whether the
DHCP agent can work for that type of network or not.
  And no error message can be found in /var/log/neutron/dhcp-agent.log.
  Everything looks just fine.
  Only, active instances will never get IP addresses from DHCP.
  I have to assign networks to the right DHCP agent by hand.

  2. A similar issue affects nova-scheduler.
  Because nova-scheduler schedules instances without awareness of what type of
network a compute node supports, it will schedule instances to the wrong
compute node that does not actually support that kind of network.
  These instances will end up in error status, with an error message in
/var/log/nova/compute.log:

  2014-01-10 14:59:48.454 9085 ERROR nova.compute.manager 
[req-f3863a12-30e9-420d-a44a-0dd9c0bd1412 c4633e89685d41c4a2d20a2234b5025e 
45c69667e2a64c889719ef8d8e0dd098] [instance: 
d477a7c1-590b-485a-ac1a-055a6fdaca3a] Instance failed to spawn
  2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: 
d477a7c1-590b-485a-ac1a-055a6fdaca3a] Traceback (most recent call last):
  2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: 
d477a7c1-590b-485a-ac1a-055a6fdaca3a]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1413, in _spawn
  2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: 
d477a7c1-590b-485a-ac1a-055a6fdaca3a] block_device_info)
  2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: