[Yahoo-eng-team] [Bug 1229324] Re: extraneous vim editor configuration comments

2014-02-13 Thread Xurong Yang
** Also affects: python-swiftclient
   Importance: Undecided
   Status: New

** Changed in: python-swiftclient
   Status: New => In Progress

** Changed in: python-swiftclient
 Assignee: (unassigned) => Xurong Yang (idopra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1229324

Title:
  extraneous vim editor configuration comments

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  Fix Committed
Status in Python client library for heat:
  Fix Committed
Status in Python client library for Ironic:
  In Progress
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Neutron:
  In Progress
Status in Python client library for Swift:
  In Progress
Status in Trove client binding:
  In Progress
Status in OpenStack Data Processing (Savanna):
  New
Status in Storyboard database creator:
  New
Status in OpenStack Object Storage (Swift):
  In Progress
Status in Tempest:
  Fix Released
Status in Trove - Database as a Service:
  New
Status in Tuskar:
  In Progress

Bug description:
  Many of the source code files have a beginning line

  # vim: tabstop=4 shiftwidth=4 softtabstop=4

  This should be deleted.

  Many of these lines are in the ceilometer/openstack/common directory.
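  A quick way to locate the files still carrying that line is a small script
  along these lines (an illustrative sketch only, not part of any project's
  tooling):

    #!/usr/bin/env python
    # Sketch: list Python files that still contain the vim modeline comment.
    import os
    import re

    MODELINE = re.compile(r"^#\s*vim:\s*tabstop=4\s+shiftwidth=4\s+softtabstop=4\s*$")

    for root, _dirs, files in os.walk('.'):
        for name in files:
            if not name.endswith('.py'):
                continue
            path = os.path.join(root, name)
            with open(path) as f:
                if any(MODELINE.match(line) for line in f):
                    print(path)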

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1229324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279685] [NEW] Cannot connect to embedded vnc console

2014-02-13 Thread Arata Notsu
Public bug reported:

To reproduce, simply use devstack: run stack.sh, create an instance and go to 
Instance Details.
Then, the VNC console in the Console tab fails with the message "Failed to connect to 
server (code: 1006)".

When opening the console directly (by clicking "Click here to show only
console"), the console works normally.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279685

Title:
  Cannot connect to embedded vnc console

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  To reproduce, simply use devstack: run stack.sh, create an instance and go 
to Instance Details.
  Then, the VNC console in the Console tab fails with the message "Failed to connect 
to server (code: 1006)".

  When opening the console directly (by clicking "Click here to show only
  console"), the console works normally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1279685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265466] Re: Nova boot fail and raise NoValidHost when use specific aggregate

2014-02-13 Thread Qin Zhao
Hi Chen Zheng, I was not able to reproduce your problem today. Here is
what I did:

1. create one controller and two compute (zhaoqin-RHEL-GPFS-tmp and
zhaoqin-RHEL-GPFS-tmp1)

2. create two host groups

[root@zhaoqin-RHEL-GPFS-tmp chaochin]# nova aggregate-details gpfs
+----+------+-------------------+----------------------------+---------------------------------------------------+
| Id | Name | Availability Zone | Hosts                      | Metadata                                          |
+----+------+-------------------+----------------------------+---------------------------------------------------+
| 1  | gpfs | nova              | [u'zhaoqin-RHEL-GPFS-tmp'] | {u'gpfs': u'true', u'availability_zone': u'nova'} |
+----+------+-------------------+----------------------------+---------------------------------------------------+
[root@zhaoqin-RHEL-GPFS-tmp chaochin]# nova aggregate-details gluster
+----+---------+-------------------+-----------------------------+------------------------------------------------------+
| Id | Name    | Availability Zone | Hosts                       | Metadata                                             |
+----+---------+-------------------+-----------------------------+------------------------------------------------------+
| 2  | gluster | nova              | [u'zhaoqin-RHEL-GPFS-tmp1'] | {u'gluster': u'true', u'availability_zone': u'nova'} |
+----+---------+-------------------+-----------------------------+------------------------------------------------------+


3. create two flavors

[root@zhaoqin-RHEL-GPFS-tmp chaochin]# nova flavor-show gpfs
+----------------------------+--------------------+
| Property                   | Value              |
+----------------------------+--------------------+
| name                       | gpfs               |
| ram                        | 64                 |
| OS-FLV-DISABLED:disabled   | False              |
| vcpus                      | 1                  |
| extra_specs                | {u'gpfs': u'true'} |
| swap                       |                    |
| os-flavor-access:is_public | True               |
| rxtx_factor                | 1.0                |
| OS-FLV-EXT-DATA:ephemeral  | 0                  |
| disk                       | 1                  |
| id                         | 10                 |
+----------------------------+--------------------+
[root@zhaoqin-RHEL-GPFS-tmp chaochin]# nova flavor-show gluster
+----------------------------+-----------------------+
| Property                   | Value                 |
+----------------------------+-----------------------+
| name                       | gluster               |
| ram                        | 64                    |
| OS-FLV-DISABLED:disabled   | False                 |
| vcpus                      | 1                     |
| extra_specs                | {u'gluster': u'true'} |
| swap                       |                       |
| os-flavor-access:is_public | True                  |
| rxtx_factor                | 1.0                   |
| OS-FLV-EXT-DATA:ephemeral  | 0                     |
| disk                       | 1                     |
| id                         | 11                    |
+----------------------------+-----------------------+

4. VM can be scheduled to the host group specified by the flavor

[root@zhaoqin-RHEL-GPFS-tmp chaochin]# nova boot --image dcc42858-f4f8-45ef-b2ba-33a66c806afc --flavor gluster gluster-vm

[root@zhaoqin-RHEL-GPFS-tmp chaochin]# nova boot --image dcc42858-f4f8-45ef-b2ba-33a66c806afc --flavor gpfs gpfs-vm

[root@zhaoqin-RHEL-GPFS-tmp nova]# nova show gluster-vm | grep hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | zhaoqin-RHEL-GPFS-tmp1 |
[root@zhaoqin-RHEL-GPFS-tmp nova]# nova show gpfs-vm | grep hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | zhaoqin-RHEL-GPFS-tmp  |

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265466

Title:
  Nova boot fail and raise NoValidHost when use specific aggregate

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  1. Set the nova.conf 
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
 and restart compute, scheduler service
  2. Create an aggregate and add a host to it with metadata test_meta=1
  3. Modify an existing flavor to add the same key:value test_meta=1
  4. Boot an instance with this specific flavor and came across: 
  | fault| {u'message': u'NV-67B7376 No valid 
host was found. ', u'code': 500, u'details': u'  File 
/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py, line 
107, in schedule_run_instance |
  |  | raise 

[Yahoo-eng-team] [Bug 1279719] [NEW] Fail to create VM with extra_spec using ComputeCapabilitiesFilter

2014-02-13 Thread wingwj
Public bug reported:

When creating a VM with extra_specs using ComputeCapabilitiesFilter, the
scheduler will always fail to find a suitable host.



Here's the test steps:

1. Create an aggregate, and set its metadata, like ssd=True.

2. Add one host to this aggregate.

3. Create a new flavor, set extra_specs like ssd=True.

4. Create a new VM using this flavor.

5. Creation failed due to no valid hosts.

-
Let's look at the code:
In ComputeCapabilitiesFilter, hosts' capabilities are matched against the flavor's extra_specs.

Back in Grizzly, there was a periodic task named '_report_driver_status()' to 
report hosts' capabilities.
But in Havana, that task was removed, so the capabilities are never updated and 
the value is always 'None'.

So, if you boot a VM with extra_specs, every host is filtered out
and the NoValidHost exception is raised.
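A minimal sketch (not Nova's actual filter code) of why a host whose reported
capabilities are still 'None' can never satisfy a flavor extra_spec:

    # Toy illustration of the matching idea behind ComputeCapabilitiesFilter.
    # It only shows why capabilities == None always fails the filter.
    def host_passes(capabilities, extra_specs):
        if not capabilities:          # never reported -> nothing can match
            return False
        for key, required in extra_specs.items():
            if str(capabilities.get(key)) != str(required):
                return False
        return True

    print(host_passes(None, {'ssd': 'True'}))               # False -> NoValidHost
    print(host_passes({'ssd': 'True'}, {'ssd': 'True'}))    # True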

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279719

Title:
  Fail to create VM with extra_spec using ComputeCapabilitiesFilter

Status in OpenStack Compute (Nova):
  New

Bug description:
  When creating a VM with extra_specs using ComputeCapabilitiesFilter, the
  scheduler will always fail to find a suitable host.

  

  Here's the test steps:

  1. Create an aggregate, and set its metadata, like ssd=True.

  2. Add one host to this aggregate.

  3. Create a new flavor, set extra_specs like ssd=True.

  4. Create a new VM using this flavor.

  5. Creation failed due to no valid hosts.

  -
  Let's look at the code:
  In ComputeCapabilitiesFilter, hosts' capabilities are matched against the flavor's extra_specs.

  Back in Grizzly, there was a periodic task named '_report_driver_status()' to 
report hosts' capabilities.
  But in Havana, that task was removed, so the capabilities are never updated and 
the value is always 'None'.

  So, if you boot a VM with extra_specs, every host is filtered out
  and the NoValidHost exception is raised.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279722] [NEW] can't connect to keystone after prolonged use

2014-02-13 Thread Ivan-Zhu
Public bug reported:

I installed OpenStack via devstack on my Ubuntu 12.04 laptop successfully in the 
morning, and ran some Tempest tests.
But in the afternoon, the client can't connect to keystone. 
When running: nova --debug list
REQ: curl -i 'http://localhost:35357/v2.0/tokens' -X POST -H "Content-Type: 
application/json" -H "Accept: application/json" -H "User-Agent: 
python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": 
{"username": "admin", "password": "pass"}}}'

DEBUG (shell:777) HTTPConnectionPool(host='localhost', port=35357): Max retries 
exceeded with url: /v2.0/tokens (Caused by class 'socket.error': [Errno 111] 
Connection refused)
Traceback (most recent call last):
  File /opt/stack/python-novaclient/novaclient/shell.py, line 774, in main
OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
  File /opt/stack/python-novaclient/novaclient/shell.py, line 685, in main
self.cs.authenticate()
  File /opt/stack/python-novaclient/novaclient/v1_1/client.py, line 169, in 
authenticate
self.client.authenticate()
  File /opt/stack/python-novaclient/novaclient/client.py, line 353, in 
authenticate
auth_url = self._v2_auth(auth_url)
  File /opt/stack/python-novaclient/novaclient/client.py, line 440, in 
_v2_auth
return self._authenticate(url, body)
  File /opt/stack/python-novaclient/novaclient/client.py, line 453, in 
_authenticate
**kwargs)
  File /opt/stack/python-novaclient/novaclient/client.py, line 213, in 
_time_request
resp, body = self.request(url, method, **kwargs)
  File /opt/stack/python-novaclient/novaclient/client.py, line 185, in request
**kwargs)
  File /usr/local/lib/python2.7/dist-packages/requests/sessions.py, line 383, 
in request
resp = self.send(prep, **send_kwargs)
  File /usr/local/lib/python2.7/dist-packages/requests/sessions.py, line 486, 
in send
r = adapter.send(request, **kwargs)
  File /usr/local/lib/python2.7/dist-packages/requests/adapters.py, line 378, 
in send
raise ConnectionError(e)
ConnectionError: HTTPConnectionPool(host='localhost', port=35357): Max retries 
exceeded with url: /v2.0/tokens (Caused by class 'socket.error': [Errno 111] 
Connection refused)
ERROR: HTTPConnectionPool(host='localhost', port=35357): Max retries exceeded 
with url: /v2.0/tokens (Caused by class 'socket.error': [Errno 111] 
Connection refused)


When running: keystone --debug tenant-list
WARNING: Bypassing authentication using a token & endpoint (authentication 
credentials are being ignored).
DEBUG:keystoneclient.session:REQ: curl -i -X GET 
http://localhost:35357/v2.0/tenants -H "User-Agent: python-keystoneclient" -H 
"X-Auth-Token: token"
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
localhost
Unable to establish connection to http://localhost:35357/v2.0/tenants


stack@stack:/opt/stack$ ps -aux | grep keystone
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
stack    12163  0.0  0.6 217744 51076 pts/0    S+   17:39   0:00 python 
/opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf 
--debug


The Wireshark summary is (TCP packets captured via tcpdump 'tcp port 35357' 
-i lo):
1   0.00    127.0.0.1   127.0.0.1   TCP 74  42820 > openstack-id [SYN] Seq=0 Win=32792 Len=0 MSS=16396 SACK_PERM=1 TSval=6371058 
TSecr=0 WS=128
2   0.21    127.0.0.1   127.0.0.1   TCP 54  openstack-id > 42820 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0


Killing the keystone process and running it again, the issue remains.

Running ./unstack.sh and then ./stack.sh, the issue remains.

After rebooting the laptop, it's OK.
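As a sanity check that is independent of any OpenStack client, a plain TCP
connect to the admin port reproduces the same refusal (a diagnostic sketch,
not part of the original report):

    # Is anything accepting connections on keystone's admin port?
    # ECONNREFUSED here matches the [Errno 111] in the tracebacks above.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        sock.connect(('127.0.0.1', 35357))
        print('keystone admin port is accepting connections')
    except socket.error as err:
        print('cannot reach keystone: %s' % err)
    finally:
        sock.close()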

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1279722

Title:
  can't connect to keystone after prolonged use

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I installed OpenStack via devstack on my Ubuntu 12.04 laptop successfully in the 
morning, and ran some Tempest tests.
  But in the afternoon, the client can't connect to keystone. 
  When running: nova --debug list
  REQ: curl -i 'http://localhost:35357/v2.0/tokens' -X POST -H "Content-Type: 
application/json" -H "Accept: application/json" -H "User-Agent: 
python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": 
{"username": "admin", "password": "pass"}}}'

  DEBUG (shell:777) HTTPConnectionPool(host='localhost', port=35357): Max 
retries exceeded with url: /v2.0/tokens (Caused by class 'socket.error': 
[Errno 111] Connection refused)
  Traceback (most recent call last):
File /opt/stack/python-novaclient/novaclient/shell.py, line 774, in main
  OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
File /opt/stack/python-novaclient/novaclient/shell.py, line 685, in main
  self.cs.authenticate()
File 

[Yahoo-eng-team] [Bug 1268614] Re: pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-13 Thread Eoghan Glynn
** Also affects: ceilometer/havana
   Importance: Undecided
   Status: New

** Changed in: ceilometer/havana
 Assignee: (unassigned) => Ildiko Vancsa (ildiko-vancsa)

** Changed in: ceilometer/havana
   Importance: Undecided => Critical

** Changed in: ceilometer/havana
   Status: New => Fix Committed

** Changed in: ceilometer/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268614

Title:
  pep8 gating fails due to tools/config/check_uptodate.sh

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I see several changes, including
  https://review.openstack.org/#/c/63735/ , that failed pep8 gating with an
  error from the check_uptodate tool:

  
  2014-01-13 14:06:39.643 | pep8 runtests: commands[1] | 
/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh
  2014-01-13 14:06:39.649 |   /home/jenkins/workspace/gate-nova-pep8$ 
/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh 
  2014-01-13 14:06:43.581 | 2741,2746d2740
  2014-01-13 14:06:43.581 | < # (optional) indicate whether to set the X-Service-Catalog
  2014-01-13 14:06:43.581 | < # header. If False, middleware will not ask for service
  2014-01-13 14:06:43.581 | < # catalog on token validation and will not set the X-Service-
  2014-01-13 14:06:43.581 | < # Catalog header. (boolean value)
  2014-01-13 14:06:43.581 | < #include_service_catalog=true
  2014-01-13 14:06:43.582 | < 
  2014-01-13 14:06:43.582 | E: nova.conf.sample is not up to date, please run 
tools/config/generate_sample.sh
  2014-01-13 14:06:43.582 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh'

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1268614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279739] [NEW] nova.cmd.rpc_zmq_receiver:main is missing

2014-02-13 Thread yong sheng gong
Public bug reported:

I am trying to run devstack with zeromq, but zeromq failed to start.

al/bin/nova-rpc-zmq-receiver & echo $! >/opt/stack/status/stack/zeromq.pid; fg 
|| echo "zeromq failed to start" | tee /opt/stack/status/stack/zeromq.failure
[1] 25102
cd /opt/stack/nova && /usr/local/bin/nova-rpc-zmq-receiver
Traceback (most recent call last):
  File "/usr/local/bin/nova-rpc-zmq-receiver", line 6, in <module>
    from nova.cmd.rpc_zmq_receiver import main
ImportError: No module named rpc_zmq_receiver
zeromq failed to start


I found at https://github.com/openstack/nova/blob/master/setup.cfg:
nova-rpc-zmq-receiver = nova.cmd.rpc_zmq_receiver:main

but at https://github.com/openstack/nova/tree/master/nova/cmd:
we have no rpc_zmq_receiver module at all.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279739

Title:
  nova.cmd.rpc_zmq_receiver:main is missing

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am trying to run devstack with zeromq, but zeromq failed to start.

  al/bin/nova-rpc-zmq-receiver & echo $! >/opt/stack/status/stack/zeromq.pid; 
fg || echo "zeromq failed to start" | tee 
/opt/stack/status/stack/zeromq.failure
  [1] 25102
  cd /opt/stack/nova && /usr/local/bin/nova-rpc-zmq-receiver
  Traceback (most recent call last):
    File "/usr/local/bin/nova-rpc-zmq-receiver", line 6, in <module>
      from nova.cmd.rpc_zmq_receiver import main
  ImportError: No module named rpc_zmq_receiver
  zeromq failed to start

  
  I found at https://github.com/openstack/nova/blob/master/setup.cfg:
  nova-rpc-zmq-receiver = nova.cmd.rpc_zmq_receiver:main

  but at https://github.com/openstack/nova/tree/master/nova/cmd:
  we have no rpc_zmq_receiver module at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279749] [NEW] adding/removing admin to a group will crash horizon

2014-02-13 Thread Dafna Ron
Public bug reported:

I created a group and added the admin user to the group.
This caused horizon to crash.
The admin user is added to the group though, so if you try to remove it from the group,
horizon will crash as well:

2014-02-13 11:03:43,926 7731 ERROR django.request Internal Server Error: 
/dashboard/admin/groups/dbcd231f592e4f9da055e5215fac8061/manage_members/
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 
111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
  File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 38, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 86, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 54, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 38, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/django/views/generic/base.py, line 
48, in view
return self.dispatch(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/django/views/generic/base.py, line 
69, in dispatch
return handler(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/horizon/tables/views.py, line 158, in 
get
context = self.get_context_data(**kwargs)
  File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/groups/views.py,
 line 117, in get_context_data
context['group'] = self._get_group()
  File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/groups/views.py,
 line 89, in _get_group
self._group = api.keystone.group_get(self.request, group_id)
  File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/keystone.py,
 line 424, in group_get
return manager.get(group_id)
  File /usr/lib/python2.6/site-packages/keystoneclient/v3/groups.py, line 78, 
in get
group_id=base.getid(group))
  File /usr/lib/python2.6/site-packages/keystoneclient/base.py, line 70, in 
func
return f(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/keystoneclient/base.py, line 325, in 
get
self.key)
  File /usr/lib/python2.6/site-packages/keystoneclient/base.py, line 132, in 
_get
resp, body = self.client.get(url)
  File /usr/lib/python2.6/site-packages/keystoneclient/httpclient.py, line 
655, in get
return self._cs_request(url, 'GET', **kwargs)
  File /usr/lib/python2.6/site-packages/keystoneclient/httpclient.py, line 
651, in _cs_request
**kwargs)
  File /usr/lib/python2.6/site-packages/keystoneclient/httpclient.py, line 
610, in request
**request_kwargs)
  File /usr/lib/python2.6/site-packages/keystoneclient/httpclient.py, line 
124, in request
raise exceptions.from_response(resp, method, url)
Unauthorized: The request you have made requires authentication. (HTTP 401)
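For reference, a hedged python-keystoneclient v3 sketch of the group-membership
calls Horizon makes here (token, endpoint and group name are placeholders):

    # Illustrative sketch of the underlying v3 group operations.
    from keystoneclient.v3 import client

    ks = client.Client(token='ADMIN_TOKEN', endpoint='http://127.0.0.1:35357/v3')

    group = ks.groups.create(name='demo-group')
    admin = ks.users.find(name='admin')

    ks.users.add_to_group(admin, group)        # the step that precedes the crash
    print([u.name for u in ks.users.list(group=group)])
    ks.users.remove_from_group(admin, group)   # removal hits the same 401 path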

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279749

Title:
  adding/removing admin to a group will crash horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I created a group and added the admin user to the group.
  This caused horizon to crash.
  The admin user is added to the group though, so if you try to remove it from the group,
horizon will crash as well:

  2014-02-13 11:03:43,926 7731 ERROR django.request Internal Server Error: 
/dashboard/admin/groups/dbcd231f592e4f9da055e5215fac8061/manage_members/
  Traceback (most recent call last):
File /usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 
111, in get_response
  response = callback(request, *callback_args, **callback_kwargs)
File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 38, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 86, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 54, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 38, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.6/site-packages/django/views/generic/base.py, line 
48, in view
  return self.dispatch(request, *args, **kwargs)
File /usr/lib/python2.6/site-packages/django/views/generic/base.py, line 
69, in dispatch
  return handler(request, *args, **kwargs)
File /usr/lib/python2.6/site-packages/horizon/tables/views.py, line 158, 
in get
  context = self.get_context_data(**kwargs)
File 

[Yahoo-eng-team] [Bug 1279751] [NEW] Disallow create port if network is not shared and port/net tenant_id doesn't match

2014-02-13 Thread Zang MingJie
Public bug reported:

An admin user is able to create a port whose tenant is different from that of
the owning network, even when the network is not shared. This isn't intended.

The port is usable, but when trying to detach the port from an instance,
the following exception occurs:

2014-02-13 18:29:23.922 18042 ERROR nova.openstack.common.rpc.amqp 
[req-63a58698-42cb-4e83-99eb-46bb419c1b35 87009c0961304b17a93031efd3f1651d 
0867dafdae904943b4a846870b717fda] Excepti
on during message handling
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp **args)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 400, in 
decorated_function
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 90, in wrapped
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp payload)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 73, in wrapped
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 290, in 
decorated_function
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp pass
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 276, in 
decorated_function
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 341, in 
decorated_function
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 318, in 
decorated_function
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 305, in 
decorated_function
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2142, in 
terminate_instance
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp 
do_terminate_instance(instance, bdms, clean_shutdown)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, line 
248, in inner
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2134, in 
do_terminate_instance
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp 
reservations=reservations)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/hooks.py, line 105, in inner
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp rv = 
f(*args, **kwargs)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2105, in 
_delete_instance
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp 
user_id=user_id)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2077, in 
_delete_instance
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp 
clean_shutdown=clean_shutdown)
2014-02-13 18:29:23.922 18042 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1967, in 
_shutdown_instance

[Yahoo-eng-team] [Bug 1279769] [NEW] duplicated config-option registering

2014-02-13 Thread Isaku Yamahata
Public bug reported:

Some config options (interface_driver, use_namespaces, periodic_interval) are 
defined in multiple sources in an ad-hoc way.
This may cause a DuplicateOptError exception when those modules are used at the 
same time.
Right now the exception is avoided in an ad-hoc way by each executable.
Their definition/registration should be consolidated.

This is the blocker for BP of l3 agent consolidation.
https://blueprints.launchpad.net/neutron/+spec/l3-agent-consolidation
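A small self-contained sketch (plain oslo.config, not the Neutron modules in
question) of how the duplicate registration bites when two modules define the
same option with different attributes:

    # Sketch: a second, differing definition of 'interface_driver' is rejected
    # with DuplicateOptError, which is the failure consolidation would avoid.
    from oslo.config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt('interface_driver', default='driver_a')])
    try:
        CONF.register_opts([cfg.StrOpt('interface_driver', default='driver_b')])
    except cfg.DuplicateOptError as exc:
        print('second registration rejected: %s' % exc)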

** Affects: neutron
 Importance: Undecided
 Assignee: Isaku Yamahata (yamahata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Isaku Yamahata (yamahata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279769

Title:
  duplicated config-option registering

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Some config options (interface_driver, use_namespaces, periodic_interval) are 
defined in multiple sources in an ad-hoc way.
  This may cause a DuplicateOptError exception when those modules are used at the 
same time.
  Right now the exception is avoided in an ad-hoc way by each executable.
  Their definition/registration should be consolidated.

  This is the blocker for BP of l3 agent consolidation.
  https://blueprints.launchpad.net/neutron/+spec/l3-agent-consolidation

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229324] Re: extraneous vim editor configuration comments

2014-02-13 Thread Xurong Yang
** Also affects: taskflow
   Importance: Undecided
   Status: New

** Changed in: taskflow
   Status: New => In Progress

** Changed in: taskflow
 Assignee: (unassigned) => Xurong Yang (idopra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1229324

Title:
  extraneous vim editor configuration comments

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  Fix Committed
Status in Python client library for heat:
  Fix Committed
Status in Python client library for Ironic:
  In Progress
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Neutron:
  In Progress
Status in Python client library for Swift:
  In Progress
Status in Trove client binding:
  In Progress
Status in OpenStack Data Processing (Savanna):
  New
Status in Storyboard database creator:
  New
Status in OpenStack Object Storage (Swift):
  In Progress
Status in Taskflow for task-oriented systems.:
  In Progress
Status in Tempest:
  Fix Released
Status in Trove - Database as a Service:
  New
Status in Tuskar:
  In Progress

Bug description:
  Many of the source code files have a beginning line

  # vim: tabstop=4 shiftwidth=4 softtabstop=4

  This should be deleted.

  Many of these lines are in the ceilometer/openstack/common directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1229324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254174] Re: bulk-delete-floating-ip does not free used quota

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Importance: Undecided => Medium

** Changed in: nova/havana
   Status: New => In Progress

** Changed in: nova/havana
 Assignee: (unassigned) => Yaguang Tang (heut2008)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254174

Title:
  bulk-delete-floating-ip does not free used quota

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  The bulk-create-floating-ip and bulk-delete-floating-ip commands do
  not interact with floating_ip quotas.  This is by design, since
  they're for admins rather than tenants.

  However, in one case this causes a bug.  If a tenant initially
  allocated the floating IP with create-floating-ip and consumed quota,
  and the admin later deletes the floating IP with bulk-delete-floating-
  ip, the floating IP is freed but the quota is still consumed.

  So we should change bulk-delete-floating-ip to release any quota that
  was associated with those floating IP addresses.  (In many cases there
  will not be any so we need to check.)
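  A rough conceptual sketch of that check (not the actual Nova patch; the
  helper names only loosely follow nova.db.api and the QUOTAS engine):

    # Release floating-ip quota during bulk delete, but only for addresses
    # that a tenant actually allocated (and was therefore charged quota for).
    def bulk_delete_floating_ips(context, db, quotas, addresses):
        for address in addresses:
            fip = db.floating_ip_get_by_address(context, address)
            db.floating_ip_destroy(context, address)
            project_id = fip.get('project_id')
            if project_id:
                # Admin-created bulk IPs never consumed quota, so skip those.
                reservations = quotas.reserve(context, project_id=project_id,
                                              floating_ips=-1)
                quotas.commit(context, reservations, project_id=project_id)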

  This is https://bugzilla.redhat.com/show_bug.cgi?id=1029756 (but that
  bug is mostly private so it won't do people outside Red Hat much good).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260233] Re: db migration (agents constraint) fails when using ryu plugin

2014-02-13 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1254246 ***
https://bugs.launchpad.net/bugs/1254246

This is the same issue as bug 1254246.

** Tags added: db

** This bug has been marked a duplicate of bug 1254246
   somehow getting duplicate openvswitch agents for the same host

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260233

Title:
  db migration (agents constraint) fails when using ryu plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Ryu plugin does not support the agent extension yet.
  Therefore, 511471cc46b_agent_ext_model_supp.py does not include the ryu plugin, 
and the agents table is not created.
  However, 1fcfc149aca4_agents_unique_by_type_and_host.py does not consider 
this case.
  I think that migration_for_plugins of 1fcfc149aca4 should be the same as 
511471cc46b's.

  2013-12-12 01:08:44 INFO  [alembic.migration] Running upgrade e197124d4b9 -> 
1fcfc149aca4, Add a unique constraint on (agent_type, host) columns to prevent 
a race
  2013-12-12 01:08:44 condition when an agent entry is 'upserted'.
  2013-12-12 01:08:44 Traceback (most recent call last):
  2013-12-12 01:08:44   File /usr/local/bin/neutron-db-manage, line 10, in 
<module>
  2013-12-12 01:08:44 sys.exit(main())
  2013-12-12 01:08:44   File /opt/stack/neutron/neutron/db/migration/cli.py, 
line 143, in main
  2013-12-12 01:08:44 CONF.command.func(config, CONF.command.name)
  2013-12-12 01:08:44   File /opt/stack/neutron/neutron/db/migration/cli.py, 
line 80, in do_upgrade_downgrade
  2013-12-12 01:08:44 do_alembic_command(config, cmd, revision, 
sql=CONF.command.sql)
  2013-12-12 01:08:44   File /opt/stack/neutron/neutron/db/migration/cli.py, 
line 59, in do_alembic_command
  2013-12-12 01:08:44 getattr(alembic_command, cmd)(config, *args, **kwargs)
  2013-12-12 01:08:44   File 
/usr/local/lib/python2.7/dist-packages/alembic/command.py, line 124, in 
upgrade
  2013-12-12 01:08:44 script.run_env()
  2013-12-12 01:08:44   File 
/usr/local/lib/python2.7/dist-packages/alembic/script.py, line 193, in run_env
  2013-12-12 01:08:44 util.load_python_file(self.dir, 'env.py')
  2013-12-12 01:08:44   File 
/usr/local/lib/python2.7/dist-packages/alembic/util.py, line 177, in 
load_python_file
  2013-12-12 01:08:44 module = load_module(module_id, path)
  2013-12-12 01:08:44   File 
/usr/local/lib/python2.7/dist-packages/alembic/compat.py, line 39, in 
load_module
  2013-12-12 01:08:44 return imp.load_source(module_id, path, fp)
  2013-12-12 01:08:44   File 
/opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py, line 105, 
in <module>
  2013-12-12 01:08:44 run_migrations_online()
  2013-12-12 01:08:44   File 
/opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py, line 89, 
in run_migrations_online
  2013-12-12 01:08:44 options=build_options())
  2013-12-12 01:08:44   File "<string>", line 7, in run_migrations
  2013-12-12 01:08:44   File 
/usr/local/lib/python2.7/dist-packages/alembic/environment.py, line 652, in 
run_migrations
  2013-12-12 01:08:45 self.get_context().run_migrations(**kw)
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/alembic/migration.py, line 224, in 
run_migrations
  2013-12-12 01:08:45 change(**kw)
  2013-12-12 01:08:45   File 
/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/1fcfc149aca4_agents_unique_by_type_and_host.py,
 line 50, in upgrade
  2013-12-12 01:08:45 local_cols=['agent_type', 'host']
  2013-12-12 01:08:45   File "<string>", line 7, in create_unique_constraint
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/alembic/operations.py, line 539, in 
create_unique_constraint
  2013-12-12 01:08:45 schema=schema, **kw)
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py, line 135, in 
add_constraint
  2013-12-12 01:08:45 self._exec(schema.AddConstraint(const))
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py, line 76, in _exec
  2013-12-12 01:08:45 conn.execute(construct, *multiparams, **params)
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1449, 
in execute
  2013-12-12 01:08:45 params)
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1542, 
in _execute_ddl
  2013-12-12 01:08:45 compiled
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1698, 
in _execute_context
  2013-12-12 01:08:45 context)
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1691, 
in _execute_context
  2013-12-12 01:08:45 context)
  2013-12-12 01:08:45   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
331, in do_execute
  2013-12-12 01:08:45 

[Yahoo-eng-team] [Bug 1255532] Re: Import of nonexistent openstack.common.test.py file

2014-02-13 Thread Ilya Pekelny
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255532

Title:
  Import of nonexistent openstack.common.test.py file

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Compute (Nova):
  New

Bug description:
  In keystone/openstack/common/db/sqlalchemy/test_migrations.py, line 30 
imports `test`, which does not exist. The Oslo project contains the appropriate file. 
Solution: synchronize keystone.openstack.common with the corresponding file in 
Oslo.
  The bug reproduces under nosetests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1255532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279813] [NEW] excutils.save_and_reraise_exception should be used when reraising an exception

2014-02-13 Thread Akihiro Motoki
Public bug reported:

excutils.save_and_reraise_exception should be used when reraising an
exception, as described in openstack.common.excutils.

I don't see any issues caused by this, but it would be better to fix it.
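For context, the requested pattern looks roughly like this (a sketch using the
oslo-incubator copy in neutron.openstack.common; provision/rollback are made-up
placeholders):

    # Keep the original exception and traceback intact while still doing
    # cleanup/logging before re-raising.
    from neutron.openstack.common import excutils
    from neutron.openstack.common import log as logging

    LOG = logging.getLogger(__name__)

    def create_something(resource):
        try:
            provision(resource)                  # placeholder for the real work
        except Exception:
            with excutils.save_and_reraise_exception():
                LOG.error("provisioning %s failed, rolling back", resource)
                rollback(resource)               # placeholder cleanup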

** Affects: neutron
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279813

Title:
  excutils.save_and_reraise_exception should be used when reraising an
  exception

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  excutils.save_and_reraise_exception should be used when reraising an
  exception, as described in openstack.common.excutils.

  I don't see any issues caused by this, but it would be better to fix it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279820] [NEW] libvirt: some tests taking longer than 2 seconds

2014-02-13 Thread Gary Kotton
Public bug reported:

py27 develop-inst-nodeps: /home/gkotton/nova
py27 runtests: commands[0] | python -m nova.openstack.common.lockutils python 
setup.py test --slowest --testr-args=nova.tests.virt.libvirt
[pbr] Excluding argparse: Python 2.6 only dependency
running test
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests --list 
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmp1dQ_3A
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmpSmMObk
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmpqnRKjm
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmpMKqiiM
Ran 572 tests in 13.283s (-0.814s)
PASSED (id=358)
Slowest Tests
Test id 
Runtime (s)
--
  ---
nova.tests.virt.libvirt.test_libvirt.LibvirtConnTestCase.test_pre_live_migration_plug_vifs_retry_works
  2.080
nova.tests.virt.libvirt.test_libvirt.LibvirtConnTestCase.test_pre_live_migration_plug_vifs_retry_fails
  2.079
nova.tests.virt.libvirt.test_libvirt_volume.LibvirtVolumeTestCase.test_libvirt_kvm_iser_volume_with_multipath
   2.011
nova.tests.virt.libvirt.test_libvirt_volume.LibvirtVolumeTestCase.test_libvirt_kvm_iser_volume_with_multipath_getmpdev
  2.010
nova.tests.virt.libvirt.test_libvirt.HostStateTestCase.test_update_status   
1.024
nova.tests.virt.libvirt.test_dmcrypt.LibvirtDmcryptTestCase.test_create_volume  
1.019
nova.tests.virt.libvirt.test_dmcrypt.LibvirtDmcryptTestCase.test_delete_volume  
1.016
nova.tests.virt.libvirt.test_libvirt.CacheConcurrencyTestCase.test_same_fname_concurrency
   0.886
nova.tests.virt.libvirt.test_libvirt.LibvirtConnTestCase.test_power_on  
0.795
nova.tests.virt.libvirt.test_libvirt.LibvirtConnTestCase.test_hard_reboot   
0.643
py33 create: /home/gkotton/nova/.tox/py33
ERROR: InterpreterNotFound: python3.3

** Affects: nova
 Importance: Medium
 Assignee: Gary Kotton (garyk)
 Status: New


** Tags: libvirt

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279820

Title:
  libvirt: some tests taking longer than 2 seconds

Status in OpenStack Compute (Nova):
  New

Bug description:
  py27 develop-inst-nodeps: /home/gkotton/nova
  py27 runtests: commands[0] | python -m nova.openstack.common.lockutils python 
setup.py test --slowest --testr-args=nova.tests.virt.libvirt
  [pbr] Excluding argparse: Python 2.6 only dependency
  running test
  running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests --list 
  running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmp1dQ_3A
  running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmpSmMObk
  running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmpqnRKjm
  running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmpMKqiiM
  Ran 572 tests in 13.283s 

[Yahoo-eng-team] [Bug 1279823] [NEW] Deleting enabled domain results in confusing error

2014-02-13 Thread Steven Hardy
Public bug reported:

# openstack --os-token foobar --os-url=http://127.0.0.1:5000/v3 
--os-identity-api-version=3 domain create heat
+---------+----------------------------------------------------------------------------------------+
| Field   | Value                                                                                  |
+---------+----------------------------------------------------------------------------------------+
| enabled | True                                                                                   |
| id      | b1816241c3bd4a67b4059dcf62526e31                                                       |
| links   | {u'self': u'http://192.168.122.214:5000/v3/domains/b1816241c3bd4a67b4059dcf62526e31'} |
| name    | heat                                                                                   |
+---------+----------------------------------------------------------------------------------------+

# openstack --os-token foobar --os-url=http://127.0.0.1:5000/v3 
--os-identity-api-version=3 domain delete heat
ERROR: cliff.app You are not authorized to perform the requested action, delete 
a domain that is not disabled. (HTTP 403)

This, to me at least, is confusing - from a user perspective, it sounds
like an instruction to delete a domain that is not disabled (i.e. one
which is enabled, which it is!), rather than information that you can
only delete a domain which is not *enabled*.

Rewording this slightly would make the user-visible error clearer IMO:

# openstack --os-token foobar --os-url=http://127.0.0.1:5000/v3 
--os-identity-api-version=3 domain delete heat
ERROR: cliff.app You are not authorized to perform the requested action, can't 
delete a domain that is enabled. (HTTP 403)
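A hedged illustration of what the 403 actually means in practice: the domain
has to be disabled before it can be deleted (python-keystoneclient v3; token
and endpoint values copied from the report above):

    from keystoneclient.v3 import client

    ks = client.Client(token='foobar', endpoint='http://127.0.0.1:5000/v3')
    domain = ks.domains.find(name='heat')

    ks.domains.update(domain, enabled=False)   # disable first...
    ks.domains.delete(domain)                  # ...then the delete is allowed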

** Affects: keystone
 Importance: Undecided
 Assignee: Steven Hardy (shardy)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Steven Hardy (shardy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1279823

Title:
  Deleting enabled domain results in confusing error

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  # openstack --os-token foobar --os-url=http://127.0.0.1:5000/v3 
--os-identity-api-version=3 domain create heat
  
  +---------+----------------------------------------------------------------------------------------+
  | Field   | Value                                                                                  |
  +---------+----------------------------------------------------------------------------------------+
  | enabled | True                                                                                   |
  | id      | b1816241c3bd4a67b4059dcf62526e31                                                       |
  | links   | {u'self': u'http://192.168.122.214:5000/v3/domains/b1816241c3bd4a67b4059dcf62526e31'} |
  | name    | heat                                                                                   |
  +---------+----------------------------------------------------------------------------------------+

  # openstack --os-token foobar --os-url=http://127.0.0.1:5000/v3 
--os-identity-api-version=3 domain delete heat
  ERROR: cliff.app You are not authorized to perform the requested action, 
delete a domain that is not disabled. (HTTP 403)

  This, to me at least, is confusing - from a user perspective, it
  sounds like an instruction to delete a domain that is not disabled
  (i.e. one which is enabled, which it is!), rather than information that
  you can only delete a domain which is not *enabled*.

  Rewording this slightly would make the user-visible error clearer IMO:

  # openstack --os-token foobar --os-url=http://127.0.0.1:5000/v3 
--os-identity-api-version=3 domain delete heat
  ERROR: cliff.app You are not authorized to perform the requested action, 
can't delete a domain that is enabled. (HTTP 403)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1279823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240753] Re: don't use paste to configure authtoken

2014-02-13 Thread Tomas Sedovic
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240753

Title:
  don't use paste to configure authtoken

Status in Cinder:
  Fix Released
Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Several services (Nova/Cinder) still default to using api-paste.ini
  for keystoneclient's authtoken configuration. We should move towards
  using a more editable config files (nova.conf/cinder.conf) for this...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1240753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229324] Re: extraneous vim editor configuration comments

2014-02-13 Thread Dolph Mathews
Added hacking, as I'm already seeing regressions for this that should be
automatically gated.

** Also affects: hacking
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1229324

Title:
  extraneous vim editor configuration comments

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Hacking Guidelines:
  New
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  Fix Committed
Status in Python client library for heat:
  Fix Committed
Status in Python client library for Ironic:
  In Progress
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Neutron:
  In Progress
Status in Python client library for Swift:
  In Progress
Status in Trove client binding:
  In Progress
Status in OpenStack Data Processing (Savanna):
  New
Status in Storyboard database creator:
  New
Status in OpenStack Object Storage (Swift):
  In Progress
Status in Taskflow for task-oriented systems.:
  In Progress
Status in Tempest:
  Fix Released
Status in Trove - Database as a Service:
  New
Status in Tuskar:
  In Progress

Bug description:
  Many of the source code files have a beginning line

  # vim: tabstop=4 shiftwidth=4 softtabstop=4

  This should be deleted.

  Many of these lines are in the ceilometer/openstack/common directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1229324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279857] [NEW] RFE: libguestfs logging should be connected up to openstack logging

2014-02-13 Thread Richard Jones
Public bug reported:

https://bugzilla.redhat.com/show_bug.cgi?id=1064948

We were trying to chase up a bug in libguestfs integration with
OpenStack.  It was made much harder because the only way to diagnose
the bug was to manually run the nova service after manually setting
environment variables:
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs

It would be much nicer if:

(1) There was a Nova setting to enable debugging, like:
  libguestfs_debug = 1
or something along those lines.

(2) Nova used the events API to collect libguestfs debug messages
and push them into Openstack's own logging system.  See code
example below.

-

Here is how you enable logging programmatically and capture
the log messages.

(a) As soon as possible after creating the guestfs handle, call
either (or better, both) of these functions:

g.set_trace (1) # just traces libguestfs API calls
g.set_verbose (1)   # verbose debugging

(b) Register an event handler like this:

events = guestfs.EVENT_APPLIANCE | guestfs.EVENT_LIBRARY \
 | guestfs.EVENT_WARNING | guestfs.EVENT_TRACE
g.set_event_callback (log_callback, events)

(c) The log_callback function should look something like this:

def log_callback (ev, eh, buf, array):
    if ev == guestfs.EVENT_APPLIANCE:
        buf = buf.rstrip()
    # What just happened?
    LOG.debug ("event=%s eh=%d buf='%s' array=%s" %
               (guestfs.event_to_string (ev), eh, buf, array))

There is a fully working example here:

https://github.com/libguestfs/libguestfs/blob/master/python/t/420-log-
messages.py

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279857

Title:
  RFE: libguestfs logging should be connected up to openstack logging

Status in OpenStack Compute (Nova):
  New

Bug description:
  https://bugzilla.redhat.com/show_bug.cgi?id=1064948

  We were trying to chase up a bug in libguestfs integration with
  OpenStack.  It was made much harder because the only way to diagnose
  the bug was to manually run the nova service after manually setting
  environment variables:
  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs

  It would be much nicer if:

  (1) There was a Nova setting to enable debugging, like:
libguestfs_debug = 1
  or something along those lines.

  (2) Nova used the events API to collect libguestfs debug messages
  and push them into Openstack's own logging system.  See code
  example below.

  -

  Here is how you enable logging programmatically and capture
  the log messages.

  (a) As soon as possible after creating the guestfs handle, call
  either (or better, both) of these functions:

  g.set_trace (1) # just traces libguestfs API calls
  g.set_verbose (1)   # verbose debugging

  (b) Register an event handler like this:

  events = guestfs.EVENT_APPLIANCE | guestfs.EVENT_LIBRARY \
   | guestfs.EVENT_WARNING | guestfs.EVENT_TRACE
  g.set_event_callback (log_callback, events)

  (c) The log_callback function should look something like this:

  def log_callback (ev, eh, buf, array):
      if ev == guestfs.EVENT_APPLIANCE:
          buf = buf.rstrip()
      # What just happened?
      LOG.debug ("event=%s eh=%d buf='%s' array=%s" %
                 (guestfs.event_to_string (ev), eh, buf, array))

  There is a fully working example here:

  https://github.com/libguestfs/libguestfs/blob/master/python/t/420-log-messages.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279858] [NEW] nova-compute shouldn't spawn two libguestfs appliances every time an instance is launched

2014-02-13 Thread Richard Jones
Public bug reported:

https://bugzilla.redhat.com/show_bug.cgi?id=1064947

Using RHELOSP 4.0 GA bits, I'm finding that when I launch the Cirros
0.3.1 image, separate calls to libguestfs within the nova codebase cause
qemu-kvm to be run twice *before* the instance is launched.  This is
suboptimal.

One libguestfs call (file injection) can be disabled by setting
libvirt_inject_partition=-2, but this does not work for the second one
(checking to see if the volume partition/filesystem can be extended).
The codepath for the second call is approximately:

/nova/virt/disk/api.py extend()
/nova/virt/disk/api.py is_image_partitionless()
/nova/virt/disk/vfs/guestfs.py VFSGuestFS.setup()

It would be good if all of this could be done with one libguestfs
instance which could also be disabled in the global nova config.
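
As a rough illustration of the single-appliance idea (made-up helper name,
not nova's actual code), both checks can share one launched handle:

import guestfs

def single_appliance_checks(image_path):
    g = guestfs.GuestFS()
    g.add_drive_opts(image_path, readonly=1)
    g.launch()              # the expensive qemu-kvm appliance starts once
    try:
        partitions = g.list_partitions()
        partitionless = len(partitions) == 0
        filesystems = g.list_filesystems()
        return partitionless, filesystems
    finally:
        g.shutdown()
        g.close()

Gating such a helper behind a single flag in the global nova config would
also give the requested way to disable it entirely.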

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279858

Title:
  nova-compute shouldn't spawn two libguestfs appliances every time an
  instance is launched

Status in OpenStack Compute (Nova):
  New

Bug description:
  https://bugzilla.redhat.com/show_bug.cgi?id=1064947

  Using RHELOSP 4.0 GA bits, I'm finding that when I launch the Cirros
  0.3.1 image, separate calls to libguestfs within the nova codebase
  cause qemu-kvm to be run twice *before* the instance is launched.
  This is suboptimal.

  One libguestfs call (file injection) can be disabled by setting
  libvirt_inject_partition=-2, but this does not work for the second one
  (checking to see if the volume partition/filesystem can be extended).
  The codepath for the second call is approximately:

  /nova/virt/disk/api.py extend()
  /nova/virt/disk/api.py is_image_partitionless()
  /nova/virt/disk/vfs/guestfs.py VFSGuestFS.setup()

  It would be good if all of this could be done with one libguestfs
  instance which could also be disabled in the global nova config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258256] Re: Live upgrade from Havana broken by commit 62e9829

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258256

Title:
  Live upgrade from Havana broken by commit 62e9829

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  Commit 62e9829 inadvertently broke live upgrades from Havana to
  master. This was not really related to the patch itself, other than
  that it bumped the Instance version which uncovered a bunch of issues
  in the object infrastructure that weren't yet ready to handle this
  properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245502] Re: Grizzly -> Havana nova upgrade failure: Cannot drop index 'instance_uuid'

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245502

Title:
  Grizzly -> Havana nova upgrade failure: Cannot drop index
  'instance_uuid'

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  I was running Ubuntu 13.04 and upgraded to 13.10.  I was running the
  Ubuntu Precise 1:2013.1.3-0ubuntu1.1 on 13.04 and am now on 13.10's
  provided 1:2013.2~rc2-0ubuntu1.

  After getting the box up, migrating the nova database failed with the
  below error.  I am using MySQL.

  
  # nova-manage -v db sync
  2013-10-27 01:47:03.615 24457 INFO migrate.versioning.api [-] 161 -> 162...
  2013-10-27 01:47:03.673 24457 INFO migrate.versioning.api [-] done
  ...
  ...
  2013-10-27 01:47:16.373 24457 INFO migrate.versioning.api [-] 184 -> 185...
  Command failed, please check log for more info
  2013-10-27 01:47:17.835 24457 CRITICAL nova [-] (OperationalError) (1553, 
Cannot drop index 'instance_uuid': needed in a foreign key constraint) 'ALTER 
TABLE instance_info_caches DROP INDEX instance_uuid' ()
  2013-10-27 01:47:17.835 24457 TRACE nova Traceback (most recent call last):
  2013-10-27 01:47:17.835 24457 TRACE nova   File /usr/bin/nova-manage, line 
10, in module
  2013-10-27 01:47:17.835 24457 TRACE nova sys.exit(main())
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/cmd/manage.py, line 1377, in main
  2013-10-27 01:47:17.835 24457 TRACE nova ret = fn(*fn_args, **fn_kwargs)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/cmd/manage.py, line 885, in sync
  2013-10-27 01:47:17.835 24457 TRACE nova return migration.db_sync(version)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/migration.py, line 33, in db_sync
  2013-10-27 01:47:17.835 24457 TRACE nova return 
IMPL.db_sync(version=version)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py, line 75, in 
db_sync
  2013-10-27 01:47:17.835 24457 TRACE nova return 
versioning_api.upgrade(get_engine(), repository, version)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/versioning/api.py, line 186, in 
upgrade
  2013-10-27 01:47:17.835 24457 TRACE nova return _migrate(url, repository, 
version, upgrade=True, err=err, **opts)
  2013-10-27 01:47:17.835 24457 TRACE nova   File string, line 2, in 
_migrate
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py, line 40, in 
patched_with_engine
  2013-10-27 01:47:17.835 24457 TRACE nova return f(*a, **kw)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/versioning/api.py, line 366, in 
_migrate
  2013-10-27 01:47:17.835 24457 TRACE nova schema.runchange(ver, change, 
changeset.step)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py, line 91, in 
runchange
  2013-10-27 01:47:17.835 24457 TRACE nova change.run(self.engine, step)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/versioning/script/py.py, line 145, 
in run
  2013-10-27 01:47:17.835 24457 TRACE nova script_func(engine)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migrate_repo/versions/185_rename_unique_constraints.py,
 line 129, in upgrade
  2013-10-27 01:47:17.835 24457 TRACE nova return 
_uc_rename(migrate_engine, upgrade=True)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migrate_repo/versions/185_rename_unique_constraints.py,
 line 112, in _uc_rename
  2013-10-27 01:47:17.835 24457 TRACE nova old_name, *(columns))
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/utils.py, line 197, in 
drop_unique_constraint
  2013-10-27 01:47:17.835 24457 TRACE nova uc.drop()
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/changeset/constraint.py, line 59, in 
drop
  2013-10-27 01:47:17.835 24457 TRACE nova 
self.__do_imports('constraintdropper', *a, **kw)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/changeset/constraint.py, line 32, in 
__do_imports
  2013-10-27 01:47:17.835 24457 TRACE nova run_single_visitor(engine, 
visitorcallable, self, *a, **kw)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/changeset/databases/visitor.py, line 
75, in run_single_visitor
  2013-10-27 01:47:17.835 24457 TRACE nova 

[Yahoo-eng-team] [Bug 1245719] Re: RBD backed instance can't shutdown and restart

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245719

Title:
  RBD backed instance can't shutdown and restart

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Ubuntu:
  Confirmed

Bug description:
  Version: Havana w/ Ubuntu Repos. with Ceph for RBD.

  
  When launching an instance with Boot from image (Creates a new volume), the
instance is created fine and all is well; however, if you shut down the
instance I can't turn it back on again.

  
  I get the following error in the nova-compute.log when trying to power on a
shut-down instance.

  
###
  2013-10-29 00:48:33.859 2746 WARNING nova.compute.utils 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] [instance: 
cc370f6d-4be0-4cd3-9f20-bf86f5ad7c09] Can't access image $
  2013-10-29 00:48:34.040 2746 WARNING nova.virt.libvirt.vif 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] Deprecated: The LibvirtHybridOVSBridgeDriver 
VIF driver is now de$
  2013-10-29 00:48:34.578 2746 ERROR nova.openstack.common.rpc.amqp 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] Exception during message handling
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 353, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 90, in wrapped
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 73, in wrapped
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 243, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 229, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 294, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 271, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 258, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1832, in 
start_instance
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp 
self._power_on(context, instance)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1819, in 
_power_on
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp 
block_device_info)
  2013-10-29 

[Yahoo-eng-team] [Bug 1240349] Re: publish_errors cfg option is broken

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240349

Title:
  publish_errors cfg option is broken

Status in Cinder:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Trove - Database as a Service:
  In Progress

Bug description:
  Our nova.conf contains a publish_errors option, which doesn't work
  because we don't have the necessary oslo modules:

  # publish error events (boolean value)
  publish_errors=true

  [root@ip9-12-17-141 ˜]# Traceback (most recent call last):
File /usr/bin/nova-api, line 10, in module
  sys.exit(main())
File /usr/lib/python2.6/site-packages/nova/cmd/api.py, line 41, in main
  logging.setup(nova)
File /usr/lib/python2.6/site-packages/nova/openstack/common/log.py, line 
372, in setup
  _setup_logging_from_conf()
File /usr/lib/python2.6/site-packages/nova/openstack/common/log.py, line 
472, in _setup_logging_from_conf
  logging.ERROR)
File 
/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py, line 
40, in import_object
  return import_class(import_str)(*args, **kwargs)
File 
/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py, line 
30, in import_class
  __import__(mod_str)
  ImportError: No module named log_handler
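
  For reference, a rough sketch of what the missing module provides, based on
  the oslo-incubator log_handler of that era (the notifier import path is an
  assumption for this tree): a logging handler that republishes ERROR records
  as notifications, which is what publish_errors=true asks log.py to import.

  import logging

  from nova.openstack.common.notifier import api as notifier_api

  class PublishErrorsHandler(logging.Handler):
      def emit(self, record):
          # Re-emit the error record as an 'error_notification' event.
          notifier_api.notify(None, 'error.publisher', 'error_notification',
                              notifier_api.ERROR,
                              dict(error=record.getMessage()))

  Syncing the real log_handler module from oslo-incubator, or dropping
  publish_errors from the config, should avoid the ImportError.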

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1240349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1222979] Re: Errored instance can't be deleted if volume deleted first

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1222979

Title:
  Errored instance can't be deleted if volume deleted first

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  1. Create a bootable volume   nova volume-create --image-id image_id 10
  2. Boot a vm using the volume created in step 1  nova boot --flavor 1 
--image image_id --block-device-mapping vda=vol_id:::0 instance1

  If the instance fails to spawn in step 2, the instance ends up in an
  ERROR state. The volume goes back to available.  The hard part is
  creating a situation in which step 2 fails.  One way is to create
  enough quantum ports to exceed your port quota prior to attempting to
  spawn the instance.

  3. Delete the volume.
  4. Attempt to delete the instance.  An exception gets thrown by
driver.destroy because the volume is not found, but the exception is not
ignored and the instance can never be deleted.  Exceptions from
_cleanup_volumes are already ignored for this same reason.  I think another
exception handler needs to be added to also ignore VolumeNotFound from
driver.destroy (a minimal sketch follows below).

  I've reproduced this with current code from trunk.
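
  A minimal sketch of the extra handler suggested in step 4 (argument
  handling simplified; this is not the actual nova change):

  from nova import exception

  def destroy_ignoring_missing_volume(driver, *args, **kwargs):
      try:
          driver.destroy(*args, **kwargs)
      except exception.VolumeNotFound:
          # The backing volume was already deleted in cinder; there is
          # nothing left to clean up, so let the instance delete proceed.
          pass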

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1222979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270693] Re: _last_vol_usage_poll was not properly updated

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270693

Title:
  _last_vol_usage_poll was not properly updated

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  I found this error in Jenkins's log:
  http://logs.openstack.org/02/67402/4/check/gate-nova-python27/b132ac8/console.html

  2014-01-20 02:36:59.295 | 
==
  2014-01-20 02:36:59.295 | FAIL: 
nova.tests.compute.test_compute.ComputeVolumeTestCase.test_poll_volume_usage_with_data
  2014-01-20 02:36:59.295 | tags: worker-0
  2014-01-20 02:36:59.296 | 
--
  2014-01-20 02:36:59.296 | Empty attachments:
  2014-01-20 02:36:59.296 |   stderr
  2014-01-20 02:36:59.296 |   stdout
  2014-01-20 02:36:59.297 | 
  2014-01-20 02:36:59.297 | pythonlogging:'': {{{
  2014-01-20 02:36:59.297 | INFO [nova.virt.driver] Loading compute driver 
'nova.virt.fake.FakeDriver'
  2014-01-20 02:36:59.298 | AUDIT [nova.compute.resource_tracker] Auditing 
locally available compute resources
  2014-01-20 02:36:59.298 | AUDIT [nova.compute.resource_tracker] Free ram 
(MB): 7680
  2014-01-20 02:36:59.298 | AUDIT [nova.compute.resource_tracker] Free disk 
(GB): 1028
  2014-01-20 02:36:59.299 | AUDIT [nova.compute.resource_tracker] Free VCPUS: 1
  2014-01-20 02:36:59.299 | INFO [nova.compute.resource_tracker] 
Compute_service record created for fake-mini:fakenode1
  2014-01-20 02:36:59.299 | AUDIT [nova.compute.manager] Deleting orphan 
compute node 2
  2014-01-20 02:36:59.300 | }}}
  2014-01-20 02:36:59.300 | 
  2014-01-20 02:36:59.300 | Traceback (most recent call last):
  2014-01-20 02:36:59.300 |   File nova/tests/compute/test_compute.py, line 
577, in test_poll_volume_usage_with_data
  2014-01-20 02:36:59.301 | self.compute._last_vol_usage_poll)
  2014-01-20 02:36:59.301 |   File /usr/lib/python2.7/unittest/case.py, line 
420, in assertTrue
  2014-01-20 02:36:59.301 | raise self.failureException(msg)
  2014-01-20 02:36:59.302 | AssertionError: _last_vol_usage_poll was not 
properly updated 1390185067.18

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276268] Re: nova compute hang with file injection off, config drive off, neutron networking

2014-02-13 Thread Alan Pevec
*** This bug is a duplicate of bug 1273478 ***
https://bugs.launchpad.net/bugs/1273478

** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276268

Title:
  nova compute hang with file injection off, config drive off, neutron
  networking

Status in OpenStack Compute (Nova):
  Triaged
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  While trying to change file injection to default off
  (https://review.openstack.org/#/c/70239/) we observed nova-compute
  hang (the log stops hard about 10 minutes before the test run
  finishes). Initially thought to be a bug in the patch, we then
  reproduced this with the config setting done purely in devstack, while
  trying to avoid the kernel-hang with neutron isolated networks + file
  injection (not sure of the bug number).

  a trace of the hung process threads:
  http://paste.openstack.org/show/62463/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265512] Re: VMware: unnecesary session termination

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265512

Title:
  VMware: unnecesary session termination

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  In some cases, the session with the VC is terminated and then restarted.
This can happen, for example, when the user does:
  nova list (and there are no running VMs)
  In addition to restarting the session, the operation also waits 2 seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246258] Re: UnboundLocalError: local variable 'network_name' referenced before assignment

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246258

Title:
  UnboundLocalError: local variable 'network_name' referenced before
  assignment

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  The exception occurs when trying to create/delete an instance that is
  using a network that is not owned by the admin tenant. This prevents
  the deletion of the instance.

  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 90, in wrapped
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
payload)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 73, in wrapped
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 243, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 229, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 294, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 271, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 258, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1616, in 
run_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
do_run_instance()
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, line 
246, in inner
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1615, in 
do_run_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
legacy_bdm_in_spec)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 965, in 
_run_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
notify(error, msg=unicode(e))  # notify that build failed
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 949, in 
_run_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
instance, image_meta, legacy_bdm_in_spec)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1078, in 
_build_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
filter_properties, bdms, legacy_bdm_in_spec)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1122, in 
_reschedule_or_error
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
self._log_original_error(exc_info, instance_uuid)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1117, in 
_reschedule_or_error
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 

[Yahoo-eng-team] [Bug 1241337] Re: VM do not resume if attach an volume when suspended

2014-02-13 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241337

Title:
  VM do not resume if attach an volume when suspended

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  
  1) nova suspend vm2
  2) nova attach vm2 6ac2e985-9586-438f-a027-bc9591fa5b43 /dev/sdb
  3) nova volume-attach vm2 6ac2e985-9586-438f-a027-bc9591fa5b43 /dev/sdb
  4) nova resume vm2

  VM failed to resume and nova-compute reported the following errors.

  2013-10-18 12:16:33.175 ERROR nova.openstack.common.rpc.amqp 
[req-a8b196e3-dbd5-45f4-814e-56a715b07fdf admin admin] Exception during message 
handling
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in _process_data
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, in dispatch
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 354, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 90, in wrapped
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 73, in wrapped
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 244, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 230, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 295, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 272, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 259, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 3314, in resume_instance
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
block_device_info)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 1969, in resume
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
block_device_info)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 3206, in 
_create_domain_and_network
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
{'connection_info': jsonutils.dumps(connection_info)})
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 420, in 
block_device_mapping_update
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp context, 
bdm_id, values)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/conductor/api.py, line 170, in 
block_device_mapping_update
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp context, 
values, create=False)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/conductor/rpcapi.py, line 244, in 
block_device_mapping_update_or_create
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
values=values, create=create)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/rpcclient.py, line 85, in call
  

[Yahoo-eng-team] [Bug 1253755] Re: keystone.token.backends.sql list_revoked_tokens performs very poorly

2014-02-13 Thread Alan Pevec
** Also affects: keystone/havana
   Importance: Undecided
   Status: New

** Changed in: keystone/havana
   Status: New => Fix Committed

** Changed in: keystone/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1253755

Title:
  keystone.token.backends.sql list_revoked_tokens performs very poorly

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Committed

Bug description:
  The query that it makes use of is extremely inefficient, as it must
  retrieve the massive 'extra' field when it does not need it. Also
  there is no index that covers both expires and valid, so we can only
  do a range query on expires and then filter for valid.

  Test situation is a poorly tuned mysql that has a token table with
  865000 rows, 35000 of which are revoked (2000 of which are unexpired).

  Adding an index on token+valid did speed the query up some, but it
  still took on average 2 seconds to return all ~2000 revoked token
  rows. Also changing the query to only query the id and expires columns
  resulted in the query taking 0.02 seconds to run, leading to a much
  more responsive experience throughout the cloud.
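
  As an illustration of the narrow-column approach (model and session names
  are assumed; this is not the keystone patch itself):

  import datetime

  def list_revoked_token_ids(session, TokenModel):
      now = datetime.datetime.utcnow()
      query = session.query(TokenModel.id, TokenModel.expires)
      query = query.filter(TokenModel.expires > now)
      query = query.filter(TokenModel.valid == False)  # noqa: SQL comparison
      return [{'id': token_id, 'expires': expires}
              for token_id, expires in query.all()]

  Together with an index that covers both expires and valid, this keeps the
  revocation-list query cheap even with a very large token table.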

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1253755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245590] Re: List Trusts generates HTTP Error 500

2014-02-13 Thread Alan Pevec
** Also affects: keystone/havana
   Importance: Undecided
   Status: New

** Changed in: keystone/havana
   Status: New => Fix Committed

** Changed in: keystone/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1245590

Title:
  List Trusts generates HTTP Error 500

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Committed

Bug description:
  We are getting an HTTP 500 error when we try to list all trusts. We can list 
individual trusts, but not the generic list.
   

  GET REST Request:
   
  curl -v -X GET http://10.1.8.20:35357/v3/OS-TRUST/trusts -H "X-Auth-Token: ed241ae1e986319086f3"
   
    

  REST Response:
   
  {
      "error": {
          "message": "An unexpected error prevented the server from fulfilling your request. 'id'",
          "code": 500,
          "title": "Internal Server Error"
      }
  }
   
  - 
   
  /var/log/keystone/keystone.log file entry:

  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 
238, in __call__
  result = method(context, **params)
File 
/usr/local/lib/python2.7/dist-packages/keystone/common/controller.py, line 
158, in inner
  return f(self, context, *args, **kwargs)
File 
/usr/local/lib/python2.7/dist-packages/keystone/trust/controllers.py, line 
213, in list_trusts
  self._fill_in_roles(context, trust, global_roles)
File 
/usr/local/lib/python2.7/dist-packages/keystone/trust/controllers.py, line 
109, in _fill_in_roles
  if x['id'] == trust_role['id']]
  KeyError: 'id'
  2013-10-28 09:49:04 INFO [access] 15.253.57.88 - - [28/Oct/2013:16:49:04 
+] GET http://havanatest:35357/v3/OS-TRUST/trusts HTTP/1.0 500 148

  --

  /var/log/keystone/keystone.log file entry with ERROR debug
  statements added:

  2013-10-28 09:49:04ERROR [keystone.trust.controllers] QQ 
trust_role = {u'name': u'disney_user'}
  2013-10-28 09:49:04ERROR [keystone.trust.controllers] QQ 
trust_role = {u'name': u'disney_user'}
  2013-10-28 09:49:04ERROR [keystone.common.wsgi] 'id'
  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 
238, in __call__
  result = method(context, **params)
File 
/usr/local/lib/python2.7/dist-packages/keystone/common/controller.py, line 
158, in inner
  return f(self, context, *args, **kwargs)
File 
/usr/local/lib/python2.7/dist-packages/keystone/trust/controllers.py, line 
213, in list_trusts
  self._fill_in_roles(context, trust, global_roles)
File 
/usr/local/lib/python2.7/dist-packages/keystone/trust/controllers.py, line 
110, in _fill_in_roles
  if x['id'] == trust_role['id']]
  KeyError: 'id'
  2013-10-28 09:49:04 INFO [access] 15.253.57.88 - - [28/Oct/2013:16:49:04 
+] GET http://havanatest:35357/v3/OS-TRUST/trusts HTTP/1.0 500 148

  --

  Method causing the error with inserted LOG statements:

  def _fill_in_roles(self, context, trust, global_roles):
      if trust.get('expires_at') is not None:
          trust['expires_at'] = (timeutils.isotime
                                 (trust['expires_at'],
                                  subsecond=True))

      if 'roles' not in trust:
          trust['roles'] = []
      trust_full_roles = []
      for trust_role in trust['roles']:
          LOG.error(_("QQ trust_role = %s") % trust_role)
          if isinstance(trust_role, basestring):
              trust_role = {'id': trust_role}
          LOG.error(_("QQ trust_role = %s") % trust_role)

          matching_roles = [x for x in global_roles
                            if x['id'] == trust_role['id']]
          if matching_roles:
              full_role = identity.controllers.RoleV3.wrap_member(
                  context, matching_roles[0])['role']
              trust_full_roles.append(full_role)
      trust['roles'] = trust_full_roles
      trust['roles_links'] = {
          'self': (self.base_url() + "/%s/roles" % trust['id']),
          'next': None,
          'previous': None}

  -

  Quick change I made to get around the problem:

  def _fill_in_roles(self, context, trust, global_roles):
      if trust.get('expires_at') is not None:
          trust['expires_at'] = (timeutils.isotime
                                 (trust['expires_at'],
                                  subsecond=True))

      if 'roles' not in trust:
          trust['roles'] = []
      trust_full_roles = []
      for trust_role in trust['roles']:
  

[Yahoo-eng-team] [Bug 1279907] Re: Latest keystoneclient breaks tests

2014-02-13 Thread Heather Whisenhunt
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279907

Title:
  Latest keystoneclient breaks tests

Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The new release of keystoneclient (0.6.0) introduces some new
  metaclass magic which breaks our mocking in master :(

  We probably need to modify the test to use mock instead of mox, as the
  issue seems to be that mox misinterprets the class type due to the
  metaclass.

  The immediate workaround while we work out the solution is probably to
  temporarily cap keystoneclient to 0.5.1, which did not have this issue.

  Traceback (most recent call last):
File /home/shardy/git/heat/heat/tests/test_heatclient.py, line 449, in 
test_trust_init
  self._stubs_v3(method='trust')
File /home/shardy/git/heat/heat/tests/test_heatclient.py, line 83, in 
_stubs_v3
  self.m.StubOutClassWithMocks(kc_v3, Client)
File /usr/lib/python2.7/site-packages/mox.py, line 366, in 
StubOutClassWithMocks
  raise TypeError('Given attr is not a Class.  Use StubOutWithMock.')
  TypeError: Given attr is not a Class.  Use StubOutWithMock.
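
  A minimal sketch of the mock-based stubbing suggested above (module aliases
  assumed; not the actual heat change):

  import mock
  import testtools

  from keystoneclient.v3 import client as kc_v3

  class TrustClientTestBase(testtools.TestCase):
      def _stubs_v3(self):
          # mock.patch does not care about the client's metaclass, unlike
          # mox.StubOutClassWithMocks.
          patcher = mock.patch.object(kc_v3, 'Client', autospec=True)
          self.mock_ks_v3_client = patcher.start()
          self.addCleanup(patcher.stop)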

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1279907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262785] Re: devstack-exercises floating_ips broken

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New => Fix Committed

** Changed in: neutron/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262785

Title:
  devstack-exercises floating_ips broken

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed

Bug description:
  Enabling Q_USE_DEBUG_COMMAND=True in the devstack localrc makes the
exercises use setup_neutron_debug, and the namespace ping check fails. This
usually shows as:
  [Call Trace]
  /opt/stack/new/devstack/exercises/volumes.sh:147:ping_check
  /opt/stack/new/devstack/functions:1700:_ping_check_neutron
  /opt/stack/new/devstack/lib/neutron:886:die
  [ERROR] /opt/stack/new/devstack/lib/neutron:886 [Fail] Couldn't ping server
  =
  SKIP boot_from_volume
  SKIP client-env
  SKIP marconi
  SKIP savanna
  SKIP trove
  PASS aggregates
  PASS bundle
  PASS client-args
  PASS euca
  PASS horizon
  PASS sec_groups
  PASS swift
  FAILED floating_ips
  FAILED neutron-adv-test
  FAILED volumes
  =

  The env is running devstack in a local environment, with the following
localrc:
  ubuntu@gate-t1:~/reddwarf/gate-t$ cat /opt/stack/new/devstack/localrc 
  Q_USE_DEBUG_COMMAND=True
  NETWORK_GATEWAY=10.1.0.1
  Q_USE_DEBUG_COMMAND=True
  Q_PLUGIN=ml2
  Q_AGENT=openvswitch
  DEST=/opt/stack/new
  ACTIVE_TIMEOUT=90
  BOOT_TIMEOUT=90
  ASSOCIATE_TIMEOUT=60
  TERMINATE_TIMEOUT=60
  MYSQL_PASSWORD=secret
  RABBIT_PASSWORD=secret
  ADMIN_PASSWORD=secret
  SERVICE_PASSWORD=secret
  SERVICE_TOKEN=111222333444
  SWIFT_HASH=1234123412341234
  ROOTSLEEP=0
  ERROR_ON_CLONE=False
  
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,swift,cinder,c-api,c-vol,c-sch,n-cond,neutron,q-svc,q-agt,q-dhcp,q-l3,q-meta
  SKIP_EXERCISES=boot_from_volume,client-env
  SERVICE_HOST=127.0.0.1
  SYSLOG=True
  SCREEN_LOGDIR=/opt/stack/new/screen-logs
  LOGFILE=/opt/stack/new/devstacklog.txt
  VERBOSE=True
  FIXED_RANGE=10.1.0.0/24
  FIXED_NETWORK_SIZE=256
  NETWORK_GATEWAY=10.1.0.1
  VIRT_DRIVER=libvirt
  SWIFT_REPLICAS=1
  export OS_NO_CACHE=True
  CINDER_SECURE_DELETE=False
  API_RATE_LIMIT=False
  VOLUME_BACKING_FILE_SIZE=5G
  CINDER_SECURE_DELETE=False

  
  This situation also causes the neutron Grenade (check-grenade-dsvm-neutron)
  testing to fail; see
  http://logs.openstack.org/63/61663/2/check/check-grenade-dsvm-neutron/5f49167/.

  The failing command in floating_ips.sh:
   check_command='while ! sudo /usr/local/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qprobe-24ae41f0-4135-4c67-a16f-2eb5f4c313ec ping -w 1 -c 1 10.1.0.4; do sleep 
1; done'
  + timeout 90 sh -c 'while ! sudo /usr/local/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qprobe-24ae41f0-4135-4c67-a16f-2eb5f4c313ec ping -w 1 -c 1 10.1.0.4; do sleep 
1; done'
  PING 10.1.0.4 (10.1.0.4) 56(84) bytes of data.

  --- 10.1.0.4 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

  PING 10.1.0.4 (10.1.0.4) 56(84) bytes of data.
  From 10.1.0.2 icmp_seq=1 Destination Host Unreachable

  --- 10.1.0.4 ping statistics ---
  1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

  After setup_neutron_debug
  (https://github.com/openstack-dev/devstack/blob/master/stack.sh#L1110) is
  called (https://github.com/openstack-dev/devstack/blob/master/lib/neutron#L847),
  the ovs-vsctl show output looks like:
  ubuntu@gate-t1:~$ sudo ovs-vsctl show
  c6a9fce5-7834-47cd-b92a-8d5a22ba5c87
  Bridge br-ex
  Port qg-3ac19751-f0
  Interface qg-3ac19751-f0
  type: internal
  Port tap2d117674-ab
  Interface tap2d117674-ab
  type: internal
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-int
  Port br-int
  Interface br-int
  type: internal
  Port tap4fe7e74b-0a
  tag: 1
  Interface tap4fe7e74b-0a
  Port tap24ae41f0-41
  tag: 4095
  Interface tap24ae41f0-41
  type: internal
  Port qr-b835f1ef-38
  tag: 1
  Interface qr-b835f1ef-38
  type: internal
  Port tap23056976-35
  tag: 1
  Interface tap23056976-35
  type: internal
  ovs_version: 1.4.3

  ==> The Port tap24ae41f0-41 has a tag of 4095, which should be 1, and
  the 10.1.0.4 IP address, which should be private, is pingable from the
  host.

  
  ubuntu@gate-t1:~$ neutron  port-list
  

[Yahoo-eng-team] [Bug 1261334] Re: nvp: update network gateway name on backend as well

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New => Fix Committed

** Changed in: neutron/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261334

Title:
  nvp: update network gateway name on backend as well

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  When a network gateway name is updated, the plugin currently updates
  only the neutron database; it might be useful to propagate the update
  to the backend as well.

  This breaks a use case where network gateways created in neutron then
  need to be processed by other tools that find them in NVP by name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265353] Re: check_nvp_config.py erroneous config complaint

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New => Fix Committed

** Changed in: neutron/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265353

Title:
  check_nvp_config.py erroneous config complaint

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed

Bug description:
  The utility may return the following error:

  Gateway(L3GatewayServiceConfig) uuid: 40226ac1-86c6-471b-ac8e-3041d73f5c48
Error: specified default L3GatewayServiceConfig gateway 
(40226ac1-86c6-471b-ac8e-3041d73f5c48) is missing from NVP Gateway Services!

  This error can appear even though the L3 Gateway Service was set up
  correctly and LRs could be created without problems.

  This error currently affects only Havana, as a fix for Icehouse was
  committed in Change-Id: I3c5b5dcd316df3867f434bbc35483a2636b715d8

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268762] Re: Remove and recreate interface in ovs if already exists

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New => Fix Committed

** Changed in: neutron/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268762

Title:
  Remove and recreate interface in ovs if already exists

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed

Bug description:
  If the dhcp-agent machine restarts, openvswitch logs the following
  warning message for all tap interfaces that have not been recreated yet:

  bridge|WARN|could not open network device tap2cf7dbad-9d (No such
  device)

  Once the dhcp-agent starts, it recreates the interfaces and re-adds them
  to the ovs-bridge. Unfortunately, ovs does not reinitialize the interface,
  as it is already in ovsdb, and does not assign it an ofport number.

  In order to correct this we should first remove interfaces that already
  exist and then re-add them (a minimal sketch follows after the example
  below).

  
  root@arosen-desktop:~# ovs-vsctl  -- --may-exist add-port br-int fake1

  # ofport still -1
  root@arosen-desktop:~# ovs-vsctl  list inter | grep -A 2 fake1
  name: fake1
  ofport  : -1
  ofport_request  : []
  root@arosen-desktop:~# ip link add fake1 type veth peer name fake11
  root@arosen-desktop:~# ifconfig fake1
  fake1 Link encap:Ethernet  HWaddr 56:c3:a1:2b:1f:f4  
BROADCAST MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000 
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  root@arosen-desktop:~# ovs-vsctl  list inter | grep -A 2 fake1
  name: fake1
  ofport  : -1
  ofport_request  : []
  root@arosen-desktop:~# ovs-vsctl  -- --may-exist add-port br-int fake1
  root@arosen-desktop:~# ovs-vsctl  list inter | grep -A 2 fake1
  name: fake1
  ofport  : -1
  ofport_request  : []
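
  Given the stale record shown above, a minimal sketch of the
  remove-then-re-add approach (plain subprocess here rather than the agent's
  own wrappers):

  import subprocess

  def replace_port(bridge, port_name):
      # Deleting any stale row first makes openvswitch treat the add as a
      # brand new port and assign it a real ofport.
      subprocess.check_call(
          ['ovs-vsctl', '--', '--if-exists', 'del-port', bridge, port_name,
           '--', 'add-port', bridge, port_name])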

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1268762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252437] Re: uncaught portnotfound exception on get_dhcp_port

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New => Fix Committed

** Changed in: neutron/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252437

Title:
  uncaught portnotfound exception on get_dhcp_port

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  While working on a fix for bug #1251874 I noticed this stacktrace
  in the log:

  2013-11-18 17:20:46.237 1021 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 438, in 
_process_data
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 44, in dispatch
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/dhcp_rpc_base.py, line 139, in get_dhcp_port
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
dict(port=port))
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 588, in update_port
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
original_port = super(Ml2Plugin, self).get_port(context, id)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 1454, in get_port
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp port 
= self._get_port(context, id)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 266, in _get_port
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
raise q_exc.PortNotFound(port_id=id)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
PortNotFound: Port d68c27dd-210b-4b10-9c41-40ba01aa0fd3 could not be found

  This is because the try-except only looks for exc.NoResultFound, which is
  obviously a mistake; a minimal sketch of the fix follows below.
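
  A minimal sketch of the wider except clause (import paths assumed; not the
  merged neutron change):

  from sqlalchemy.orm import exc as sa_exc

  from neutron.common import exceptions as n_exc

  def safe_update_port(plugin, context, port_id, port_body):
      try:
          return plugin.update_port(context, port_id, {'port': port_body})
      except (sa_exc.NoResultFound, n_exc.PortNotFound):
          # The port vanished between lookup and update; let the caller
          # fall back to creating a fresh DHCP port.
          return None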

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1252437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258585] Re: fwaas_driver.ini missing from setup.cfg

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New => Fix Committed

** Changed in: neutron/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258585

Title:
  fwaas_driver.ini missing from setup.cfg

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  fwaas_driver.ini is missing from setup.cfg

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251874] Re: reduce severity of network notfound trace when looked up by dhcp agent

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New => Fix Committed

** Changed in: neutron/havana
Milestone: None => 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251874

Title:
  reduce severity of network notfound trace when looked up by dhcp agent

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  Neutron Server log has a gazillion of these traces:

  2013-11-15 00:40:31.639 8016 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 438, in 
_process_data
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 44, in dispatch
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/dhcp_rpc_base.py, line 150, in get_dhcp_port
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
network = plugin.get_network(context, network_id)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 352, in get_network
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
result = super(Ml2Plugin, self).get_network(context, id, None)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 1013, in 
get_network
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
network = self._get_network(context, id)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 252, in 
_get_network
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
raise q_exc.NetworkNotFound(net_id=id)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
NetworkNotFound: Network 6f199bbe-75ad-429a-ac7e-9c49bc389be5 could not be found
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp

  These are about the dhcp agent wanting to sync its local state with the
  state on the server. But an unfound network should be tolerated and no
  exception trace should be reported.
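
  A rough sketch of that tolerant behaviour, using the names from the
  traceback above (LOG and the surrounding variables are assumed; this is
  not the actual patch):

      from neutron.common import exceptions as n_exc

      try:
          network = plugin.get_network(context, network_id)
      except n_exc.NetworkNotFound:
          LOG.warning("Network %s could not be found, it may have been "
                      "deleted concurrently.", network_id)
          return None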

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1221419] Re: unable to ping floating ip from fixed_ip association

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1221419

Title:
  unable to ping floating ip from fixed_ip association

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  Currently, if one checks out a floating IP, the instance that is
  associated with that floating IP is unable to ping it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1221419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243888] Re: neutron-check-nvp-config failure

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1243888

Title:
  neutron-check-nvp-config failure

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  neutron-check-nvp-config fails with error:

  
  2013-10-23 12:06:40.937 2495 CRITICAL neutron [-] main() takes exactly 1 
argument (0 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1243888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245885] Re: Mellanox Neutron Agent is using keystone port

2014-02-13 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245885

Title:
  Mellanox Neutron Agent is using keystone port

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  The Mellanox Neutron agent is configured by default to contact the eSwitch
  Daemon using port 5001, which is used by keystone.
  It should be changed to use port 60001.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262223] Re: Wrap call to extension_supported on Launch Instance with try/except

2014-02-13 Thread Alan Pevec
** Also affects: horizon/havana
   Importance: Undecided
   Status: New

** Changed in: horizon/havana
   Status: New = Fix Committed

** Changed in: horizon/havana
Milestone: None = 2013.2.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1262223

Title:
  Wrap call to extension_supported on Launch Instance with try/except

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed

Bug description:
  
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L142

  if api.nova.extension_supported("BlockDeviceMappingV2Boot",
                                  request):
      source_type_choices.append(("volume_image_id",
          _("Boot from image (creates a new volume).")))
      source_type_choices.append(("volume_snapshot_id",

  
  The extension_supported call can fail; we need to wrap it with a try-except.
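
  A rough sketch of that wrapping, reusing the names from the snippet above
  (api, request, source_type_choices and _ come from the quoted code; this
  is not the merged horizon change):

      try:
          bdm_v2_supported = api.nova.extension_supported(
              "BlockDeviceMappingV2Boot", request)
      except Exception:
          # If the extension lookup itself fails, fall back to not offering
          # the volume-backed boot sources instead of breaking the form.
          bdm_v2_supported = False

      if bdm_v2_supported:
          source_type_choices.append(("volume_image_id",
              _("Boot from image (creates a new volume).")))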

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1262223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279917] [NEW] keystoneclient users.find errors out with KeyError: users

2014-02-13 Thread Steve Ivy
Public bug reported:

Given a basic test script hitting openstack running in a virtualbox:

import keystoneclient

from keystoneclient.v3 import client
keystone = client.Client(user_domain_name='default',
 username='admin',
 password='secret',
 auth_url='http://localhost:35357/v3',
 tenant_name='admin'
 )

user = keystone.users.find(name='admin')

This call errors out with:

Traceback (most recent call last):
  File /home/webdev/tmp/keystone_tests.py, line 28, in module
user = keystone.users.find(name='admin')
  File /usr/lib/python2.7/dist-packages/keystoneclient/base.py, line 72, 
in func
return f(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/keystoneclient/base.py, line 296, 
in find
self.collection_key)
  File /usr/lib/python2.7/dist-packages/keystoneclient/base.py, line 98, 
in _list
data = body[response_key]
KeyError: 'users'

Adding a print statement to print the body of the response in
base.py:Manager yields:

{u'user': {u'email': u'ad...@example.org', u'tenantId': 
u'a7ba50e02ba641279a2b919f4f824bee', u'enabled': True, u'name': 
u'admin', u'id': u'1d1c161414c34a8baa50a46ab5e15924'}}

The keystone v3 api collection returns a dictionary with a 'user' key,
with a dictionary value, rather than a 'users' key (matching the
collection name) with a list value, which the generic Manager code
expects.

The keystone v3 api should return consistent responses.
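
A minimal, standalone illustration of the mismatch that triggers the KeyError
(this mirrors the Manager._list logic quoted in the traceback; it is not
keystoneclient code, and the tolerant parse at the end is only a hypothetical
workaround):

    response_key = 'users'                    # collection key find() expects
    body = {'user': {'name': 'admin',         # shape the server returned
                     'id': '1d1c161414c34a8baa50a46ab5e15924'}}

    try:
        data = body[response_key]             # raises KeyError: 'users'
    except KeyError:
        # A tolerant client would have to accept either shape:
        data = body.get('users') or [body['user']]
    print(data)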

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: keystoneapi keystoneclient v3

** Attachment added: simple script that demonstrates the issue
   
https://bugs.launchpad.net/bugs/1279917/+attachment/3979895/+files/keystone_tests.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1279917

Title:
  keystoneclient users.find errors out with KeyError: users

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Given a basic test script hitting openstack running in a virtualbox:

  import keystoneclient

  from keystoneclient.v3 import client
  keystone = client.Client(user_domain_name='default',
   username='admin',
   password='secret',
   auth_url='http://localhost:35357/v3',
   tenant_name='admin'
   )

  user = keystone.users.find(name='admin')

  This call errors out with:

  Traceback (most recent call last):
File /home/webdev/tmp/keystone_tests.py, line 28, in module
  user = keystone.users.find(name='admin')
File /usr/lib/python2.7/dist-packages/keystoneclient/base.py, line 
72, in func
  return f(*args, **kwargs)
File /usr/lib/python2.7/dist-packages/keystoneclient/base.py, line 
296, in find
  self.collection_key)
File /usr/lib/python2.7/dist-packages/keystoneclient/base.py, line 
98, in _list
  data = body[response_key]
  KeyError: 'users'

  Adding a print statement to print the body of the response in
  base.py:Manager yields:

  {u'user': {u'email': u'ad...@example.org', u'tenantId': 
u'a7ba50e02ba641279a2b919f4f824bee', u'enabled': True, u'name': 
  u'admin', u'id': u'1d1c161414c34a8baa50a46ab5e15924'}}

  The keystone v3 api collection returns a dictionary with a 'user' key,
  with a dictionary value, rather than a 'users' key (matching the
  collection name) with a list value, which the generic Manager code
  expects.

  The keystone v3 api should return consistent responses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1279917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274772] Re: libvirt.txt in gate is full of error messages

2014-02-13 Thread Launchpad Bug Tracker
This bug was fixed in the package libvirt - 1.2.1-0ubuntu7

---
libvirt (1.2.1-0ubuntu7) trusty; urgency=low

  * debian/patches/nwfilter-locking.patch: Dropped causes ftbfs.
 -- Chuck Short zul...@ubuntu.com   Thu, 13 Feb 2014 10:07:56 -0700

** Changed in: libvirt (Ubuntu)
   Status: New = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274772

Title:
  libvirt.txt in gate is full of error messages

Status in OpenStack Compute (Nova):
  Triaged
Status in “libvirt” package in Ubuntu:
  Fix Released

Bug description:
  http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-
  full/4860441/logs/libvirtd.txt.gz is full of errors such as:

  
  2014-01-30 22:40:04.255+: 9228: error : virNetDevGetIndex:656 : Unable to 
get index for interface vnet0: No such device

  2014-01-30 22:13:14.464+: 9227: error : virExecWithHook:327 :
  Cannot find 'pm-is-supported' in path: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277790] Re: boto 2.25 causing unit tests to fail

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277790

Title:
  boto 2.25 causing unit tests to fail

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  A new version of boto was released (2.25) that causes nova unit tests
  to fail.

  http://logs.openstack.org/03/71503/1/gate/gate-nova-
  python27/4e66adf/console.html

  
  FAIL: nova.tests.test_objectstore.S3APITestCase.test_unknown_bucket
  tags: worker-1
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  INFO [nova.wsgi] S3 Objectstore listening on 127.0.0.1:59755
  INFO [nova.S3 Objectstore.wsgi.server] (7108) wsgi starting up on 
http://127.0.0.1:59755/
  INFO [nova.S3 Objectstore.wsgi.server] 127.0.0.1 HEAD /falalala/ HTTP/1.1 
status: 200 len: 115 time: 0.0005140
  INFO [nova.wsgi] Stopping WSGI server.
  }}}

  Traceback (most recent call last):
File nova/tests/test_objectstore.py, line 133, in test_unknown_bucket
  bucket_name)
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 393, in assertRaises
  self.assertThat(our_callable, matcher)
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 406, in assertThat
  raise mismatch_error
  MismatchError: bound method S3Connection.get_bucket of 
S3Connection:127.0.0.1 returned Bucket: falalala

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265512] Re: VMware: unnecesary session termination

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265512

Title:
  VMware: unnecesary session termination

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In some cases, the session with the VC is terminated and restarted again.
  This can happen, for example, when the user runs:
  nova list (and there are no running VMs)
  In addition to the restart of the session, the operation also waits 2 seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270654] Re: test_different_fname_concurrency flakey fail

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270654

Title:
  test_different_fname_concurrency flakey fail

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Looks like test_different_fname_concurrency has an intermittent fail

  ft1.9289: 
nova.tests.virt.libvirt.test_libvirt.CacheConcurrencyTestCase.test_different_fname_concurrency_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File nova/tests/virt/libvirt/test_libvirt.py, line 319, in 
test_different_fname_concurrency
  self.assertTrue(done2.ready())
File /usr/lib/python2.7/unittest/case.py, line 420, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true

  Full logs here: http://logs.openstack.org/91/58191/4/check/gate-nova-
  python27/413d398/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258256] Re: Live upgrade from Havana broken by commit 62e9829

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258256

Title:
  Live upgrade from Havana broken by commit 62e9829

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Commit 62e9829 inadvertently broke live upgrades from Havana to
  master. This was not really related to the patch itself, other than
  that it bumped the Instance version which uncovered a bunch of issues
  in the object infrastructure that weren't yet ready to handle this
  properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257355] Re: live migration fails when using non-image backed disk

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257355

Title:
  live migration fails when using non-image backed disk

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Running live migration with --block-migrate fails if the disk was
  resized before (i.e. detached from the CoW image). This is because
  nova.virt.libvirt.driver.py uses disk_size, not virt_disk_size, for
  re-creating the qcow2 file on the destination host. In the case of qcow2
  files, however, qemu-img needs the virt_disk_size passed down, otherwise
  the block migration step will not be able to convert all blocks.
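
  A rough sketch of the intended behaviour (standalone, with assumed helper
  and key names; this is not the nova driver code):

      import subprocess

      def recreate_destination_disk(path, disk_info):
          # disk_info['disk_size'] is what is allocated on the source;
          # disk_info['virt_disk_size'] is the guest-visible size. A resized
          # qcow2 must be recreated with the virtual size, or block
          # migration cannot copy all blocks.
          size = disk_info['virt_disk_size']
          subprocess.check_call(
              ['qemu-img', 'create', '-f', 'qcow2', path, str(size)])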

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273455] Re: stevedore 0.14 changes _load_plugins parameter list, mocking breaks

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273455

Title:
  stevedore 0.14 changes _load_plugins parameter list, mocking breaks

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Committed
Status in Manage plugins for Python applications:
  Fix Released

Bug description:
  In stevedore 0.14 the signature of _load_plugins changed: it now takes
  an extra parameter. The nova and ceilometer unit tests mocked it with the
  old signature, which is causing breaks in the gate.
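
  A rough sketch of a signature-agnostic way to stub it in unit tests
  (assumes stevedore's ExtensionManager; this is not the actual
  nova/ceilometer fix):

      import mock
      from stevedore import extension

      def _fake_load_plugins(self, *args, **kwargs):
          # Accept whatever arguments the installed stevedore passes, so
          # the stub keeps working when new parameters are added.
          return []

      with mock.patch.object(extension.ExtensionManager, '_load_plugins',
                             _fake_load_plugins):
          mgr = extension.ExtensionManager('example.namespace')
          assert list(mgr) == []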

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1273455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260123] Re: libvirt wait_for_block_job_info can infinitely loop

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260123

Title:
  libvirt wait_for_block_job_info can infinitely loop

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Callers of wait_for_block_job_info may loop infinitely if the job
  doesn't exist, since libvirt returns an empty dict which this function
  interprets as cur=0 and end=0 => return True.

  I think it should do:

  if not any(status):
      return False
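
  In context, that check might sit in the polling helper roughly like this
  (assumed names; not the merged nova patch):

      def block_job_in_progress(domain, disk_path):
          status = domain.blockJobInfo(disk_path, 0)
          if not status:
              # libvirt returns an empty dict once the job no longer
              # exists; report "not in progress" so callers stop polling
              # instead of reading cur=0/end=0 and looping forever.
              return False
          return status.get('cur') != status.get('end')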

  
  Affects online deletion of Cinder GlusterFS snapshots, and possibly other 
callers of this (live_snapshot).

  
  See http://libvirt.org/git/?p=libvirt-python.git;a=commit;h=f8bc3a9ccc

  (Encountered issue on Fedora 19 w/ virt-preview repo, libvirt 1.1.3.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258253] Re: Compute rpc breaks live upgrade from havana to icehouse

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258253

Title:
  Compute rpc breaks live upgrade from havana to icehouse

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  We are attempting to support live upgrades from the Havana to Icehouse
  release.  Some changes to the compute rpc API need to be backported to
  Havana to make this work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270693] Re: _last_vol_usage_poll was not properly updated

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270693

Title:
  _last_vol_usage_poll was not properly updated

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I found this error in Jenkins's log:
  http://logs.openstack.org/02/67402/4/check/gate-nova-
  python27/b132ac8/console.html

  2014-01-20 02:36:59.295 | 
==
  2014-01-20 02:36:59.295 | FAIL: 
nova.tests.compute.test_compute.ComputeVolumeTestCase.test_poll_volume_usage_with_data
  2014-01-20 02:36:59.295 | tags: worker-0
  2014-01-20 02:36:59.296 | 
--
  2014-01-20 02:36:59.296 | Empty attachments:
  2014-01-20 02:36:59.296 |   stderr
  2014-01-20 02:36:59.296 |   stdout
  2014-01-20 02:36:59.297 | 
  2014-01-20 02:36:59.297 | pythonlogging:'': {{{
  2014-01-20 02:36:59.297 | INFO [nova.virt.driver] Loading compute driver 
'nova.virt.fake.FakeDriver'
  2014-01-20 02:36:59.298 | AUDIT [nova.compute.resource_tracker] Auditing 
locally available compute resources
  2014-01-20 02:36:59.298 | AUDIT [nova.compute.resource_tracker] Free ram 
(MB): 7680
  2014-01-20 02:36:59.298 | AUDIT [nova.compute.resource_tracker] Free disk 
(GB): 1028
  2014-01-20 02:36:59.299 | AUDIT [nova.compute.resource_tracker] Free VCPUS: 1
  2014-01-20 02:36:59.299 | INFO [nova.compute.resource_tracker] 
Compute_service record created for fake-mini:fakenode1
  2014-01-20 02:36:59.299 | AUDIT [nova.compute.manager] Deleting orphan 
compute node 2
  2014-01-20 02:36:59.300 | }}}
  2014-01-20 02:36:59.300 | 
  2014-01-20 02:36:59.300 | Traceback (most recent call last):
  2014-01-20 02:36:59.300 |   File nova/tests/compute/test_compute.py, line 
577, in test_poll_volume_usage_with_data
  2014-01-20 02:36:59.301 | self.compute._last_vol_usage_poll)
  2014-01-20 02:36:59.301 |   File /usr/lib/python2.7/unittest/case.py, line 
420, in assertTrue
  2014-01-20 02:36:59.301 | raise self.failureException(msg)
  2014-01-20 02:36:59.302 | AssertionError: _last_vol_usage_poll was not 
properly updated 1390185067.18

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271331] Re: unit test failure in gate nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271331

Title:
  unit test failure in gate
  nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  We are occasionally seeing the test
  nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping fail in the
  gate due to

  
  Traceback (most recent call last):
File nova/tests/db/test_sqlite.py, line 53, in test_big_int_mapping
  output, _ = utils.execute(get_schema_cmd, shell=True)
File nova/utils.py, line 166, in execute
  return processutils.execute(*cmd, **kwargs)
File nova/openstack/common/processutils.py, line 168, in execute
  result = obj.communicate()
File /usr/lib/python2.7/subprocess.py, line 754, in communicate
  return self._communicate(input)
File /usr/lib/python2.7/subprocess.py, line 1314, in _communicate
  stdout, stderr = self._communicate_with_select(input)
File /usr/lib/python2.7/subprocess.py, line 1438, in 
_communicate_with_select
  data = os.read(self.stdout.fileno(), 1024)
  OSError: [Errno 11] Resource temporarily unavailable

  
  logstash query: message:FAIL: 
nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5kYi50ZXN0X3NxbGl0ZS5UZXN0U3FsaXRlLnRlc3RfYmlnX2ludF9tYXBwaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAzMzk1MTU1NDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251590] Re: [OSSA 2014-003] Live migration can leak root disk into ephemeral storage (CVE-2013-7130)

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251590

Title:
  [OSSA 2014-003] Live migration can leak root disk into ephemeral
  storage (CVE-2013-7130)

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  During pre-live-migration required disks are created along with their
  backing files (if they don't already exist). However, the ephemeral
  backing file is created from a glance downloaded root disk.

  # If the required ephemeral backing file is present then there's no
  issue.

  # If the required ephemeral backing file is not already present, then
  the root disk is downloaded and saved as the ephemeral backing file.
  This will result in the following situations:

  ## The disk.local transferred during live-migration will be rebased on the 
ephemeral backing file so regardless of the content, the end result will be 
identical to the source disk.local.
  ## However, if a new instance of the same flavor is spawned on this compute 
node, then it will have an ephemeral storage that exposes a root disk.

  Security concerns:

  If the migrated VM was spawned off a snapshot, now it's possible for
  any instances of the correct flavor to see the snapshot contents of
  another user via the ephemeral storage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245719] Re: RBD backed instance can't shutdown and restart

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245719

Title:
  RBD backed instance can't shutdown and restart

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Ubuntu:
  Confirmed

Bug description:
  Version: Havana w/ Ubuntu Repos. with Ceph for RBD.

  
  When launching an instance with Boot from image (Creates a new volume),
  the instance is created fine and all is well; however, if you shut the
  instance down, it cannot be turned back on again.

  
  I get the following error in the nova-compute.log when trying to power on
  a shut-down instance.

  
###
  2013-10-29 00:48:33.859 2746 WARNING nova.compute.utils 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] [instance: 
cc370f6d-4be0-4cd3-9f20-bf86f5ad7c09] Can't access image $
  2013-10-29 00:48:34.040 2746 WARNING nova.virt.libvirt.vif 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] Deprecated: The LibvirtHybridOVSBridgeDriver 
VIF driver is now de$
  2013-10-29 00:48:34.578 2746 ERROR nova.openstack.common.rpc.amqp 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] Exception during message handling
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 353, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 90, in wrapped
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 73, in wrapped
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 243, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 229, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 294, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 271, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 258, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1832, in 
start_instance
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp 
self._power_on(context, instance)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1819, in 
_power_on
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp 
block_device_info)
  2013-10-29 

[Yahoo-eng-team] [Bug 1251123] Re: _update_user_list_with_cas causes significant overhead (when using memcached as token store backend)

2014-02-13 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1251123

Title:
  _update_user_list_with_cas causes significant overhead (when using
  memcached as token store backend)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  Fix Released

Bug description:
  [Problem statement]
  In Havana, when using memcached as the backend of the token store, we have
  been seeing a significant performance drop compared with Grizzly.

  [How to reproduce]
  We used a Python script to boot VMs at the rate of 1 VM per second. We have
  seen a lot of VM creations fail, and the keystone-all process's CPU
  utilization was nearly 100%.

  [Analysis]
  When using memcached as token's backend, keystone stores two types of K-V 
pairs into memcached.
 
 token_id === token data (associated with a TTL)

 user_id  === a list of ids for tokens that belong to the user

  When creating a new token, Keystone first adds the (token_id, data)
  pair into memcahce, and then update the (user_id, token_id_list) pair
  in function _update_user_list_with_cas.

  What _update_user_list_with_cas does are:
  1. retrieve the old list
  2. for each token_id in the old list, retrieve the token data to check 
whether it is expired or not.
  3. discard the expired tokens, add the valid token_ids to a new list
  4. append the newly created token's id to the new list too.
  5. use memcached's Compare-And-Set function to replace the old list with 
the new list

  In practice we have found it is very common for a user to have thousands
  of valid tokens at a given moment, so step 2 consumes a lot of time.
  What's worse is that CAS tends to end up with failure and retry, which
  makes this function even less efficient.

  [Proposed fix]
  I'd like to propose a 'lazy cleanup of expired token_ids from the user list' 
solution.

  The idea is to avoid doing the cleanup EVERY TIME a new token is created
  (a rough sketch follows at the end of this description). We can set a
  dynamic threshold T for each user, and the cleanup job will be triggered
  only when the number of token_ids exceeds the threshold T. After every
  cleanup, it will check how many token_ids have been cleaned up; if the
  percentage is lower than a pre-specified P, then T needs to be increased
  to T*(1+P) to avoid too frequent clean-ups.

  Besides, every time the list_tokens function for a given user is
  called, it will always trigger a clean-up action. It is necessary to
  ensure list_tokens always returns valid tokens only.
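
  A rough sketch of the thresholding idea (standalone helper with assumed
  names; CAS handling is omitted, and this is not the actual keystone
  backend code):

      def update_user_token_list(memcache, user_key, new_token_id,
                                 threshold, shrink_ratio):
          token_ids = memcache.get(user_key) or []
          token_ids.append(new_token_id)
          if len(token_ids) > threshold:
              before = len(token_ids)
              # Keep only ids whose token entry is still in the cache
              # (expired tokens disappear with their TTL).
              token_ids = [t for t in token_ids
                           if memcache.get(t) is not None]
              cleaned = before - len(token_ids)
              if float(cleaned) / before < shrink_ratio:
                  # Too few expired entries were found; grow the threshold
                  # so the expensive scan runs less often (the T*(1+P)
                  # rule described above).
                  threshold = int(threshold * (1 + shrink_ratio))
          memcache.set(user_key, token_ids)
          return threshold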

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1251123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253905] Re: Keystone doesn't handle UTF8 in exceptions

2014-02-13 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1253905

Title:
  Keystone doesn't handle UTF8 in exceptions

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Released

Bug description:
  Originally reported:
  https://bugzilla.redhat.com/show_bug.cgi?id=1033190

  Description of problem:

  [root@public-control1 ~]# keystone tenant-create --name Consulting – 
Middleware Delivery 
  Unable to communicate with identity service: {error: {message: An 
unexpected error prevented the server from fulfilling your request. 'ascii' 
codec can't encode character u'\\u2013' in position 11: ordinal not in 
range(128), code: 500, title: Internal Server Error}}. (HTTP 500)

  
  NB: the dash in the name is not an ascii dash.  It's something else.

  Version-Release number of selected component (if applicable):

  openstack-keystone-2013.1.3-2.el6ost.noarch

  How reproducible:

  Every

  
  Additional info:

  Performing the same command on a Folsom cloud works just fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1253905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244842] Re: NoopQuotaDriver returns usages incorrect format

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244842

Title:
  NoopQuotaDriver returns usages incorrect format

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  get_project_usages and get_user_usages should return a dictionary
  instead of an integer.

  The form should be dict(limit=-1).
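
  A rough sketch of the expected shape (the method names come from above;
  the resource iteration is an assumption, not the merged patch):

      def get_user_usages(context, resources, project_id, user_id):
          # One dict per resource, e.g. {'instances': {'limit': -1}, ...},
          # rather than a bare integer.
          return dict((resource, dict(limit=-1)) for resource in resources)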

  Associated traceback: http://paste.openstack.org/show/49790/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245502] Re: Grizzly - Havana nova upgrade failure: Cannot drop index 'instance_uuid'

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245502

Title:
  Grizzly - Havana nova upgrade failure: Cannot drop index
  'instance_uuid'

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I was running Ubuntu 13.04 and upgraded to 13.10.  I was running the
  Ubuntu Precise 1:2013.1.3-0ubuntu1.1 on 13.04 and am now on 13.10's
  provided 1:2013.2~rc2-0ubuntu1.

  After getting the box up, migrating the nova database failed with the
  below error.  I am using MySQL.

  
  # nova-manage -v db sync
  2013-10-27 01:47:03.615 24457 INFO migrate.versioning.api [-] 161 - 162...
  2013-10-27 01:47:03.673 24457 INFO migrate.versioning.api [-] done
  ...
  ...
  2013-10-27 01:47:16.373 24457 INFO migrate.versioning.api [-] 184 - 185...
  Command failed, please check log for more info
  2013-10-27 01:47:17.835 24457 CRITICAL nova [-] (OperationalError) (1553, 
Cannot drop index 'instance_uuid': needed in a foreign key constraint) 'ALTER 
TABLE instance_info_caches DROP INDEX instance_uuid' ()
  2013-10-27 01:47:17.835 24457 TRACE nova Traceback (most recent call last):
  2013-10-27 01:47:17.835 24457 TRACE nova   File /usr/bin/nova-manage, line 
10, in module
  2013-10-27 01:47:17.835 24457 TRACE nova sys.exit(main())
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/cmd/manage.py, line 1377, in main
  2013-10-27 01:47:17.835 24457 TRACE nova ret = fn(*fn_args, **fn_kwargs)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/cmd/manage.py, line 885, in sync
  2013-10-27 01:47:17.835 24457 TRACE nova return migration.db_sync(version)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/migration.py, line 33, in db_sync
  2013-10-27 01:47:17.835 24457 TRACE nova return 
IMPL.db_sync(version=version)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py, line 75, in 
db_sync
  2013-10-27 01:47:17.835 24457 TRACE nova return 
versioning_api.upgrade(get_engine(), repository, version)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/versioning/api.py, line 186, in 
upgrade
  2013-10-27 01:47:17.835 24457 TRACE nova return _migrate(url, repository, 
version, upgrade=True, err=err, **opts)
  2013-10-27 01:47:17.835 24457 TRACE nova   File string, line 2, in 
_migrate
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py, line 40, in 
patched_with_engine
  2013-10-27 01:47:17.835 24457 TRACE nova return f(*a, **kw)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/versioning/api.py, line 366, in 
_migrate
  2013-10-27 01:47:17.835 24457 TRACE nova schema.runchange(ver, change, 
changeset.step)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py, line 91, in 
runchange
  2013-10-27 01:47:17.835 24457 TRACE nova change.run(self.engine, step)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/versioning/script/py.py, line 145, 
in run
  2013-10-27 01:47:17.835 24457 TRACE nova script_func(engine)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migrate_repo/versions/185_rename_unique_constraints.py,
 line 129, in upgrade
  2013-10-27 01:47:17.835 24457 TRACE nova return 
_uc_rename(migrate_engine, upgrade=True)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migrate_repo/versions/185_rename_unique_constraints.py,
 line 112, in _uc_rename
  2013-10-27 01:47:17.835 24457 TRACE nova old_name, *(columns))
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/utils.py, line 197, in 
drop_unique_constraint
  2013-10-27 01:47:17.835 24457 TRACE nova uc.drop()
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/changeset/constraint.py, line 59, in 
drop
  2013-10-27 01:47:17.835 24457 TRACE nova 
self.__do_imports('constraintdropper', *a, **kw)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/changeset/constraint.py, line 32, in 
__do_imports
  2013-10-27 01:47:17.835 24457 TRACE nova run_single_visitor(engine, 
visitorcallable, self, *a, **kw)
  2013-10-27 01:47:17.835 24457 TRACE nova   File 
/usr/lib/python2.7/dist-packages/migrate/changeset/databases/visitor.py, line 
75, in run_single_visitor
  2013-10-27 01:47:17.835 24457 TRACE nova 

[Yahoo-eng-team] [Bug 1244018] Re: update security group raise HttpError500 exception

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244018

Title:
  update security group raise HttpError500 exception

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  1.Set the item security_group_api=nova in nova.conf
  2.Restart nova
  3.Create a security group
  4.Update the security group
 PUT 
http://192.168.83.241:8774/v2/99a7b3d4bd6540aaaceae89ac74bfab6/os-security-groups/7
 {
  security_group: {
  name: huangtianhua,
  description:for test
  }
 }
  5.The server raises an exception as below:
 {
  computeFault: {
  message: The server has either erred or is incapable of performing 
the requested operation.,
  code: 500
 }
 }
  6.I think it's a bug. Traversing the rules of the group before returning
  throws the error:
     DetachedInstanceError: Parent instance <SecurityGroup at 0x789eed0>
     is not bound to a Session; lazy load operation of attribute 'rules'
     cannot proceed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244018/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241337] Re: VM do not resume if attach an volume when suspended

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241337

Title:
  VM do not resume if attach an volume when suspended

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  
  1) nova suspend vm2
  2) nova attach vm2 6ac2e985-9586-438f-a027-bc9591fa5b43 /dev/sdb
  3) nova volume-attach vm2 6ac2e985-9586-438f-a027-bc9591fa5b43 /dev/sdb
  4) nova resume vm2

  The VM failed to resume and nova-compute reported the following errors.

  2013-10-18 12:16:33.175 ERROR nova.openstack.common.rpc.amqp 
[req-a8b196e3-dbd5-45f4-814e-56a715b07fdf admin admin] Exception during message 
handling
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in _process_data
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, in dispatch
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 354, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 90, in wrapped
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 73, in wrapped
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 244, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 230, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 295, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 272, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 259, in decorated_function
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 3314, in resume_instance
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
block_device_info)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 1969, in resume
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
block_device_info)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 3206, in 
_create_domain_and_network
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
{'connection_info': jsonutils.dumps(connection_info)})
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 420, in 
block_device_mapping_update
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp context, 
bdm_id, values)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/conductor/api.py, line 170, in 
block_device_mapping_update
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp context, 
values, create=False)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/conductor/rpcapi.py, line 244, in 
block_device_mapping_update_or_create
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp 
values=values, create=create)
  2013-10-18 12:16:33.175 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/rpcclient.py, line 85, in call
  2013-10-18 

[Yahoo-eng-team] [Bug 1222979] Re: Errored instance can't be deleted if volume deleted first

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1222979

Title:
  Errored instance can't be deleted if volume deleted first

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  1. Create a bootable volume   nova volume-create --image-id image_id 10
  2. Boot a vm using the volume created in step 1  nova boot --flavor 1 
--image image_id --block-device-mapping vda=vol_id:::0 instance1

  If the instance fails to spawn in step 2, the instance ends up in an
  ERROR state. The volume goes back to available.  The hard part is
  creating a situation in which step 2 fails.  One way is to create
  enough quantum ports to exceed your port quota prior to attempting to
  spawn the instance.

  3. Delete the volume.
  4. Attempt to delete the instance. An exception gets thrown by
  driver.destroy because the volume is not found, but the exception is not
  ignored and the instance can never be deleted. Exceptions from
  _cleanup_volumes get ignored for this same reason. I think another
  exception handler needs to be added to also ignore VolumeNotFound from
  driver.destroy.

  I've reproduced this with current code from trunk.
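
  A rough sketch of the extra handler being suggested (assumed surrounding
  names; this is not the merged patch):

      from nova import exception

      try:
          driver.destroy(instance, network_info, block_device_info)
      except exception.VolumeNotFound:
          # The boot volume was already deleted out-of-band; keep going so
          # the errored instance can still be removed.
          pass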

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1222979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1214850] Re: vmware driver does not work with more than one datacenter in vC

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1214850

Title:
  vmware driver does not work with more than one datacenter in vC

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  CreateVM_Task, vm_folder_ref,
  config=config_spec, pool=res_pool_ref)

  specifies a vm_folder_ref that has no relationship to the datastore.

  This may lead to VM construction and placement errors.

  NOTE: code selects the 0th datacenter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1214850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223859] Re: Network cache not correctly updated during interface-attach

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1223859

Title:
  Network cache not correctly updated during interface-attach

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The network cache is not correctly updated when running nova
  interface-attach: only the latest allocated IP is used. See this log:

  http://paste.openstack.org/show/46643/

  Nevermind the error reported when running nova interface-attach: I
  believe it is an unrelated issue, and I'll write another bug report
  for it.

  I noticed this issue a few months ago, but haven't had time to work on
  it. I'll try and submit a patch ASAP. See my analysis of the issue
  here: https://bugs.launchpad.net/nova/+bug/1197192/comments/3

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1223859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180044] Re: nova failures when vCenter has multiple datacenters

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180044

Title:
  nova failures when vCenter has multiple datacenters

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  Fix Committed
Status in “nova” package in Ubuntu:
  Confirmed

Bug description:
  The method at vmops.py _get_datacenter_ref_and_name does not calculate
  datacenter properly.

  def _get_datacenter_ref_and_name(self):
      """Get the datacenter name and the reference."""
      dc_obj = self._session._call_method(vim_util, "get_objects",
                                          "Datacenter", ["name"])
      vm_util._cancel_retrieve_if_necessary(self._session, dc_obj)
      return dc_obj.objects[0].obj, dc_obj.objects[0].propSet[0].val

  This will not be correct on systems with more than one datacenter.
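
  A hedged sketch of what a datacenter lookup keyed on the target datastore
  could look like (helper name, property handling and the suds result shapes
  are assumptions here, not the actual fix):

    from nova.virt.vmwareapi import vim_util

    def get_datacenter_for_datastore(session, datastore_ref):
        # Walk all datacenters and pick the one that actually contains the
        # datastore, instead of blindly taking objects[0].
        dcs = session._call_method(vim_util, "get_objects",
                                   "Datacenter", ["name", "datastore"])
        for dc in dcs.objects:
            name, datastores = None, []
            for prop in dc.propSet:
                if prop.name == "name":
                    name = prop.val
                elif prop.name == "datastore":
                    datastores = prop.val.ManagedObjectReference
            if any(ds.value == datastore_ref.value for ds in datastores):
                return dc.obj, name
        raise LookupError("datastore not found in any datacenter")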

  Stack trace from logs:

  ERROR nova.compute.manager [req-9395fe41-cf04-4434-bd77-663e93de1d4a
  foo bar] [instance: 484a42a2-642e-4594-93fe-4f72ddad361f] Error:
  ['Traceback (most recent call last):\n', '  File
  /opt/stack/nova/nova/compute/manager.py, line 942, in
  _build_instance\nset_access_ip=set_access_ip)\n', '  File
  /opt/stack/nova/nova/compute/manager.py, line 1204, in _spawn\n
  LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
  '  File /usr/lib/python2.7/contextlib.py, line 24, in __exit__\n
  self.gen.next()\n', '  File /opt/stack/nova/nova/compute/manager.py,
  line 1200, in _spawn\nblock_device_info)\n', '  File
  /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 176, in spawn\n
  block_device_info)\n', '  File
  /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 208, in spawn\n
  _execute_create_vm()\n', '  File
  /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 204, in
  _execute_create_vm\n
  self._session._wait_for_task(instance[\'uuid\'], vm_create_task)\n', '
  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 559, in
  _wait_for_task\nret_val = done.wait()\n', '  File
  /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116,
  in wait\nreturn hubs.get_hub().switch()\n', '  File
  /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line
  187, in switch\nreturn self.greenlet.switch()\n', 'NovaException:
  A specified parameter was not correct. \nspec.location.folder\n']

  vCenter error is:
  A specified parameter was not correct. spec.location.folder

  Work around:
  use only one datacenter, use only one cluster, turn on DRS

  Additional failures:
  2013-07-18 10:59:12.788 DEBUG nova.virt.vmwareapi.vmware_images 
[req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 
0e1771f8db984a3599596fae62609d9a] [instance: 
5b3961b6-38d9-409c-881e-fe50f67b1539] Got image size of 687865856 for the image 
cde14862-60b8-4360-a145-06585b06577c get_vmdk_size_and_properties 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmware_images.py:156
  2013-07-18 10:59:12.963 WARNING nova.virt.vmwareapi.network_util 
[req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 
0e1771f8db984a3599596fae62609d9a] [(ManagedObjectReference){
     value = network-1501
     _type = Network
   }, (ManagedObjectReference){
     value = network-1458
     _type = Network
   }, (ManagedObjectReference){
     value = network-2085
     _type = Network
   }, (ManagedObjectReference){
     value = network-1143
     _type = Network
   }]
  2013-07-18 10:59:13.326 DEBUG nova.virt.vmwareapi.vmops 
[req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 
0e1771f8db984a3599596fae62609d9a] [instance: 
5b3961b6-38d9-409c-881e-fe50f67b1539] Creating VM on the ESX host 
_execute_create_vm 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py:207
  2013-07-18 10:59:14.258 3145 DEBUG nova.openstack.common.rpc.amqp [-] Making 
synchronous call on conductor ... multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:583
  2013-07-18 10:59:14.259 3145 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID 
is 8ef36d061a9341a09d3a5451df798673 multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586
  2013-07-18 10:59:14.259 3145 DEBUG nova.openstack.common.rpc.amqp [-] 
UNIQUE_ID is 680b790574c64a9783fd2138c43f5f6d. _add_unique_id 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337
  2013-07-18 10:59:18.757 3145 WARNING nova.virt.vmwareapi.driver [-] Task 
[CreateVM_Task] (returnval){
     value = task-33558
     _type = Task
   } status: error The input arguments had entities that did not belong to the 
same datacenter.

  2013-07-18 10:59:18.758 ERROR nova.compute.manager 
[req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 

[Yahoo-eng-team] [Bug 1275062] Re: [OSSA 2014-004] sensitive info in image location is logged when authentication to single tenant swift store fails (CVE-2014-1948)

2014-02-13 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1275062

Title:
  [OSSA 2014-004] sensitive info in image location is logged when
  authentication to single tenant swift store fails (CVE-2014-1948)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  WARNING glance.store [-] Get image UUID data from {'url':
  u'swift+https://X@my_auth_url.com/v2.0/my-images/uuid,
  'metadata': {}} failed: Auth GET failed: https://my_auth_url.com
  RESP_CODE

  19:13:05.027  ERROR glance.store [-] Glance tried all locations to get
  data for image UUID but all have failed.
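
  One way to avoid leaking the swift auth key would be to scrub the credential
  portion of the location URL before it ever reaches a log call; this is only
  an illustration, not glance's actual fix:

    import re

    def scrub_swift_url(url):
        # Mask whatever sits between the last ':' of the userinfo and '@',
        # i.e. the auth key in swift+https://account:key@host/... locations.
        return re.sub(r'//([^:/@]+):[^@]+@', r'//\1:***@', url)

    scrub_swift_url("swift+https://tenant%3Auser:secretkey@auth.example.com"
                    "/v2.0/my-images/uuid")
    # -> 'swift+https://tenant%3Auser:***@auth.example.com/v2.0/my-images/uuid'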

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1275062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274005] Re: .tx/config in havana needs to catch up with Transifex resource renaming

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1274005

Title:
  .tx/config in havana needs to catch up with Transifex resource
  renaming

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in OpenStack I18n  L10n:
  New

Bug description:
  I18N team decided to maintain Horizon translations for Havana stable
  branch and Transifex resource names is renamed to *-havana.

  Horizon repository has .tx/config and .tx/config in stable/havana
  needs to be updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1274005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268614] Re: pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-13 Thread Alan Pevec
** Changed in: ceilometer/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268614

Title:
  pep8 gating fails due to tools/config/check_uptodate.sh

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Ceilometer havana series:
  Fix Released
Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I see several changes, including
  https://review.openstack.org/#/c/63735/ , that failed pep8 gating with an
  error from the check_uptodate tool:

  
  2014-01-13 14:06:39.643 | pep8 runtests: commands[1] | 
/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh
  2014-01-13 14:06:39.649 |   /home/jenkins/workspace/gate-nova-pep8$ 
/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh 
  2014-01-13 14:06:43.581 | 2741,2746d2740
  2014-01-13 14:06:43.581 |  # (optional) indicate whether to set the 
X-Service-Catalog
  2014-01-13 14:06:43.581 |  # header. If False, middleware will not ask for 
service
  2014-01-13 14:06:43.581 |  # catalog on token validation and will not set 
the X-Service-
  2014-01-13 14:06:43.581 |  # Catalog header. (boolean value)
  2014-01-13 14:06:43.581 |  #include_service_catalog=true
  2014-01-13 14:06:43.582 |  
  2014-01-13 14:06:43.582 | E: nova.conf.sample is not up to date, please run 
tools/config/generate_sample.sh
  2014-01-13 14:06:43.582 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh'

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1268614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268631] Re: Unit tests failing with raise UnknownMethodCallError('management_url')

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1268631

Title:
  Unit tests failing with raise UnknownMethodCallError('management_url')

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Committed
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  A number of unit tests are failing for every review, likely related to
  the release of keystoneclient 0.4.2:

  fungi i think python-keystoneclient==0.4.2 may have just broken horizon
  fungi looks like all python unit test runs for horizon are now failing on 
keystone-specific tests as of the last few minutes, and the only change in the 
pip freeze output for the tests is python-keystoneclient==0.4.2 instead of 0.4.1
  bknudson fungi: UnknownMethodCallError: Method called is not a member of 
the object: management_url ?
  fungi horizon will presumably need patching to work around that
  bknudson Looks like the horizon test is trying to create a mock 
keystoneclient and creating the mock fails for some reason.

  
  2014-01-13 14:42:38.747 | 
==
  2014-01-13 14:42:38.747 | FAIL: test_get_default_role 
(openstack_dashboard.test.api_tests.keystone_tests.RoleAPITests)
  2014-01-13 14:42:38.748 | 
--
  2014-01-13 14:42:38.748 | Traceback (most recent call last):
  2014-01-13 14:42:38.748 |   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/api_tests/keystone_tests.py,
 line 77, in test_get_default_role
  2014-01-13 14:42:38.748 | keystoneclient = self.stub_keystoneclient()
  2014-01-13 14:42:38.748 |   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py,
 line 306, in stub_keystoneclient
  2014-01-13 14:42:38.748 | self.keystoneclient = 
self.mox.CreateMock(keystone_client.Client)
  2014-01-13 14:42:38.748 |   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 258, in CreateMock
  2014-01-13 14:42:38.748 | new_mock = MockObject(class_to_mock, 
attrs=attrs)
  2014-01-13 14:42:38.748 |   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 556, in __init__
  2014-01-13 14:42:38.749 | attr = getattr(class_to_mock, method)
  2014-01-13 14:42:38.749 |   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 608, in __getattr__
  2014-01-13 14:42:38.749 | raise UnknownMethodCallError(name)
  2014-01-13 14:42:38.749 | UnknownMethodCallError: Method called is not a 
member of the object: management_url
  2014-01-13 14:42:38.749 |   raise UnknownMethodCallError('management_url')
  2014-01-13 14:42:38.749 | 
  2014-01-13 14:42:38.749 | 
  2014-01-13 14:42:38.749 | 
==
  2014-01-13 14:42:38.749 | FAIL: Tests api.keystone.remove_tenant_user
  2014-01-13 14:42:38.749 | 
--
  2014-01-13 14:42:38.750 | Traceback (most recent call last):
  2014-01-13 14:42:38.750 |   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/api_tests/keystone_tests.py,
 line 61, in test_remove_tenant_user
  2014-01-13 14:42:38.750 | keystoneclient = self.stub_keystoneclient()
  2014-01-13 14:42:38.750 |   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py,
 line 306, in stub_keystoneclient
  2014-01-13 14:42:38.750 | self.keystoneclient = 
self.mox.CreateMock(keystone_client.Client)
  2014-01-13 14:42:38.750 |   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 258, in CreateMock
  2014-01-13 14:42:38.750 | new_mock = MockObject(class_to_mock, 
attrs=attrs)
  2014-01-13 14:42:38.750 |   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 556, in __init__
  2014-01-13 14:42:38.750 | attr = getattr(class_to_mock, method)
  2014-01-13 14:42:38.750 |   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 608, in __getattr__
  2014-01-13 14:42:38.750 | raise UnknownMethodCallError(name)
  2014-01-13 14:42:38.751 | UnknownMethodCallError: Method called is not a 
member of the object: management_url
  2014-01-13 14:42:38.751 |   raise UnknownMethodCallError('management_url')

  
  Examples:
  https://jenkins04.openstack.org/job/gate-horizon-python27/18/console
  

[Yahoo-eng-team] [Bug 1268762] Re: Remove and recreate interface in ovs if already exists

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268762

Title:
  Remove and recreate interface in ovs if already exists

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released

Bug description:
  If the dhcp-agent machine restarts, openvswitch logs the following
  warning message for all tap interfaces that have not been recreated yet:

  bridge|WARN|could not open network device tap2cf7dbad-9d (No such
  device)

  Once the dhcp-agent starts it recreates the interfaces and re-adds them to the
  ovs-bridge. Unfortunately, ovs does not reinitialize the interface, as it is
  already in ovsdb, and does not assign it an ofport number.

  In order to correct this we should first remove interfaces that already exist
  and then re-add them (see the sketch after the transcript below).

  
  root@arosen-desktop:~# ovs-vsctl  -- --may-exist add-port br-int fake1

  # ofport still -1
  root@arosen-desktop:~# ovs-vsctl  list inter | grep -A 2 fake1
  name: fake1
  ofport  : -1
  ofport_request  : []
  root@arosen-desktop:~# ip link add fake1 type veth peer name fake11
  root@arosen-desktop:~# ifconfig fake1
  fake1 Link encap:Ethernet  HWaddr 56:c3:a1:2b:1f:f4  
BROADCAST MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000 
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  root@arosen-desktop:~# ovs-vsctl  list inter | grep -A 2 fake1
  name: fake1
  ofport  : -1
  ofport_request  : []
  root@arosen-desktop:~# ovs-vsctl  -- --may-exist add-port br-int fake1
  root@arosen-desktop:~# ovs-vsctl  list inter | grep -A 2 fake1
  name: fake1
  ofport  : -1
  ofport_request  : []
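
  A sketch of the intended remove-then-re-add behaviour expressed with
  ovs-vsctl through subprocess (the agent's real code goes through neutron's
  ovs_lib; names here are illustrative):

    import subprocess

    def replug_port(bridge, port):
        # Drop any stale record first so ovs re-initialises the interface
        # and assigns a real ofport, then add the port back in one command.
        subprocess.check_call(["ovs-vsctl",
                               "--", "--if-exists", "del-port", bridge, port,
                               "--", "add-port", bridge, port])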

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1268762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271426] Re: protected property change not rejected if a subsequent rule match accepts them

2014-02-13 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1271426

Title:
  protected property change not rejected if a subsequent rule match
  accepts them

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance havana series:
  Fix Released
Status in OpenStack Security Notes:
  New

Bug description:
  See initial report here: http://lists.openstack.org/pipermail
  /openstack-dev/2014-January/024861.html

  What is happening is that if there is a specific rule that would
  reject an action and a less specific rule that comes after that would
  accept the action, then the action is being accepted. It should be
  rejected.

  This is because we iterate through the property protection rules
  rather than just finding the first match. This bug does not occur when
  policies are used to determine property protections, only when roles
  are used directly.
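
  A minimal sketch of first-match semantics as described above (the rule
  object and attribute names are assumptions, not glance's actual classes):

    def property_allowed(rules, prop_name, action, roles):
        for rule in rules:                     # rules kept in file order
            if rule.matches(prop_name):
                # Only the first matching rule decides; later, less
                # specific rules must not override a rejection.
                return bool(set(roles) & set(getattr(rule, action)))
        return False                           # no rule matched: deny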

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1271426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274487] Re: neutron-metadata-agent incorrectly passes keystone token to neutronclient

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274487

Title:
  neutron-metadata-agent incorrectly passes keystone token to
  neutronclient

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released

Bug description:
  When instantiating a neutron client, the agent passes the keystone token
  to the object's __init__ as the auth_token= keyword argument, while
  neutronclient expects token=. This results in extensive interaction
  with keystone on cloud-init service startup, because each request from
  an instance to the metadata agent results in a new token request.
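
  A hedged sketch of the corrected construction (only the keyword name matters
  here; the wrapper and its other arguments are illustrative, not the agent's
  exact code):

    def get_neutron_client(keystone_token, endpoint_url):
        from neutronclient.v2_0 import client
        # Pass the cached keystone token as 'token=' so neutronclient reuses
        # it instead of re-authenticating on every metadata request.
        return client.Client(token=keystone_token,
                             endpoint_url=endpoint_url,
                             auth_strategy=None)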

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261738] Re: Openstack Glance: user_total_quota calculated incorrectly

2014-02-13 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1261738

Title:
  Openstack Glance: user_total_quota calculated incorrectly

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  Description of problem: Bug in quota calculation: if an image upload
  fails due to the quota limit, the failed image's size is still added to
  the total storage sum figure! Thus future images may fail to upload even
  if it looks as though the quota hasn't been reached yet.

  
  Version-Release number of selected component (if applicable):
  RHEL 6.5 

  python-glanceclient-0.12.0-1.el6ost.noarch
  openstack-glance-2013.2-5.el6ost.noarch
  python-glance-2013.2-5.el6ost.noarch

  How reproducible:


  Steps to Reproduce:
  1. vim /etc/glance/glance-api.conf  user_storage_quota = 250mb (in bytes)
  2. service glance-api restart
  3. Upload test small image - would be ok
  4. Upload a large image, say 4 GB - it should fail with Error unable to create 
  new image 

  5. Try to upload another small file say 49MB.

  
  Actual results:

  If the large image file or the sum of failed uploaded images is more than
  the quota, an image of any size will fail to upload.

  
  Expected results:

  I should be able to upload as long as the sum of all my images is less
  than the configured quota.

  
  Additional info:

  Mysql show databases;
  connect glance;
  SELECT * FROM images;

  Noticed that, of all the images I tried, the initial successfully uploaded
  image has status="active", images that I deleted have status="deleted", and
  images that failed to upload due to quota have status="killed".

  I then calculated the sum of all the "killed" images.
  Set a new quota of the above calculated value + 100MB, and restarted the
  glance-api service.
  Only then was I able to upload another image of 49MB.

  When I set a lower quota value (below the calculated sum of all the
  killed images) I wasn't able to upload any image.

  Images with status "killed", i.e. images whose upload failed for any reason,
  should not be added to the total storage sum calculation or quota.
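
  A hedged sketch of the expected accounting (not glance's code): only images
  that actually consume store space should count toward the user's quota.

    def user_storage_usage(images):
        # 'images' stands for the user's image rows; killed/deleted uploads
        # never landed (or no longer live) in the store, so they must not
        # count against the quota.
        return sum(img["size"] or 0
                   for img in images
                   if img["status"] not in ("killed", "deleted",
                                            "pending_delete"))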

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1261738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267438] Re: create volume option is shown, even without cinder enabled

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1267438

Title:
  create volume option is shown, even without cinder enabled

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  on http://localhost:8000/project/images_and_snapshots/

  there is the create volume option enabled, even if cinder is
  disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1267438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265563] Re: keypairs can not have an '@' sign in the name

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1265563

Title:
  keypairs can not have an '@' sign in the name

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  when importing a keypair and naming the keypair something like
  f...@host.bar.com:

  you get the error Unable to import keypair.
  while the message from nova is clearer:

  DEBUG:urllib3.connectionpool:POST 
/v2/6ebbe9474cf84bfbb42b5962b6b7e79f/os-keypairs HTTP/1.1 400 108
  RESP: [400] CaseInsensitiveDict({'date': 'Thu, 02 Jan 2014 16:31:28 GMT', 
'content-length': '108', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-86527df6-1dd6-4232-964d-0401332baa78'})
  RESP BODY: {badRequest: {message: Keypair data is invalid: Keypair name 
contains unsafe characters, code: 400}}

  
  message: Keypair data is invalid: Keypair name contains unsafe characters, 
code: 400}}

  we should return this message or try to prevent this issue at all.
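
  A hedged example of surfacing the problem up front on the horizon side (the
  allowed character set below mirrors the usual "safe" keypair name rule but
  is an assumption here, not nova's exact validation):

    import re

    def validate_keypair_name(name):
        if not re.match(r'^[\w\- ]+$', name):
            raise ValueError(
                "Keypair name contains unsafe characters: %r" % name)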

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1265563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262223] Re: Wrap call to extension_supported on Launch Instance with try/except

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1262223

Title:
  Wrap call to extension_supported on Launch Instance with try/except

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L142

  if api.nova.extension_supported("BlockDeviceMappingV2Boot",
                                  request):
      source_type_choices.append(("volume_image_id",
          _("Boot from image (creates a new volume).")))
      source_type_choices.append(("volume_snapshot_id",

  
  The extension_supported call can fail; we need to wrap it in a try/except.
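
  A hedged sketch of the guard (not necessarily the exact horizon patch):
  treat a failed extension lookup as "extension not supported".

    def bdm_v2_boot_supported(request):
        from openstack_dashboard import api
        try:
            return api.nova.extension_supported("BlockDeviceMappingV2Boot",
                                                request)
        except Exception:
            # Nova may be unreachable or the extension listing may fail;
            # fall back to not offering the volume-backed boot sources.
            return False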

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1262223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265353] Re: check_nvp_config.py erroneous config complaint

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265353

Title:
  check_nvp_config.py erroneous config complaint

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released

Bug description:
  The utility may return the following error:

  Gateway(L3GatewayServiceConfig) uuid: 40226ac1-86c6-471b-ac8e-3041d73f5c48
Error: specified default L3GatewayServiceConfig gateway 
(40226ac1-86c6-471b-ac8e-3041d73f5c48) is missing from NVP Gateway Services!

  When in fact the L3Gateway Service was setup correctly and LRs could
  be created without problems.

  This error currently affects only Havana, as a fix for Icehouse was
  committed in Change-Id: I3c5b5dcd316df3867f434bbc35483a2636b715d8

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261334] Re: nvp: update network gateway name on backend as well

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261334

Title:
  nvp: update network gateway name on backend as well

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  when a network gateway name is updated, the plugin currently updates only
  the neutron database; it might be useful to propagate the update to the
  backend as well.

  This breaks a use case when network gateways created in neutron need
  then to be processed by other tools finding them in nvp by name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260528] Re: Metering dashboard. Marker could not be found (havana)

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260528

Title:
  Metering dashboard. Marker could not be found (havana)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Hello,

  I couldn't reopen this bug,
  https://bugs.launchpad.net/horizon/+bug/1247752 , so I decided to
  create a new one.

  I use the latest havana release (stable/havana branch) code, but still
  ran into the Marker could not be found error in the horizon logs.

  [Thu Dec 12 22:49:15 2013] [error] Request returned failure status: 400
  [Thu Dec 12 22:49:18 2013] [error] REQ: curl -i -X GET 
http://192.168.0.2:35357/v2.0/tenants?marker=tenant_markerlimit=21 -H 
User-Agent: python-keystoneclient -H Forwarded: 
for=10.20.0.1;by=python-keystonece
  [Thu Dec 12 22:49:18 2013] [error] RESP: [400] {'date': 'Thu, 12 Dec 2013 
22:49:18 GMT', 'content-type': 'application/json', 'content-length': '88', 
'vary': 'X-Auth-Token'}
  [Thu Dec 12 22:49:18 2013] [error] RESP BODY: {error: {message: Marker 
could not be found, code: 400, title: Bad Request}}

  tenant_marker value comes from
  
https://github.com/openstack/horizon/blob/stable/havana/openstack_dashboard/dashboards/admin/metering/views.py#L149

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260423] Re: Email shouldn't be a mandatory attribute

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260423

Title:
  Email shouldn't be a mandatory attribute

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  When using an LDAP backend, it's possible that a user won't have the
  email attribute defined; however, it should still be possible to edit
  the other fields.

  Steps to reproduce (in an environment with keystone using a LDAP backend):
  1. Log in as admin
  2. Go to the Users dashboard
  3. Select a user that doesn't have an email defined

  Expected result:
  4. Edit user modal opens

  Actual result:
  4. Error 500

  Traceback:
  File /usr/lib/python2.7/site-packages/django/views/generic/edit.py in get
154. form = self.get_form(form_class)
  File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/forms/views.py in 
get_form
82. return form_class(self.request, **self.get_form_kwargs())
  File /usr/lib/python2.7/site-packages/django/views/generic/edit.py in 
get_form_kwargs
41. kwargs = {'initial': self.get_initial()}
  File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/users/views.py
 in get_initial
103. 'email': user.email}
  File /opt/stack/python-keystoneclient/keystoneclient/base.py in __getattr__
425. raise AttributeError(k)

  Exception Type: AttributeError at 
/admin/users/e005aa43475b403c8babdff86ea27c37/update/
  Exception Value: email
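
  A hedged sketch of the defensive lookup (the attribute may simply be absent
  when keystone is backed by LDAP; this is illustrative, not necessarily the
  exact horizon fix):

    def user_to_initial(user):
        # 'user' stands for the keystoneclient user object from the
        # traceback above; missing attributes become empty form fields.
        return {'id': user.id,
                'name': user.name,
                'email': getattr(user, 'email', '')}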

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254555] Re: tenant does not see network that is routable from tenant-visible network until neutron-server is restarted

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254555

Title:
  tenant does not see network that is routable from tenant-visible
  network until neutron-server is restarted

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  In TripleO we have a setup script[1] that does this as an admin:

  neutron net-create default-net --shared
  neutron subnet-create --ip_version 4 --allocation-pool 
start=10.0.0.2,end=10.255.255.254 --gateway 10.0.0.1 10.0.0.0/8 
$ID_OF_default_net
  neutron router-create default-router
  neutron router-interface-add default-router $ID_OF_10.0.0.0/8_subnet
  neutron net-create ext-net --router:external=True
  neutron subnet-create ext-net $FLOATING_CIDR --disable-dhcp --allocation-pool 
start=$FLOATING_START,end=$FLOATING_END
  neutron router-gateway-set default-router ext-net

  I would then expect that all users will be able to see ext-net using
  'neutron net-list' and that they will be able to create floating IPs
  on ext-net.

  As of this commit:

  commit c655156b98a0a25568a3745e114a0bae41bc49d1
  Merge: 75ac6c1 c66212c
  Author: Jenkins jenk...@review.openstack.org
  Date:   Sun Nov 24 10:02:04 2013 +

  Merge MidoNet: Added support for the admin_state_up flag

  I see that the ext-net network is not available after I do all of the
  above router/subnet creation. It does become available to tenants as
  soon as I restart neutron-server.

  [1] https://git.openstack.org/cgit/openstack/tripleo-
  incubator/tree/scripts/setup-neutron

  I can reproduce this at will using the TripleO devtest process on real
  hardware. I have not yet reproduced on VMs using the 'devtest'
  workflow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252806] Re: unable to add allow all ingress traffic security group rule

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252806

Title:
  unable to add allow all ingress traffic security group rule

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The following rule is unable to be installed:

  $ neutron security-group-rule-create --direction ingress default
  409-{u'NeutronError': {u'message': u'Security group rule already exists. 
Group id is 29dc1837-75d3-457a-8a90-14f4b6ea6db9.', u'type': 
u'SecurityGroupRuleExists', u'detail': u''}}

  
  The reason for this is that when the db query is done it passes this in as a 
filter: 

  {'tenant_id': [u'577a2f0c78fb4e36b76902977a5c1708'], 'direction':
  [u'ingress'], 'ethertype': ['IPv4'], 'security_group_id':
  [u'0fb10163-81b2-4538-bd11-dbbd3878db51']}

  
  and the remote_group_id is wildcarded, thus it matches this rule: 

  [ {'direction': u'ingress',
'ethertype': u'IPv4',
'id': u'8d5c3429-f4ef-4258-8140-5ff3247f9dd6',
'port_range_max': None,
'port_range_min': None,
'protocol': None,
'remote_group_id': None,
'remote_ip_prefix': None,
'security_group_id': u'0fb10163-81b2-4538-bd11-dbbd3878db51',
'tenant_id': u'577a2f0c78fb4e36b76902977a5c1708'}]
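
  Illustration only (not the actual patch): the duplicate check should include
  the wildcard-able fields explicitly, with None meaning "IS NULL" rather than
  "match anything":

    def duplicate_rule_filter(rule):
        # Build the lookup filter from every field of the candidate rule,
        # including remote_group_id / remote_ip_prefix, so an allow-all
        # ingress rule is not confused with the default remote-group rule.
        return {field: [rule[field]]
                for field in ('tenant_id', 'security_group_id', 'direction',
                              'ethertype', 'protocol', 'port_range_min',
                              'port_range_max', 'remote_ip_prefix',
                              'remote_group_id')}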

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1252806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2014-02-13 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Ceilometer havana series:
  Fix Released
Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in heat havana series:
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when a RPC request is
  made to a given topic, and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion.

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258585] Re: fwaas_driver.ini missing from setup.cfg

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258585

Title:
  fwaas_driver.ini missing from setup.cfg

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  fwaas_driver.ini is missing from setup.cfg
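
  An illustrative (not verbatim) snippet of the kind of entry the setup.cfg
  [files] section would need so the file actually gets packaged:

    [files]
    data_files =
        etc/neutron =
            etc/api-paste.ini
            etc/dhcp_agent.ini
            etc/fwaas_driver.ini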

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2014-02-13 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Ceilometer havana series:
  Fix Released
Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in heat havana series:
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252437] Re: uncaught portnotfound exception on get_dhcp_port

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252437

Title:
  uncaught portnotfound exception on get_dhcp_port

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  While working on fix for bug #1251874 I have noticed this stacktrace
  in the log:

  2013-11-18 17:20:46.237 1021 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 438, in 
_process_data
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 44, in dispatch
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/dhcp_rpc_base.py, line 139, in get_dhcp_port
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
dict(port=port))
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 588, in update_port
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
original_port = super(Ml2Plugin, self).get_port(context, id)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 1454, in get_port
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp port 
= self._get_port(context, id)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 266, in _get_port
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
raise q_exc.PortNotFound(port_id=id)
  2013-11-18 17:20:46.237 1021 TRACE neutron.openstack.common.rpc.amqp 
PortNotFound: Port d68c27dd-210b-4b10-9c41-40ba01aa0fd3 could not be found

  This is because the try-except looks for exc.NoResultFound and this is
  obviously a mistake.
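
  A hedged sketch of the intended handling (the helper shape and names are
  illustrative; PortNotFound is the exception shown in the trace above):

    from neutron.common import exceptions as q_exc

    def safe_update_port(plugin, context, port_id, port_body, log):
        # A port that disappeared while get_dhcp_port was updating it is
        # not an error for the DHCP RPC path.
        try:
            return plugin.update_port(context, port_id, dict(port=port_body))
        except q_exc.PortNotFound:
            log.debug("Port %s disappeared during get_dhcp_port", port_id)
            return None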

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1252437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252005] Re: when creating instance, access security and networking tabs missing asterisk

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1252005

Title:
  when creating instance, access security and networking tabs missing
  asterisk

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  when creating an instance, only the Details tab has an asterisk; although
  security group and network are mandatory, those tabs should also include
  an asterisk

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1252005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251874] Re: reduce severity of network notfound trace when looked up by dhcp agent

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251874

Title:
  reduce severity of network notfound trace when looked up by dhcp agent

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Neutron Server log has a gazillion of these traces:

  2013-11-15 00:40:31.639 8016 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 438, in 
_process_data
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 44, in dispatch
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/dhcp_rpc_base.py, line 150, in get_dhcp_port
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
network = plugin.get_network(context, network_id)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 352, in get_network
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
result = super(Ml2Plugin, self).get_network(context, id, None)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 1013, in 
get_network
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
network = self._get_network(context, id)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 252, in 
_get_network
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
raise q_exc.NetworkNotFound(net_id=id)
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp 
NetworkNotFound: Network 6f199bbe-75ad-429a-ac7e-9c49bc389be5 could not be found
  2013-11-15 00:40:31.639 8016 TRACE neutron.openstack.common.rpc.amqp

  These are about the dhcp agent wanting to sync its local representation of
  the state with the one on the server. But a network that cannot be found
  should be tolerated and no exception trace should be reported.
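
  A hedged sketch of the lower-severity handling being asked for (names are
  illustrative, not the actual patch):

    from neutron.common import exceptions as q_exc

    def get_network_or_none(plugin, context, network_id, log):
        # A network the DHCP agent still remembers but which was deleted on
        # the server side should produce a warning, not an ERROR traceback.
        try:
            return plugin.get_network(context, network_id)
        except q_exc.NetworkNotFound:
            log.warn("Network %s could not be found, it might have been "
                     "deleted concurrently", network_id)
            return None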

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240349] Re: publish_errors cfg option is broken

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240349

Title:
  publish_errors cfg option is broken

Status in Cinder:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Trove - Database as a Service:
  In Progress

Bug description:
  Our nova.conf contains a publish_errors option, which doesn't work
  because we don't have the necessary oslo modules:

  # publish error events (boolean value)
  publish_errors=true

  [root@ip9-12-17-141 ˜]# Traceback (most recent call last):
File /usr/bin/nova-api, line 10, in module
  sys.exit(main())
File /usr/lib/python2.6/site-packages/nova/cmd/api.py, line 41, in main
  logging.setup(nova)
File /usr/lib/python2.6/site-packages/nova/openstack/common/log.py, line 
372, in setup
  _setup_logging_from_conf()
File /usr/lib/python2.6/site-packages/nova/openstack/common/log.py, line 
472, in _setup_logging_from_conf
  logging.ERROR)
File 
/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py, line 
40, in import_object
  return import_class(import_str)(*args, **kwargs)
File 
/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py, line 
30, in import_class
  __import__(mod_str)
  ImportError: No module named log_handler

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1240349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244126] Re: neutron lb-pool-list running by admin returns also non-admin load balancer pools which appear later in horizon's admin project

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244126

Title:
  neutron lb-pool-list running by admin returns also non-admin load
  balancer pools which appear later in horizon's admin project

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Version
  ===
  Havana on rhel

  Description
  ===
  neutron lb-pool-list should return the list of load balancer pools in the
user's tenant; however, when run as admin it prints the pools of all tenants.
  The side effect is that Horizon's Project -> Load Balancers tab, when logged
in as the admin user, contains load balancers that have nothing to do with the
admin tenant.

  # keystone tenant-list 
  +--+--+-+
  |id|   name   | enabled |
  +--+--+-+
  | abd7d9c464814aff98652c3e235a799b |  admin   |   True  |
  | e86dccb5c751465a8d338f6e3aeb8228 | services |   True  |
  | 43029e52371247ca9dc771780a8f41b5 | vlan_211 |   True  |
  | 0b3607a0807a4d928b0eab794b291198 | vlan_212 |   True  |
  | 783c402f63c94545b270177661631eac | vlan_213 |   True  |
  | 8bfe5effe4e942c2a5d4f41e46f2e09d | vlan_214 |   True  |
  +--+--+-+

  
  # neutron lb-pool-list
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | id                                   | name          | lb_method   | protocol | admin_state_up | status |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | 2c16a5cf-6ee7-4948-85cd-0faa9fc5eef4 | pool_vlan_214 | ROUND_ROBIN | HTTP     | True           | ACTIVE |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+

  
  # neutron lb-pool-list --all-tenant
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | id                                   | name          | lb_method   | protocol | admin_state_up | status |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | 2c16a5cf-6ee7-4948-85cd-0faa9fc5eef4 | pool_vlan_214 | ROUND_ROBIN | HTTP     | True           | ACTIVE |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+

  
  # neutron lb-pool-list --tenant-id abd7d9c464814aff98652c3e235a799b
  (empty output)

  
  # neutron lb-pool-list --tenant-id 8bfe5effe4e942c2a5d4f41e46f2e09d
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | id                                   | name          | lb_method   | protocol | admin_state_up | status |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | 2c16a5cf-6ee7-4948-85cd-0faa9fc5eef4 | pool_vlan_214 | ROUND_ROBIN | HTTP     | True           | ACTIVE |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
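
  A hedged sketch of scoping the listing to a single tenant from code, using
  python-neutronclient (the credentials and auth URL below are placeholders;
  the tenant id is the admin tenant from the keystone output above):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='<password>',
                                    tenant_name='admin',
                                    auth_url='http://<keystone>:5000/v2.0')
    # Passing tenant_id mirrors "neutron lb-pool-list --tenant-id ..." and is
    # expected to return only the admin tenant's pools (empty in this case).
    pools = neutron.list_pools(tenant_id='abd7d9c464814aff98652c3e235a799b')['pools']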

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1244126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249188] Re: [neutron bandwidth metering] When I delete a label, router's tenant_id disappeared.

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1249188

Title:
  [neutron bandwidth metering] When I delete a label, router's tenant_id
  disappeared.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  [neutron bandwidth metering]

  I delete a label using the neutron client CLI (neutron meter-label-delete).
  Afterwards the router's tenant_id is missing.
  I don't know why; it looks like a bug.
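
  A hedged reproduction sketch with python-neutronclient, assuming an
  authenticated Client instance named neutron (resource names and values are
  illustrative of the metering extension):

    label = neutron.create_metering_label(
        {'metering_label': {'name': 'label1', 'description': 'test label'}})
    neutron.delete_metering_label(label['metering_label']['id'])
    # After the delete, show_router() on the tenant's router reports an empty
    # tenant_id, which is the behaviour described above.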

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1249188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246258] Re: UnboundLocalError: local variable 'network_name' referenced before assignment

2014-02-13 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246258

Title:
  UnboundLocalError: local variable 'network_name' referenced before
  assignment

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The exception occurs when trying to create/delete an instance that is
  using a network that is not owned by the admin tenant. This prevents
  the deletion of the instance.
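
  A minimal illustration of the failure pattern (not the actual Nova code):
  network_name is only assigned inside a branch, so referencing it afterwards
  raises UnboundLocalError when that branch never runs.

    def describe_vif(vif, networks):
        for net in networks:
            if net['id'] == vif['network_id']:
                network_name = net['label']
        # Raises UnboundLocalError if no network matched above.
        return 'attached to %s' % network_name

    describe_vif({'network_id': 'other'}, [{'id': 'net1', 'label': 'private'}])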

  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 90, in wrapped
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
payload)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 73, in wrapped
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 243, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 229, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 294, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 271, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 258, in 
decorated_function
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1616, in 
run_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
do_run_instance()
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, line 
246, in inner
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1615, in 
do_run_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
legacy_bdm_in_spec)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 965, in 
_run_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
notify(error, msg=unicode(e))  # notify that build failed
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 949, in 
_run_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
instance, image_meta, legacy_bdm_in_spec)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1078, in 
_build_instance
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
filter_properties, bdms, legacy_bdm_in_spec)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1122, in 
_reschedule_or_error
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 
self._log_original_error(exc_info, instance_uuid)
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1117, in 
_reschedule_or_error
  2013-10-29 00:51:36.173 18588 TRACE nova.openstack.common.rpc.amqp 

[Yahoo-eng-team] [Bug 1250581] Re: run_tests.sh fails with a fresh venv due to django 1.6 installed

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1250581

Title:
  run_tests.sh fails with a fresh venv due to django 1.6 installed

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Today, when I removed the existing venv and ran run_tests.sh, it failed.
  run_tests.sh creates a fresh venv.

  I found that Django 1.6 gets installed; it seems the django-nose
  dependencies pull in Django 1.6.
  This happens because pip install -r requirements.txt is run first and then
  pip install -r test-requirements.txt is run with the --upgrade option, which
  is what Horizon's old tools/install_venv_common does.

  In the recent tools/install_venv_common both requirements files are
  evaluated in the same pass, which looks like the right solution. Updating
  it to the new version (from oslo) addresses this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1250581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243888] Re: neutron-check-nvp-config failure

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1243888

Title:
  neutron-check-nvp-config failure

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  neutron-check-nvp-config fails with error:

  
  2013-10-23 12:06:40.937 2495 CRITICAL neutron [-] main() takes exactly 1 
argument (0 given)
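
  A minimal illustration of the error (not the actual NVP check utility): the
  entry point invokes main() with no arguments while the function is declared
  to take one.

    def main(argv):
        print('checking NVP config given in %s' % argv)

    if __name__ == '__main__':
        main()  # TypeError: main() takes exactly 1 argument (0 given)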

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1243888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1221419] Re: unable to ping floating ip from fixed_ip association

2014-02-13 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1221419

Title:
  unable to ping floating ip from fixed_ip association

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Currently, if one checks out a floating IP, the instance associated with
  that floating IP is unable to ping it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1221419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223875] Re: boot from image (creates as a new volume) does not work

2014-02-13 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1223875

Title:
  boot from image (creates as a new volume) does not work

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Trying to launch an instance from Horizon with boot from image
  (creates a new volume) selected fails each time with the following
  error in the log:

  HTTP exception thrown: Invalid imageRef provided.

  I have used this image to launch other regular instances.

  2013-09-11 16:20:40.347 19829 DEBUG nova.api.openstack.wsgi 
[req-ee257bd7-200b-41e5-b6ea-eef4d0a70b5b 0007c2f5563c4cc883c0c8d9e7f7d6f8 
532307bfce18431ba50d309622619b66] Action: 'create', body: {"server": {"name": 
"dafna", "imageRef": "", "flavorRef": 1, "max_count": 1, "min_count": 1, 
"security_groups": [{"name": "default"}]}} _process_stack 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:935
  2013-09-11 16:20:40.348 19829 DEBUG nova.api.openstack.wsgi 
[req-ee257bd7-200b-41e5-b6ea-eef4d0a70b5b 0007c2f5563c4cc883c0c8d9e7f7d6f8 
532307bfce18431ba50d309622619b66] Calling method <bound method 
Controller.create of <nova.api.openstack.compute.servers.Controller object at 
0x315c510>> _process_stack 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:936
  k.compute.servers.Controller object at 0x315c510 _process_stack 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:936
  2013-09-11 16:20:40.350 19829 INFO nova.api.openstack.wsgi 
[req-ee257bd7-200b-41e5-b6ea-eef4d0a70b5b 0007c2f5563c4cc883c0c8d9e7f7d6f8 
532307bfce18431ba50d309622619b66] HTTP exception thrown: Invalid imageRef 
provided.
  2013-09-11 16:20:40.351 19829 DEBUG nova.api.openstack.wsgi 
[req-ee257bd7-200b-41e5-b6ea-eef4d0a70b5b 0007c2f5563c4cc883c0c8d9e7f7d6f8 
532307bfce18431ba50d309622619b66] Returning 400 to user: Invalid imageRef 
provided. __call__ /usr/lib/
  python2.6/site-packages/nova/api/openstack/wsgi.py:1198
  2013-09-11 16:20:40.352 19829 INFO nova.osapi_compute.wsgi.server 
[req-ee257bd7-200b-41e5-b6ea-eef4d0a70b5b 0007c2f5563c4cc883c0c8d9e7f7d6f8 
532307bfce18431ba50d309622619b66] 10.35.64.69 POST 
/v2/532307bfce18431ba50d309622619b66/servers
   HTTP/1.1 status: 400 len: 266 time: 0.0560989
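
  A hedged sketch of the request Horizon is expected to build for boot from
  image (creates a new volume): no imageRef, with the image supplied through a
  block device mapping instead (python-novaclient call; the credentials, IDs
  and sizes below are placeholders):

    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('admin', '<password>', 'admin',
                              'http://<keystone>:5000/v2.0')
    server = nova.servers.create(
        name='dafna',
        image=None,              # no imageRef when booting from a new volume
        flavor='1',
        block_device_mapping_v2=[{
            'source_type': 'image',
            'uuid': '<glance-image-id>',
            'destination_type': 'volume',
            'volume_size': 10,
            'boot_index': 0,
            'delete_on_termination': True,
        }])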

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1223875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

