[Yahoo-eng-team] [Bug 1258044] Re: Don't need session.flush in context managed by session in sqlalchemy

2013-12-05 Thread ChangBo Guo
** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1258044

Title:
  Don't need session.flush in context managed by session in sqlalchemy

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Within the scope of a single method, keep all the reads and writes within
  the context managed by a single session. In this way, the session's __exit__
  handler will take care of calling flush() and commit() for you.
  If using this approach, you should not explicitly call flush() or commit().
  See
  http://stackoverflow.com/questions/4201455/sqlalchemy-whats-the-difference-between-flush-and-commit
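
  A minimal sketch of the pattern being described, using plain SQLAlchemy
  1.4+ (the model and helper names here are illustrative, not Keystone's or
  Ceilometer's actual code):

  from sqlalchemy import Column, Integer, String, create_engine
  from sqlalchemy.orm import declarative_base, sessionmaker

  Base = declarative_base()

  class User(Base):
      __tablename__ = 'user'
      id = Column(Integer, primary_key=True)
      name = Column(String(64))

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  Session = sessionmaker(bind=engine)

  def rename_user(user_id, new_name):
      # Keep all reads and writes inside a single session context; the
      # context manager's __exit__ flushes and commits (or rolls back) for
      # us, so no explicit session.flush() or session.commit() is needed.
      with Session.begin() as session:
          user = session.get(User, user_id)
          if user is not None:
              user.name = new_name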

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1258044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258055] [NEW] Inappropriate exception for flavorRef

2013-12-05 Thread zhangyanzi
Public bug reported:

There is an inappropriate exception for flavorRef in the nova boot API:

try:
    flavor_id = self._flavor_id_from_req_data(body)
except ValueError as error:
    msg = _("Invalid flavorRef provided.")
    raise exc.HTTPBadRequest(explanation=msg)

I think it would be better as:
try:
    flavor_id = self._flavor_id_from_req_data(body)
except ValueError as error:
    raise exc.HTTPBadRequest(explanation=error.format_message())

** Affects: nova
 Importance: Undecided
 Assignee: zhangyanzi (zhangyanzi)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => zhangyanzi (zhangyanzi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258055

Title:
  Inappropriate exception for flavorRef

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is an inappropriate exception for flavorRef in the nova boot API:

  try:
      flavor_id = self._flavor_id_from_req_data(body)
  except ValueError as error:
      msg = _("Invalid flavorRef provided.")
      raise exc.HTTPBadRequest(explanation=msg)

  I think it would be better as:
  try:
      flavor_id = self._flavor_id_from_req_data(body)
  except ValueError as error:
      raise exc.HTTPBadRequest(explanation=error.format_message())

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245519] Re: instance stuck in scheduling when image cannot be copied to swift backend

2013-12-05 Thread Dafna Ron
This bug has been solved by Red Hat and the behaviour no longer happens.

** Changed in: nova
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245519

Title:
  instance stuck in scheduling when image cannot be copied to swift
  backend

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I configured my setup to work with a swift backend.
  My data servers do not have space left.
  When I create an image with --location, the image is created locally and glance does not try to copy it to swift until we boot an instance.
  Since there is no space left on swift, when I try to boot, the instance is simply stuck in scheduling forever.

  [root@opens-vdsb ~(keystone_admin)]# nova list
  +--------------------------------------+------+--------+------------+-------------+----------+
  | ID                                   | Name | Status | Task State | Power State | Networks |
  +--------------------------------------+------+--------+------------+-------------+----------+
  | d6ae7ce8-0689-4580-b65d-29dae8497999 | bla  | BUILD  | scheduling | NOSTATE     |          |
  +--------------------------------------+------+--------+------------+-------------+----------+
  [root@opens-vdsb ~(keystone_admin)]#

  
  [root@opens-vdsb ~(keystone_admin)]# egrep d6ae7ce8-0689-4580-b65d-29dae8497999 /var/log/*/*
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:35:31 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 686 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:35:34 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 686 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:35:36 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 686 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:35:41 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 686 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:35:49 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 686 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:35:59 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 686 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:36:11 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 686 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:36:26 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 686 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:36:44 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 684 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  /var/log/httpd/access_log:10.35.200.226 - - [28/Oct/2013:14:37:04 +0200] "GET /dashboard/project/instances/?action=row_update&table=instances&obj_id=d6ae7ce8-0689-4580-b65d-29dae8497999 HTTP/1.1" 200 684 "http://opens-vdsb.qa.lab.tlv.redhat.com/dashboard/project/instances/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130806 Firefox/17.0"
  

[Yahoo-eng-team] [Bug 1132879] Re: server reboot hard and rebuild are flaky in tempest when ssh is enabled

2013-12-05 Thread Attila Fazekas
Both issues are still happening, even with nova-network, when you try to
access the machine via a floating IP after a rebuild or reboot.

** Changed in: nova
   Status: Invalid => New

** Changed in: tempest
   Status: In Progress => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1132879

Title:
  server reboot hard and rebuild are flaky in tempest when ssh is
  enabled

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  Working on enabling back ssh access to VMs in tempest tests:

  https://review.openstack.org/#/c/22415/
  https://blueprints.launchpad.net/tempest/+spec/ssh-auth-strategy

  On the gate devstack with nova networking the hard reboot and rebuild
  tests are sometimes passing and sometimes not.

  On the gate devstack with quantum networking the hard reboot and
  rebuild tests are systematically not passing, and blocking the overall
  blueprint implementation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1132879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258065] [NEW] The implementation of utils.str2dict fails to convert a dict with more than 2 elements

2013-12-05 Thread Jianing YANG
Public bug reported:

from neutron.common import utils
print utils.str2dict('inside_addr=10.0.1.2,inside_port=22,outside_addr=172.16.0.1,outside_port=,protocol=tcp')

returns
{'inside_addr': '10.0.1.2', 'inside_port': '22,outside_addr=172.16.0.1,outside_port=,protocol=tcp'}

expected value should be

{'outside_port': '', 'inside_addr': '10.0.1.2', 'protocol': 'tcp',
'inside_port': '22', 'outside_addr': '172.16.0.1'}

The reason is that in the third line of the implementation below,
string.split(',', 1) splits the string into at most two pieces, so only
the first key=value pair is parsed correctly and the rest of the string
ends up in the second value.

quote from neutron/common/utils.py:181:

def str2dict(string):
    res_dict = {}
    for keyvalue in string.split(',', 1):
        (key, value) = keyvalue.split('=', 1)
        res_dict[key] = value
    return res_dict

A quick fix might be to remove the ', 1' from string.split(). But it turns
out that str2dict/dict2str may also fail when input values contain
characters like '=' or ','. A better fix might be to use JSON
encoding/decoding to deal with this.
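
A minimal sketch of the quick fix described above (illustrative only, not
the patch that actually landed in Neutron):

def str2dict(string):
    res_dict = {}
    # Split on every ',' (no maxsplit), so all key=value pairs are kept;
    # values themselves still must not contain '=' or ','.
    for keyvalue in string.split(','):
        (key, value) = keyvalue.split('=', 1)
        res_dict[key] = value
    return res_dict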

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258065

Title:
  The implementation of utils.str2dict fails to convert a dict with more
  than 2 elements

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  from neutron.common import utils
  print utils.str2dict('inside_addr=10.0.1.2,inside_port=22,outside_addr=172.16.0.1,outside_port=,protocol=tcp')

  returns
  {'inside_addr': '10.0.1.2', 'inside_port': '22,outside_addr=172.16.0.1,outside_port=,protocol=tcp'}

  expected value should be

  {'outside_port': '', 'inside_addr': '10.0.1.2', 'protocol': 'tcp',
  'inside_port': '22', 'outside_addr': '172.16.0.1'}

  The reason is that in the third line of the implementation below,
  string.split(',', 1) splits the string into at most two pieces, so only
  the first key=value pair is parsed correctly and the rest of the string
  ends up in the second value.

  quote from neutron/common/utils.py:181:

  def str2dict(string):
      res_dict = {}
      for keyvalue in string.split(',', 1):
          (key, value) = keyvalue.split('=', 1)
          res_dict[key] = value
      return res_dict

  A quick fix might be to remove the ', 1' from string.split(). But it turns
  out that str2dict/dict2str may also fail when input values contain
  characters like '=' or ','. A better fix might be to use JSON
  encoding/decoding to deal with this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258068] [NEW] glance-manage doesnt take version into consideration

2013-12-05 Thread Amala Basha
Public bug reported:

A glance-manage db upgrade/db downgrade will do either an 'upgrade to
latest' or 'downgrade to None' even though a version argument is
provided. The version being passed isn't being picked up and None is
passed across.
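
A minimal, self-contained sketch of the behaviour being described, using
argparse stand-ins rather than glance-manage's real command wiring (all
names here are illustrative): the bug is equivalent to ignoring
args.version and always passing None.

import argparse

def do_db_upgrade(args):
    # The fix is to forward args.version instead of dropping it; None
    # still means "migrate to the latest version".
    target = args.version if args.version is not None else 'latest'
    print('upgrading database to %s' % target)

parser = argparse.ArgumentParser(prog='glance-manage-sketch')
subparsers = parser.add_subparsers()
upgrade = subparsers.add_parser('db_upgrade')
upgrade.add_argument('version', nargs='?', default=None)
upgrade.set_defaults(func=do_db_upgrade)

args = parser.parse_args(['db_upgrade', '34'])
args.func(args)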

** Affects: glance
 Importance: Undecided
 Assignee: Amala Basha (amalabasha)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Amala Basha (amalabasha)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1258068

Title:
  glance-manage doesnt take version into consideration

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  A glance-manage db upgrade/db downgrade will do either an 'upgrade to
  latest' or 'downgrade to None' even though a version argument is
  provided. The version being passed isn't being picked up and None is
  passed across.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1258068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258070] [NEW] Need general error handling in API paste pipeline

2013-12-05 Thread Akihiro Motoki
Public bug reported:

The Neutron API paste pipeline needs general error handling, though it is
just a potential issue.
If an error occurs in the pipeline before neutronapiapp_v2_0, a raw
traceback will be returned in the API response.

For example, I added code which raises an exception [1] to the
NeutronKeystoneContext middleware:

$ neutron net-list
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py, line 389, in 
handle_one_response
result = self.application(self.environ, start_response)
  File /usr/lib/python2.7/dist-packages/paste/urlmap.py, line 203, in __call__
return app(environ, start_response)
  File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 581, in __call__
return self.app(env, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
return self.func(req, *args, **kwargs)
  File /opt/stack/neutron/neutron/auth.py, line 56, in __call__
print 
NameError: global name '' is not defined


[1] 
diff --git a/neutron/auth.py b/neutron/auth.py
index 220bf3e..e9f700a 100644
--- a/neutron/auth.py
+++ b/neutron/auth.py
@@ -53,6 +53,8 @@ class NeutronKeystoneContext(wsgi.Middleware):
 # Inject the context...
 req.environ['neutron.context'] = ctx

+print 
+
 return self.application
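
A rough sketch of the kind of catch-all handling being suggested, written
as a generic WSGI middleware (the class name and error message are
illustrative, not Neutron's actual code):

import traceback

import webob.dec
import webob.exc

class FaultWrapper(object):
    """Turn unhandled exceptions from the rest of the pipeline into a
    clean HTTP 500 instead of leaking a raw traceback to the API client."""

    def __init__(self, application):
        self.application = application

    @webob.dec.wsgify
    def __call__(self, req):
        try:
            return req.get_response(self.application)
        except Exception:
            # Keep the traceback server-side only.
            traceback.print_exc()
            return webob.exc.HTTPInternalServerError(
                explanation='An unexpected internal error occurred.')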

** Affects: neutron
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258070

Title:
  Need general error handling in API paste pipeline

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Neutron API paste pipeline needs general error handling, though it is
  just a potential issue.
  If an error occurs in the pipeline before neutronapiapp_v2_0, a raw
  traceback will be returned in the API response.

  For example, I added code which raises an exception [1] to the
  NeutronKeystoneContext middleware:

  $ neutron net-list
  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py, line 389, 
in handle_one_response
  result = self.application(self.environ, start_response)
File /usr/lib/python2.7/dist-packages/paste/urlmap.py, line 203, in 
__call__
  return app(environ, start_response)
File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 581, in __call__
  return self.app(env, start_response)
File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  resp = self.call_func(req, *args, **self.kwargs)
File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  return self.func(req, *args, **kwargs)
File /opt/stack/neutron/neutron/auth.py, line 56, in __call__
  print 
  NameError: global name '' is not defined

  
  [1] 
  diff --git a/neutron/auth.py b/neutron/auth.py
  index 220bf3e..e9f700a 100644
  --- a/neutron/auth.py
  +++ b/neutron/auth.py
  @@ -53,6 +53,8 @@ class NeutronKeystoneContext(wsgi.Middleware):
   # Inject the context...
   req.environ['neutron.context'] = ctx

  +print 
  +
   return self.application

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258083] [NEW] VM instance can not be deleted by nova

2013-12-05 Thread chenhaiq
Public bug reported:

When I run the command:
nova delete 98106004-feca-419f-8781-ec79333032d6

There is an exception in the n-cpu process:

2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 1919, in _delete_instance
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp 
self._shutdown_instance(context, db_inst, bdms)
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 1829, in _shutdown_instance
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp network_info = 
self._get_instance_nw_info(context, instance)
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 871, in _get_instance_nw_info
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp instance)
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 449, in 
get_instance_nw_info
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp result = 
self._get_instance_nw_info(context, instance, networks)
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/api.py, line 49, in wrapper
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp res = f(self, 
context, *args, **kwargs)
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 456, in 
_get_instance_nw_info
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp nw_info = 
self._build_network_info_model(context, instance, networks)
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 1027, in 
_build_network_info_model
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp subnets)
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 957, in 
_nw_info_build_network
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp 
label=network_name,
2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp UnboundLocalError: 
local variable 'network_name' referenced before assignment
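
A minimal illustration of the failure class in the last frame above
(illustrative code, not Nova's actual _nw_info_build_network): the variable
is only bound inside the loop, so an unmatched port hits UnboundLocalError.

def build_label(port, networks):
    for net in networks:
        if net['id'] == port['network_id']:
            network_name = net['name']
    return network_name  # UnboundLocalError when no network matched

def build_label_fixed(port, networks):
    # Fix sketch: give the variable a default before the loop (or raise a
    # clear error when no matching network is found).
    network_name = None
    for net in networks:
        if net['id'] == port['network_id']:
            network_name = net['name']
            break
    return network_name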

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258083

Title:
  VM instance can not be deleted by nova

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I run the command:
  nova delete 98106004-feca-419f-8781-ec79333032d6

  There is an exception in the n-cpu process:

  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 1919, in _delete_instance
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp 
self._shutdown_instance(context, db_inst, bdms)
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 1829, in _shutdown_instance
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp network_info 
= self._get_instance_nw_info(context, instance)
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 871, in _get_instance_nw_info
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp instance)
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 449, in 
get_instance_nw_info
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp result = 
self._get_instance_nw_info(context, instance, networks)
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/api.py, line 49, in wrapper
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp res = 
f(self, context, *args, **kwargs)
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 456, in 
_get_instance_nw_info
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp nw_info = 
self._build_network_info_model(context, instance, networks)
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 1027, in 
_build_network_info_model
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp subnets)
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 957, in 
_nw_info_build_network
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp 
label=network_name,
  2013-12-05 01:59:04.309 TRACE nova.openstack.common.rpc.amqp 
UnboundLocalError: local variable 'network_name' referenced before assignment

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1257896] Re: nova scheduler (devstack) doesnt seem to honor 'scheduler_default_filters = AllHostsFilter' in nova.conf

2013-12-05 Thread John Smith
Invalid. QEMU and KVM only support 2GB of memory on a 32-bit host. This
is the reason the instance doesn't launch.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257896

Title:
  nova scheduler (devstack) doesnt seem to honor
  'scheduler_default_filters = AllHostsFilter' in nova.conf

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When setting 'scheduler_default_filters = AllHostsFilter' in nova.conf
  on devstack, the default filters (like 'RamFilter') are still used.


  Steps to reproduce:

  
  ./stack.sh
  export OS_USERNAME=admin
  export OS_PASSWORD=password
  export OS_TENANT_NAME=demo
  export OS_AUTH_URL=http://192.168.126.142:5000/v2.0/

  source /usr/local/src/devstack/openrc admin

  vi /etc/nova/nova.conf
  Add 'scheduler_default_filters = AllHostsFilter'.
  use 'screen -r' to restart nova scheduler.

  wget https://launchpadlibrarian.net/83303699/cirros-0.3.0-i386-disk.img
  glance image-create --name=cirros-0.3.0-i386 --is-public=true --container-format=bare --disk-format=qcow2 < cirros-0.3.0-i386-disk.img

  launch a flavor that would overallocate/overcommit RAM: nova boot --flavor m1.foo --image cirros-0.3.0-i386 myvm
  nova show myvm

  The instance fails to launch: 'No valid host was found.'. Launching a
  smaller flavor, like 'm1.nano', does succeed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258103] [NEW] Unused imports in libvirt fake utilities

2013-12-05 Thread Gary Kotton
Public bug reported:

Removed unused imports and assignments

** Affects: nova
 Importance: Low
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Milestone: None => icehouse-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258103

Title:
  Unused imports in libvirt fake utilities

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Removed unused imports and assignments

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257644] Re: gate-tempest-dsvm-postgres-full fails - unable to schedule instance

2013-12-05 Thread Joe Gordon
As Matt Riedemann pointed out in a duplicate bug, it looks like a
heartbeat issue with nova-compute:

http://logs.openstack.org/04/51404/7/gate/gate-tempest-dsvm-postgres-
full/b1825bf/logs/screen-n-sch.txt.gz?level=DEBUG#_2013-12-04_08_01_19_167

2013-12-04 08:01:19.167 DEBUG nova.servicegroup.drivers.db [req-
9b187d9f-258f-42ef-8235-973de78a46d9 InstanceActionsV3TestXML-
tempest-1651913648-user InstanceActionsV3TestXML-
tempest-1651913648-tenant] DB_Driver.is_up last_heartbeat = 2013-12-04
08:01:19.008808 elapsed = 0.158322 is_up
/opt/stack/new/nova/nova/servicegroup/drivers/db.py:71


A bug like this was seen before in https://review.openstack.org/#/c/46336/
and https://bugs.launchpad.net/nova/+bug/1221987
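
A sketch of the liveness check referred to by that log line, assuming the
DB servicegroup driver's behaviour of comparing the heartbeat age against a
service_down_time threshold (the code and values here are illustrative):

import datetime

SERVICE_DOWN_TIME = 60  # seconds

def is_up(last_heartbeat, now=None):
    # A compute service only counts as "up" (and passes ComputeFilter)
    # if its last reported heartbeat is recent enough.
    now = now or datetime.datetime.utcnow()
    elapsed = (now - last_heartbeat).total_seconds()
    return abs(elapsed) <= SERVICE_DOWN_TIME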

** Changed in: nova
   Status: New => Triaged

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257644

Title:
  gate-tempest-dsvm-postgres-full fails - unable to schedule instance

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  The trace for the failure is:

  2013-12-04 08:01:19.167 INFO nova.filters 
[req-9b187d9f-258f-42ef-8235-973de78a46d9 
InstanceActionsV3TestXML-tempest-1651913648-user 
InstanceActionsV3TestXML-tempest-1651913648-tenant] Filter ComputeFilter 
returned 0 hosts
  2013-12-04 08:01:19.168 WARNING nova.scheduler.driver 
[req-9b187d9f-258f-42ef-8235-973de78a46d9 
InstanceActionsV3TestXML-tempest-1651913648-user 
InstanceActionsV3TestXML-tempest-1651913648-tenant] [instance: 
78a1af24-f108-404c-a9e8-a021362c206b] Setting instance to ERROR state.
  2013-12-04 08:01:27.352 INFO nova.scheduler.filter_scheduler 
[req-24c3c05c-c371-41b6-a314-3fa91340ed18 
ImagesV3TestJSON-tempest-309232411-user 
ImagesV3TestJSON-tempest-309232411-tenant] Attempting to build 1 instance(s) 
uuids: [u'7e39573c-4ffd-4405-a109-cdca0685fe9c']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178375] Re: Orphan exchanges in Qpid and lack of option for making queues [un]durable

2013-12-05 Thread Thierry Carrez
** Changed in: oslo.messaging
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1178375

Title:
  Orphan exchanges in Qpid and lack of option for making queues
  [un]durable

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  Start qpid, nova-api, nova-scheduler, nova-conductor, and nova-compute.

  There are orphan direct exchanges in qpid. Checked using qpid-config
  exchanges. The exchanges continue to grow, presumably, whenever nova-
  compute does a periodic update over AMQP.

  Moreover, the direct and topic exchanges are by default durable which
  is a problem. We want the ability to turn on/off the durable option
  just like Rabbit options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1178375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258113] [NEW] Cannot determine boot-finished state reliably

2013-12-05 Thread Robie Basak
Public bug reported:

From outside an instance, ssh becomes available before cloud-init is
done. If I ssh in, I'd like to be able to automatically wait until
cloud-init has finished, so as not to collide with anything it is doing.

I'd like to be able to do this independently of any userdata. I think
this makes for a cleaner separation between components, and makes
everything easier to develop and test. For example, I'd like uvtool to
allow the user to completely override everything that is passed through
to cloud-init, but for uvtool to still be able to detect when the
instance is ready from the outside. This is why I don't like the idea
of having to make my own arrangement for this via userdata.

/var/lib/cloud/instance/boot-finished works, but only for the first
boot, since it is persistent. Another point of unreliability is if
somebody boots a machine in order to modify it, shuts it down, and then
clones it. In this case, each clone has a boot-finished file before it
has started booting.

We discussed doing something in /run instead.

I can't think of any other mechanism that can be used from the
outside, independent of userdata, that is not horribly obtuse or racy.

Could whatever we end up doing please be added in time for Ubuntu
Trusty, so that tools that boot Trusty images will be able to use it?
Otherwise we'll need to wait another two years I think. Thanks!
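
For reference, a small sketch of the only approach available today (and,
as noted above, only reliable for the first boot): polling the persistent
marker from outside over ssh. The host name is illustrative.

import subprocess
import time

def wait_for_cloud_init(host, timeout=600):
    # Poll until /var/lib/cloud/instance/boot-finished exists on the guest.
    deadline = time.time() + timeout
    while time.time() < deadline:
        rc = subprocess.call(
            ['ssh', host, 'test', '-e',
             '/var/lib/cloud/instance/boot-finished'])
        if rc == 0:
            return True
        time.sleep(2)
    return False

wait_for_cloud_init('ubuntu@my-instance')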

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1258113

Title:
  Cannot determine boot-finished state reliably

Status in Init scripts for use on cloud images:
  New

Bug description:
  From outside an instance, ssh becomes available before cloud-init is
  done. If I ssh in, I'd like to be able to automatically wait until
  cloud-init has finished, so as not to collide with anything it is
  doing.

  I'd like to be able to do this independently of any userdata. I think
  this makes for a cleaner separation between components, and makes
  everything easier to develop and test. For example, I'd like uvtool
  to allow the user to completely override everything that is passed
  through to cloud-init, but for uvtool to still be able to detect when
  the instance is ready from the outside. This is why I don't like the
  idea of having to make my own arrangement for this via userdata.

  /var/lib/cloud/instance/boot-finished works, but only for the first
  boot, since it is persistent. Another point of unreliability is if
  somebody boots a machine in order to modify it, shuts it down, and
  then clones it. In this case, each clone has a boot-finished file
  before it has started booting.

  We discussed doing something in /run instead.

  I can't think of any other mechanism that can be used from the
  outside, independent of userdata, that is not horribly obtuse or
  racy.

  Could whatever we end up doing please be added in time for Ubuntu
  Trusty, so that tools that boot Trusty images will be able to use it?
  Otherwise we'll need to wait another two years I think. Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1258113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177200] Re: subnets in load balancer models are confusing

2013-12-05 Thread Eugene Nikanorov
Marking as invalid. Routed mode for load balancers is planned, so the
ability to specify the VIP on a different subnet than the pool's is an
important part of the API.

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1177200

Title:
  subnets in load balancer models are confusing

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  ~$ quantum subnet-list
  +--------------------------------------+------+-------------+--------------------------------------------+
  | id                                   | name | cidr        | allocation_pools                           |
  +--------------------------------------+------+-------------+--------------------------------------------+
  | 131740ac-8ef0-4e8e-b571-2246e05d470a |      | 10.0.2.0/24 | {"start": "10.0.2.2", "end": "10.0.2.254"} |
  | 193dc2ec-9893-423a-a59f-77eca753f197 |      | 30.0.0.0/16 | {"start": "30.0.1.0", "end": "30.0.1.254"} |
  +--------------------------------------+------+-------------+--------------------------------------------+

  quantum lb-pool-create --lb-method ROUND_ROBIN --name mypool
  --protocol HTTP --subnet-id 131740ac-8ef0-4e8e-b571-2246e05d470a

  [question] what is meant by the subnet-id in the above command?
  quantum lb-member-create --address 30.0.1.2 --protocol-port 80 mypool
  [question] can I add an IP which does not belong to the pool's subnet?
  quantum lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id 193dc2ec-9893-423a-a59f-77eca753f197 mypool
  [question] If the VIP subnet is not the same as the pool subnet, I am assuming an L3 agent is needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1177200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236585] [NEW] tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since fails sporadically

2013-12-05 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

See: http://logs.openstack.org/69/49169/2/check/check-tempest-devstack-
vm-full/473539f/console.html

2013-10-07 18:57:09.859 | 
==
2013-10-07 18:57:09.859 | FAIL: 
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since[gate]
2013-10-07 18:57:09.860 | 
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since[gate]
2013-10-07 18:57:09.860 | 
--
2013-10-07 18:57:09.860 | _StringException: Empty attachments:
2013-10-07 18:57:09.861 |   stderr
2013-10-07 18:57:09.861 |   stdout
2013-10-07 18:57:09.861 | 
2013-10-07 18:57:09.862 | pythonlogging:'': {{{
2013-10-07 18:57:09.862 | 2013-10-07 18:38:33,059 Request: GET 
http://127.0.0.1:8774/v2/3b02395d4eec44958ffcc10ac2673fd2/servers?changes-since=2013-10-07T18%3A38%3A31.034080
2013-10-07 18:57:09.862 | 2013-10-07 18:38:33,490 Response Status: 200
2013-10-07 18:57:09.863 | 2013-10-07 18:38:33,490 Nova request id: 
req-906f2cf2-6508-4cd2-bf62-3595b18d223a
2013-10-07 18:57:09.863 | }}}
2013-10-07 18:57:09.863 | 
2013-10-07 18:57:09.863 | Traceback (most recent call last):
2013-10-07 18:57:09.864 |   File 
tempest/api/compute/servers/test_list_servers_negative.py, line 191, in 
test_list_servers_by_changes_since
2013-10-07 18:57:09.864 | self.assertEqual(num_expected, 
len(body['servers']))
2013-10-07 18:57:09.864 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 322, in 
assertEqual
2013-10-07 18:57:09.865 | self.assertThat(observed, matcher, message)
2013-10-07 18:57:09.865 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 417, in 
assertThat
2013-10-07 18:57:09.865 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
2013-10-07 18:57:09.865 | MismatchError: 3 != 2

** Affects: nova
 Importance: Undecided
 Assignee: Ken'ichi Ohmichi (oomichi)
 Status: In Progress

-- 
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since
 fails sporadically
https://bugs.launchpad.net/bugs/1236585
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236585] Re: tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since fails sporadically

2013-12-05 Thread Ken'ichi Ohmichi
The patch is in the review process:

https://review.openstack.org/#/c/60157/

** Project changed: tempest => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1236585

Title:
  
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since
  fails sporadically

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  See: http://logs.openstack.org/69/49169/2/check/check-tempest-
  devstack-vm-full/473539f/console.html

  2013-10-07 18:57:09.859 | 
==
  2013-10-07 18:57:09.859 | FAIL: 
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since[gate]
  2013-10-07 18:57:09.860 | 
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since[gate]
  2013-10-07 18:57:09.860 | 
--
  2013-10-07 18:57:09.860 | _StringException: Empty attachments:
  2013-10-07 18:57:09.861 |   stderr
  2013-10-07 18:57:09.861 |   stdout
  2013-10-07 18:57:09.861 | 
  2013-10-07 18:57:09.862 | pythonlogging:'': {{{
  2013-10-07 18:57:09.862 | 2013-10-07 18:38:33,059 Request: GET 
http://127.0.0.1:8774/v2/3b02395d4eec44958ffcc10ac2673fd2/servers?changes-since=2013-10-07T18%3A38%3A31.034080
  2013-10-07 18:57:09.862 | 2013-10-07 18:38:33,490 Response Status: 200
  2013-10-07 18:57:09.863 | 2013-10-07 18:38:33,490 Nova request id: 
req-906f2cf2-6508-4cd2-bf62-3595b18d223a
  2013-10-07 18:57:09.863 | }}}
  2013-10-07 18:57:09.863 | 
  2013-10-07 18:57:09.863 | Traceback (most recent call last):
  2013-10-07 18:57:09.864 |   File 
tempest/api/compute/servers/test_list_servers_negative.py, line 191, in 
test_list_servers_by_changes_since
  2013-10-07 18:57:09.864 | self.assertEqual(num_expected, 
len(body['servers']))
  2013-10-07 18:57:09.864 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 322, in 
assertEqual
  2013-10-07 18:57:09.865 | self.assertThat(observed, matcher, message)
  2013-10-07 18:57:09.865 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 417, in 
assertThat
  2013-10-07 18:57:09.865 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-07 18:57:09.865 | MismatchError: 3 != 2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1236585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258147] [NEW] nicira: tempest fail to create routers in parallel

2013-12-05 Thread Salvatore Orlando
Public bug reported:

When parallel tempest tests are enabled, the nicira plugin is not able to
create routers and set the external gateway for them.
Parallel operations indeed cause the usual eventlet/MySQL deadlock in
_update_router_gw_info.

The root cause of the deadlock is that the NVP API client uses eventlet
to dispatch requests.

While a solution might be to rework the API client, an easier,
backportable solution would be to mark the critical methods (i.e. the
ones which result in NVP operations that might conflict with DB
operations) as synchronized.
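
A sketch of the "mark the critical methods as synchronized" idea using an
oslo-style lock decorator (import path and names are illustrative; in the
Havana-era tree the helper lives under neutron.openstack.common.lockutils):

from oslo_concurrency import lockutils

class NvpRouterMixin(object):  # illustrative stand-in for the plugin class

    @lockutils.synchronized('nvp-router-gw')
    def _update_router_gw_info(self, context, router_id, info):
        # Serializing this method avoids the eventlet/MySQL deadlock
        # described above, at the cost of running the conflicting NVP and
        # DB operations one at a time.
        pass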

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: havana-backport-potential nicira

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258147

Title:
  nicira: tempest fail to create routers in parallel

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When parallel tempest tests are enabled, the nicira plugin is not able to
  create routers and set the external gateway for them.
  Parallel operations indeed cause the usual eventlet/MySQL deadlock in
  _update_router_gw_info.

  The root cause of the deadlock is that the NVP API client uses
  eventlet to dispatch requests.

  While a solution might be to rework the API client, an easier,
  backportable solution would be to mark the critical methods (i.e.
  the ones which result in NVP operations that might conflict with
  DB operations) as synchronized.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258147/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239898] Re: test_rescue_preserve_disk_on_failure unittest broken

2013-12-05 Thread Thierry Carrez
** Changed in: oslo
   Status: Fix Committed => Fix Released

** Changed in: oslo
 Milestone: None => icehouse-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239898

Title:
  test_rescue_preserve_disk_on_failure unittest broken

Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  One of the following commits broke the unit tests. I am not quite sure how
  it's related, but they have been broken ever since these changes were merged.

  
https://github.com/openstack/nova/commit/f99641f09965b8f5036a976e9a5f5f28a542d800
  
https://github.com/openstack/nova/commit/eaf5636544a9b2ae1e87f74d0cdb989f8a41b008

  running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  
  ==
  FAIL: 
nova.tests.virt.xenapi.test_xenapi.XenAPIVMTestCase.test_instance_snapshot_fails_with_no_primary_vdi
  tags: worker-0
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  Loading network driver 'nova.network.linux_net'
  Loading network driver 'nova.network.linux_net'
  Loading network driver 'nova.network.linux_net'
  glance.upload_vhd attempt 1/1
  }}}

  Traceback (most recent call last):
File nova/tests/virt/xenapi/test_xenapi.py, line 522, in 
test_instance_snapshot_fails_with_no_primary_vdi
  lambda *args, **kwargs: None)
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 394, in assertRaises
  self.assertThat(our_callable, matcher)
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 417, in assertThat
  raise MismatchError(matchee, matcher, mismatch, verbose)
  MismatchError: bound method XenAPIDriver.snapshot of 
nova.virt.xenapi.driver.XenAPIDriver object at 0x19e61490 returned None
  ==
  FAIL: 
nova.tests.virt.xenapi.test_xenapi.XenAPIVMTestCase.test_rescue_preserve_disk_on_failure
  tags: worker-0
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  Loading network driver 'nova.network.linux_net'
  Loading network driver 'nova.network.linux_net'
  Loading network driver 'nova.network.linux_net'
  glance.download_vhd attempt 1/1
  glance.download_vhd attempt 1/1
  Failed to spawn, rolling back
  Traceback (most recent call last):
File nova/virt/xenapi/vmops.py, line 498, in _spawn
  boot_instance_step(undo_mgr, vm_ref)
File nova/virt/xenapi/vmops.py, line 152, in inner
  rv = f(*args, **kwargs)
File nova/virt/xenapi/vmops.py, line 461, in boot_instance_step
  self._start(instance, vm_ref)
File nova/tests/virt/xenapi/test_xenapi.py, line 1276, in fake_start
  raise test.TestingException('Start Error')
  TestingException: Start Error
  VM already halted, skipping shutdown...
  ['HANDLE_INVALID', 'VDI', '8c985bc1-998b-4dd2-b797-cbe86bf2abb2']
  Traceback (most recent call last):
File nova/virt/xenapi/vm_utils.py, line 439, in destroy_vdi
  session.call_xenapi('VDI.destroy', vdi_ref)
File nova/virt/xenapi/driver.py, line 779, in call_xenapi
  return session.xenapi_request(method, args)
File nova/virt/xenapi/fake.py, line 756, in xenapi_request
  return meth(*full_params)
File nova/virt/xenapi/fake.py, line 801, in lambda
  return lambda *params: self._destroy(name, params)
File nova/virt/xenapi/fake.py, line 917, in _destroy
  raise Failure(['HANDLE_INVALID', table, ref])
  Failure: ['HANDLE_INVALID', 'VDI', '8c985bc1-998b-4dd2-b797-cbe86bf2abb2']
  Unable to destroy VDI 8c985bc1-998b-4dd2-b797-cbe86bf2abb2
  }}}

  Traceback (most recent call last):
File nova/tests/virt/xenapi/test_xenapi.py, line 1285, in 
test_rescue_preserve_disk_on_failure
  self.assertEqual(vdi_ref, vdi_ref2)
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 322, in assertEqual
  self.assertThat(observed, matcher, message)
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 417, in assertThat
  raise MismatchError(matchee, matcher, mismatch, verbose)
  MismatchError: !=:
  reference = '62ba4799-a2bf-4a47-aef6-9341b70a2ad1'
  actual= '8a4dcae5-7a8f-49db-92af-775f80f935fc'
  ==
  FAIL: process-returncode
  tags: worker-0
  

[Yahoo-eng-team] [Bug 1248936] Re: Information about domain are not setted in the Nova context

2013-12-05 Thread Thierry Carrez
** Changed in: oslo
   Status: Fix Committed => Fix Released

** Changed in: oslo
 Milestone: None => icehouse-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1248936

Title:
  Information about domain are not setted in the Nova context

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  When a request is made to Keystone, information such as user_id and
  project_id, among others, is set in the NovaKeystoneContext class
  (https://github.com/openstack/nova/blob/master/nova/api/auth.py#L79).
  When using the Keystone V3 API, domain information should be passed,
  but Nova is not ready to receive this information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1248936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244092] Re: db connection retrying doesn't work against db2

2013-12-05 Thread Thierry Carrez
** Changed in: oslo
   Status: Fix Committed => Fix Released

** Changed in: oslo
 Milestone: None => icehouse-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244092

Title:
  db connection retrying doesn't work against db2

Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  When I start OpenStack following the steps below, the OpenStack services
  can't be started without a DB2 connection:
  1. start the OpenStack services;
  2. start the DB2 service.

  I checked the code in session.py under
  nova/openstack/common/db/sqlalchemy. The root cause is that the DB2
  connection error code -30081 isn't in conn_err_codes in the
  _is_db_connection_error function, so the connection retry code is
  skipped for DB2. In order to enable connection retrying against DB2,
  we need to add DB2 support to the _is_db_connection_error function.
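
  A sketch of the check being described, based on the oslo-incubator
  session.py logic (the exact code and MySQL error-code list here are
  illustrative):

  def _is_db_connection_error(args):
      """Return True if the exception text looks like a lost DB connection."""
      # '2002', '2003' and '2006' are MySQL connection-error codes; adding
      # DB2's '-30081' is what would enable the retry loop for DB2 as well.
      conn_err_codes = ('2002', '2003', '2006', '-30081')
      for err_code in conn_err_codes:
          if args.find(err_code) != -1:
              return True
      return False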

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1244092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1248216] Re: Need remove deprecated module commands

2013-12-05 Thread Thierry Carrez
** Changed in: oslo
   Status: Fix Committed => Fix Released

** Changed in: oslo
 Milestone: None => icehouse-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1248216

Title:
  Need remove deprecated module commands

Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Ironic (Bare Metal Provisioning):
  Fix Committed
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  The commands module has been deprecated since Python 2.6 and has been
  removed in Python 3. Use the subprocess module instead.

  See http://docs.python.org/2/library/commands#module-commands
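
  A small sketch of the replacement: commands.getstatusoutput() maps onto
  the subprocess module roughly as follows (simplified; byte/str handling
  and exit-status encoding are glossed over):

  import subprocess

  def getstatusoutput(cmd):
      try:
          output = subprocess.check_output(
              cmd, shell=True, stderr=subprocess.STDOUT)
          return 0, output.rstrip()
      except subprocess.CalledProcessError as exc:
          return exc.returncode, exc.output.rstrip()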

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1248216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236648] Re: __metaclass__ is incompatible for python 3

2013-12-05 Thread Thierry Carrez
** Changed in: oslo
   Status: Fix Committed => Fix Released

** Changed in: oslo
 Milestone: None => icehouse-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1236648

Title:
  __metaclass__ is incompatible for python 3

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Nova:
  Fix Committed

Bug description:
  Some classes use __metaclass__ for abc.ABCMeta.
  six should be used in general for Python 3 compatibility.

  For example:

  import abc

  import six


  @six.add_metaclass(abc.ABCMeta)
  class FooDriver(object):

      @abc.abstractmethod
      def bar(self):
          pass

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1236648/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258150] [NEW] nicira: tempest fails to delete routers in parallel

2013-12-05 Thread Salvatore Orlando
Public bug reported:

When parallel tempest tests are enabled, the nicira plugin shows errors when
deleting routers.
Parallel operations indeed cause the usual eventlet/MySQL deadlock in
delete_router, as the NVP operation is nested within the DB transaction.

The root cause of the deadlock is that the NVP API client uses eventlet
to dispatch requests.

While a solution might be to rework the API client, an easier,
backportable solution would be to move the NVP operation out of the
transaction and ensure consistency in case of failure.

Note: in the same delete_router routine the metadata access network
handling should also be moved out of the transaction.
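
A minimal, runnable sketch of the fix shape described above, with stand-in
stubs (everything here is illustrative, not the plugin's actual code): keep
only database work inside the transaction, call the NVP backend after the
transaction has committed, and compensate if the backend call fails.

import contextlib

@contextlib.contextmanager
def db_transaction():
    print('BEGIN')
    yield
    print('COMMIT')

def delete_router_db(router_id):
    print('deleting rows for router %s' % router_id)

def nvp_delete_router(router_id):
    print('deleting router %s on the NVP backend' % router_id)

def delete_router(router_id):
    with db_transaction():
        # Database-only work stays inside the transaction.
        delete_router_db(router_id)
    try:
        # The NVP call happens after commit, outside the transaction.
        nvp_delete_router(router_id)
    except Exception:
        print('marking router %s as delete-failed' % router_id)

delete_router('r-1')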

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: havana-backport-potential nicira

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258150

Title:
  nicira: tempest fails to delete routers in parallel

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When parallel tempest tests are enabled, the nicira plugin shows errors when
  deleting routers.
  Parallel operations indeed cause the usual eventlet/MySQL deadlock in
  delete_router, as the NVP operation is nested within the DB transaction.

  The root cause of the deadlock is that the NVP API client uses eventlet
  to dispatch requests.

  While a solution might be to rework the API client, an easier,
  backportable solution would be to move the NVP operation out of the
  transaction and ensure consistency in case of failure.

  Note: in the same delete_router routine the metadata access network
  handling should also be moved out of the transaction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1020749] Re: Use Openstack-Common notifier

2013-12-05 Thread Fei Long Wang
Now that oslo.messaging has been ported into Glance, this bug won't be
fixed. See https://review.openstack.org/#/c/57678/

** Changed in: glance
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1020749

Title:
  Use Openstack-Common notifier

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  This bug was created from the obsolete blueprint use-common-notifier:

  remove the glance notifier and change to use openstack-common
  notifier.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1020749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258160] [NEW] Got error if create image with v2 and update it with v1

2013-12-05 Thread Fei Long Wang
Public bug reported:

While testing something, I created an empty image with v2 by:
glance --os-image-api-version 2 image-create

And then I updated it with v1 like:
glance image-update 82a0c149-7e3e-471b-921f-0df1ad780722 --name flwang_c_1 --disk-format raw --container-format bare

Then there is an error as below, though the update succeeds:
int() argument must be a string or a number, not 'NoneType'
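
The error message is the one int() raises for None; a tiny illustration of
that failure class and the obvious guard (illustrative only, not Glance's
actual code path):

size = None  # e.g. a field the v2 create left unset
try:
    size = int(size)   # TypeError: int() argument must be a string or a
                       # number, not 'NoneType'
except TypeError:
    size = 0           # fix sketch: guard or default the conversion
print(size)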

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1258160

Title:
  Got error if create image with v2 and update it with v1

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  While testing something, I created an empty image with v2 by:
  glance --os-image-api-version 2 image-create

  And then I updated it with v1 like:
  glance image-update 82a0c149-7e3e-471b-921f-0df1ad780722 --name flwang_c_1 --disk-format raw --container-format bare

  Then there is an error as below, though the update succeeds:
  int() argument must be a string or a number, not 'NoneType'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1258160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258169] [NEW] XenAPI: wait_for_vhd_coalesce times out when garbage collector still running

2013-12-05 Thread Bob Ball
Public bug reported:

In some runs, particularly stress tests, the GC may run for more than 25
seconds.  The current timeout is 5*5 seconds, so this needs to be
increased.

2013-12-05 10:57:51.024 DEBUG nova.compute.manager 
[req-24f4e285-4e0d-48f6-8c19-793fd2735488 
ListImageFiltersTestJSON-tempest-987533830-user 
ListImageFiltersTestJSON-tempest-987533830-tenant] [instance: 
37b86681-9225-4786-82f2-f4110697fda6] Cleaning up image 
6ccd3d8d-1918-4bbc-b448-af7ebb50ce33 from (pid=25289) decorated_function 
/opt/stack/nova/nova/compute/manager.py:321
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/compute/manager.py, line 317, in decorated_function
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] *args, **kwargs)
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/compute/manager.py, line 2427, in snapshot_instance
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] task_states.IMAGE_SNAPSHOT)
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/compute/manager.py, line 2458, in _snapshot_instance
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] update_task_state)
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/virt/xenapi/driver.py, line 250, in snapshot
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] self._vmops.snapshot(context, 
instance, image_id, update_task_state)
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 729, in snapshot
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] 
post_snapshot_callback=update_task_state) as vdi_uuids:
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/usr/lib/python2.7/contextlib.py, line 17, in __enter__
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] return self.gen.next()
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 782, in 
_snapshot_attached_here_impl
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] original_parent_uuid)
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 2044, in 
_wait_for_vhd_coalesce
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] raise exception.NovaException(msg)
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] NovaException: VHD coalesce attempts 
exceeded (5), giving up...
2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]
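
A minimal sketch of making the wait configurable instead of hard-coding
5 attempts of 5 seconds ('is_coalesced' is a placeholder for the actual
VHD parent check):

import time

def wait_for_coalesce(is_coalesced, max_attempts=25, poll_interval=5.0):
    for attempt in range(max_attempts):
        if is_coalesced():
            return attempt
        time.sleep(poll_interval)
    raise RuntimeError('VHD coalesce attempts exceeded (%d), giving up...'
                       % max_attempts)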

** Affects: nova
 Importance: Medium
 Status: New

** Changed in: nova
   Importance: Undecided = Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258169

Title:
  XenAPI: wait_for_vhd_coalesce times out when garbage collector still
  running

Status in OpenStack Compute (Nova):
  New

Bug description:
  In some runs, particularly stress tests, the GC may run for more than 25
  seconds.  The current timeout is 5*5 seconds, so this needs to be
  increased.

  2013-12-05 10:57:51.024 DEBUG nova.compute.manager 
[req-24f4e285-4e0d-48f6-8c19-793fd2735488 
ListImageFiltersTestJSON-tempest-987533830-user 
ListImageFiltersTestJSON-tempest-987533830-tenant] [instance: 
37b86681-9225-4786-82f2-f4110697fda6] Cleaning up image 
6ccd3d8d-1918-4bbc-b448-af7ebb50ce33 from (pid=25289) decorated_function 
/opt/stack/nova/nova/compute/manager.py:321
  2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/compute/manager.py, line 317, in decorated_function
  2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] *args, **kwargs)
  2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6]   File 
/opt/stack/nova/nova/compute/manager.py, line 2427, in snapshot_instance
  2013-12-05 10:57:51.024 TRACE nova.compute.manager [instance: 
37b86681-9225-4786-82f2-f4110697fda6] 

[Yahoo-eng-team] [Bug 1258175] [NEW] injected_file_path_bytes description is ambiguous

2013-12-05 Thread Łukasz Jernaś
Public bug reported:

I feel that the injected_file_path_bytes quota description, both in
Horizon and in the docs, is still a bit ambiguous even after
https://bugs.launchpad.net/horizon/+bug/1254049, as it may be hard to
tell at first glance whether that is a limit on the file path length or
on the actual file.

Docs:
http://docs.openstack.org/havana/config-reference/content/cli_set_compute_quotas.html

Horizon:
openstack_dashboard/dashboards/admin/defaults/tables.py

Maybe, as Akihiro Motoki proposed in
https://review.openstack.org/#/c/58103/1/openstack_dashboard/dashboards/admin/defaults/tables.py
it could be changed to "Length of Injected File Path" in
icehouse/2013.2.2

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: openstack-manuals
 Importance: Undecided
 Status: New

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1258175

Title:
  injected_file_path_bytes description is ambiguous

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Manuals:
  New

Bug description:
  I feel that the injected_file_path_bytes quota description, both in
  Horizon and in the docs, is still a bit ambiguous even after
  https://bugs.launchpad.net/horizon/+bug/1254049, as it may be hard to
  tell at first glance whether that is a limit on the file path length or
  on the actual file.

  Docs:
  
http://docs.openstack.org/havana/config-reference/content/cli_set_compute_quotas.html

  Horizon:  
  openstack_dashboard/dashboards/admin/defaults/tables.py

  Maybe, as Akihiro Motoki proposed in
  https://review.openstack.org/#/c/58103/1/openstack_dashboard/dashboards/admin/defaults/tables.py
  it could be changed to "Length of Injected File Path" in
  icehouse/2013.2.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1258175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258166] Re: N310 check recommends function that doesn't exist

2013-12-05 Thread Victor Sergeyev
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258166

Title:
  N310 check recommends function that doesn't exist

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Check N310 can return "N310  timeutils.now() must be used instead of
  datetime.now()", but timeutils.now() does not exist. Only utcnow()
  does.
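
  For reference, a minimal example of what the check message should point
at instead (using the incubated common timeutils module that the projects
sync):

from nova.openstack.common import timeutils

timestamp = timeutils.utcnow()  # exists; timeutils.now() does not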

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180044] Re: nova failures when vCenter has multiple datacenters

2013-12-05 Thread Gary Kotton
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

** Changed in: openstack-vmwareapi-team
   Importance: Undecided = Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180044

Title:
  nova failures when vCenter has multiple datacenters

Status in OpenStack Compute (Nova):
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  The method _get_datacenter_ref_and_name in vmops.py does not determine
  the datacenter properly.

  def _get_datacenter_ref_and_name(self):
      """Get the datacenter name and the reference."""
      dc_obj = self._session._call_method(vim_util, "get_objects",
                                          "Datacenter", ["name"])
      vm_util._cancel_retrieve_if_necessary(self._session, dc_obj)
      return dc_obj.objects[0].obj, dc_obj.objects[0].propSet[0].val

  This will not be correct on systems with more than one datacenter.
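
  A sketch of the selection logic the method needs instead of blindly taking
objects[0]; the filtering criterion here (matching a configured datacenter
name) is an assumption, and the real fix must pick the datacenter that
actually contains the target cluster:

def pick_datacenter(datacenters, wanted_name):
    # 'datacenters' is a list of (moref, name) pairs as returned by the
    # property collector; return the one that matches the configured name.
    for moref, name in datacenters:
        if name == wanted_name:
            return moref, name
    raise LookupError('Datacenter %s not found' % wanted_name)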

  Stack trace from logs:

  ERROR nova.compute.manager [req-9395fe41-cf04-4434-bd77-663e93de1d4a
  foo bar] [instance: 484a42a2-642e-4594-93fe-4f72ddad361f] Error:
  ['Traceback (most recent call last):\n', '  File
  /opt/stack/nova/nova/compute/manager.py, line 942, in
  _build_instance\nset_access_ip=set_access_ip)\n', '  File
  /opt/stack/nova/nova/compute/manager.py, line 1204, in _spawn\n
  LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
  '  File /usr/lib/python2.7/contextlib.py, line 24, in __exit__\n
  self.gen.next()\n', '  File /opt/stack/nova/nova/compute/manager.py,
  line 1200, in _spawn\nblock_device_info)\n', '  File
  /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 176, in spawn\n
  block_device_info)\n', '  File
  /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 208, in spawn\n
  _execute_create_vm()\n', '  File
  /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 204, in
  _execute_create_vm\n
  self._session._wait_for_task(instance[\'uuid\'], vm_create_task)\n', '
  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 559, in
  _wait_for_task\nret_val = done.wait()\n', '  File
  /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116,
  in wait\nreturn hubs.get_hub().switch()\n', '  File
  /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line
  187, in switch\nreturn self.greenlet.switch()\n', 'NovaException:
  A specified parameter was not correct. \nspec.location.folder\n']

  vCenter error is:
  A specified parameter was not correct. spec.location.folder

  Work around:
  use only one datacenter, use only one cluster, turn on DRS

  Additional failures:
  2013-07-18 10:59:12.788 DEBUG nova.virt.vmwareapi.vmware_images 
[req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 
0e1771f8db984a3599596fae62609d9a] [instance: 
5b3961b6-38d9-409c-881e-fe50f67b1539] Got image size of 687865856 for the image 
cde14862-60b8-4360-a145-06585b06577c get_vmdk_size_and_properties 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmware_images.py:156
  2013-07-18 10:59:12.963 WARNING nova.virt.vmwareapi.network_util 
[req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 
0e1771f8db984a3599596fae62609d9a] [(ManagedObjectReference){
     value = network-1501
     _type = Network
   }, (ManagedObjectReference){
     value = network-1458
     _type = Network
   }, (ManagedObjectReference){
     value = network-2085
     _type = Network
   }, (ManagedObjectReference){
     value = network-1143
     _type = Network
   }]
  2013-07-18 10:59:13.326 DEBUG nova.virt.vmwareapi.vmops 
[req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 
0e1771f8db984a3599596fae62609d9a] [instance: 
5b3961b6-38d9-409c-881e-fe50f67b1539] Creating VM on the ESX host 
_execute_create_vm 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py:207
  2013-07-18 10:59:14.258 3145 DEBUG nova.openstack.common.rpc.amqp [-] Making 
synchronous call on conductor ... multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:583
  2013-07-18 10:59:14.259 3145 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID 
is 8ef36d061a9341a09d3a5451df798673 multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586
  2013-07-18 10:59:14.259 3145 DEBUG nova.openstack.common.rpc.amqp [-] 
UNIQUE_ID is 680b790574c64a9783fd2138c43f5f6d. _add_unique_id 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337
  2013-07-18 10:59:18.757 3145 WARNING nova.virt.vmwareapi.driver [-] Task 
[CreateVM_Task] (returnval){
     value = task-33558
     _type = Task
   } status: error The input arguments had entities that did not belong to the 
same datacenter.

  2013-07-18 10:59:18.758 ERROR nova.compute.manager 
[req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 
0e1771f8db984a3599596fae62609d9a] [instance: 

[Yahoo-eng-team] [Bug 1241713] Re: lbaas pool tcp

2013-12-05 Thread Akihiro Motoki
** Also affects: horizon/havana
   Importance: Undecided
   Status: New

** Changed in: horizon/havana
   Importance: Undecided = Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241713

Title:
  lbaas pool tcp

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  New

Bug description:
  When adding a lbaas pool, there is an option for http and https, but
  not tcp, though you can use tcp via the cli.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258192] [NEW] Angular not loaded on modals

2013-12-05 Thread Maxime Vidori
Public bug reported:

The Angular cycle is not started when an element is added to the DOM
outside of Angular. This happens especially when a modal is opened: because
of the asynchronous insertion, Angular does not know that the DOM has
changed.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  The angular cycle is not started when an element is added to the DOM
- outside of angular.
+ outside of angular. This happen especially when a modal is called, due
+ to the asynchronous insertion, angular does not know that the DOM as
+ changed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1258192

Title:
  Angular not loaded on modals

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Angular cycle is not started when an element is added to the DOM
  outside of Angular. This happens especially when a modal is opened:
  because of the asynchronous insertion, Angular does not know that the
  DOM has changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1258192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-05 Thread Flavio Percoco
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258253] [NEW] Compute rpc breaks live upgrade from havana to icehouse

2013-12-05 Thread Russell Bryant
Public bug reported:

We are attempting to support live upgrades from the Havana to Icehouse
release.  Some changes to the compute rpc API need to be backported to
Havana to make this work.

** Affects: nova
 Importance: Undecided
 Status: Invalid

** Affects: nova/havana
 Importance: High
 Assignee: Russell Bryant (russellb)
 Status: In Progress

** Changed in: nova
   Status: New = Invalid

** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Status: New = In Progress

** Changed in: nova/havana
 Assignee: (unassigned) = Russell Bryant (russellb)

** Changed in: nova/havana
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258253

Title:
  Compute rpc breaks live upgrade from havana to icehouse

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  We are attempting to support live upgrades from the Havana to Icehouse
  release.  Some changes to the compute rpc API need to be backported to
  Havana to make this work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258256] [NEW] Live upgrade from Havana broken by commit 62e9829

2013-12-05 Thread Dan Smith
Public bug reported:

Commit 62e9829 inadvertently broke live upgrades from Havana to master.
This was not really related to the patch itself, other than that it
bumped the Instance version which uncovered a bunch of issues in the
object infrastructure that weren't yet ready to handle this properly.

** Affects: nova
 Importance: Medium
 Assignee: Dan Smith (danms)
 Status: Confirmed


** Tags: unified-objects

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258256

Title:
  Live upgrade from Havana broken by commit 62e9829

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Commit 62e9829 inadvertently broke live upgrades from Havana to
  master. This was not really related to the patch itself, other than
  that it bumped the Instance version which uncovered a bunch of issues
  in the object infrastructure that weren't yet ready to handle this
  properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255335] Re: v2 token request always allow external auth method

2013-12-05 Thread Adam Young
** Changed in: keystone
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1255335

Title:
  v2 token request always allow external auth method

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Using external kerberos auth with httpd and ticket with curl requests.

  keystone.conf
  [auth]
  methods = password,token

  v2 token request:
  curl --negotiate -u : -H Content-type: application/json -d '{auth: 
{tenantName: tester}}'  
http://keystone-krb5.lab.eng.rdu2.redhat.com:5000/v2.0/tokens

  Successfully obtains a token.

  v3 token request:
  curl --negotiate -u : -H Content-type: application/json -d '{auth: 
{identity: {methods: []},scope: {project: {domain: {name: 
Default},name: tester' 
http://keystone-krb5.lab.eng.rdu2.redhat.com:5000/v3/auth/tokens

  Correctly handles the request with:
  message: Attempted to authenticate with an unsupported method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1255335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2013-12-05 Thread Sergio Cazzolato
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: oslo.messaging
 Assignee: (unassigned) = Sergio Cazzolato (sergio-j-cazzolato)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Messaging API for OpenStack:
  New
Status in Python client library for Keystone:
  In Progress
Status in Python client library for Swift:
  In Progress

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  compatibility checks with python 3.x are not being run in the gate,
  which is letting code issues in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2013-12-05 Thread Sergio Cazzolato
** No longer affects: oslo.messaging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Python client library for Keystone:
  In Progress
Status in Python client library for Swift:
  In Progress

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  compatibility checks with python 3.x are not being run in the gate,
  which is letting code issues in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1041308] Re: decouple sql debug from debug and implement a verbose

2013-12-05 Thread Alex Meade
** Changed in: glance
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1041308

Title:
  decouple sql debug from debug and implement a verbose

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  It would be useful to decouple sql debug from normal debug. Also,
  sqlalchemy supports the following debug levels using the echo
  parameter to create_engine:

  1) echo=False - supposed to be identical to echo set to None (the default),
but it doesn't behave well on sqlalchemy versions before 0.6.4.3, so I've
found the default to be safer; see
http://www.mail-archive.com/sqlalchemy@googlegroups.com/msg25071.html
  for more detail. Glance is using echo=False, which works fine on later
sqlalchemy but has problems on earlier versions.

  2) echo=True - the same as setting loglevel=debug, i.e. logs
  sql statements

  3) echo='debug' - returns the result set as well as logging sql
  statements, currently not used by glance

  It would be useful to get 3) for glance using a sql_verbose option, and I
  think because of 1) we should use the default rather than echo=False,
  as this works better on older versions of sqlalchemy.

  I will attach a patch which does the above.
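
  A sketch of what the proposed decoupling could look like; sql_debug and
sql_verbose are illustrative option names, not Glance's actual configuration
keys:

from sqlalchemy import create_engine

def make_engine(sql_connection, sql_debug=False, sql_verbose=False):
    if sql_verbose:
        echo = 'debug'   # log statements and result sets, i.e. option 3)
    elif sql_debug:
        echo = True      # log statements only, i.e. option 2)
    else:
        echo = None      # the library default, safest on old versions
    return create_engine(sql_connection, echo=echo)

engine = make_engine('sqlite://', sql_verbose=True)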

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1041308/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258275] [NEW] Migration record for resize not cleared if exception is thrown during the resize

2013-12-05 Thread Jennifer Mulsow
Public bug reported:

Testing on havana.

prep_resize() calls resource tracker's resize_claim() which creates a
migration record. This record is cleared during the
rt.drop_resize_claim() from confirm_resize() or revert_resize(). However,
if an exception is thrown before one of these is called, or after but
before they clean up the migration record, then the migration record
will hang around in the database indefinitely.

This results in a WARNING being logged every 60 seconds, as part of the
update_available_resource periodic task, for every resize operation that
ended with the instance in ERROR state, like the following:
2013-12-04 17:49:15.247 25592 WARNING nova.compute.resource_tracker 
[req-75e94365-1cca-4bca-92a7-19b2c62b9551 e4857f249aec4160bfa19c12eb805a96 
a42cfb9766bf41869efab25703f5ce7b] [instance: 
12d2551a-6403-4100-ba57-0995594c9c93] Instance not resizing, skipping migration.

This message is because the resource tracker's
_update_usage_from_migrations() logs this warning if a migration record
for an instance is found, but the instance's current state is not in a
resize state.

These messages will be permanent in the logs even after the instance in
question's state is reset, and even after a successful resize has
occurred on that instance. There is no way to clean up the old migration
record at this point.

It seems like there should be some handling when an exception occurs
during resize, finish_resize, confirm_resize, revert_resize, etc. that
will drop the resize claim, so the claim and migration record do not
persist indefinitely.
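
A sketch of the kind of cleanup being asked for here; 'resource_tracker',
'do_resize' and the helper names are placeholders for the real compute
manager plumbing:

def run_resize(resource_tracker, instance, do_resize):
    claim = resource_tracker.resize_claim(instance)
    try:
        do_resize(instance)
    except Exception:
        # Drop the claim so no stale migration record is left behind for
        # the periodic task to warn about every 60 seconds.
        resource_tracker.drop_resize_claim(instance, claim)
        raise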

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258275

Title:
  Migration record for resize not cleared if exception is thrown during
  the resize

Status in OpenStack Compute (Nova):
  New

Bug description:
  Testing on havana.

  prep_resize() calls resource tracker's resize_claim() which creates a
  migration record. This record is cleared during the
  rt.drop_resize_claim() from confirm_resize() or revert_resize(). However,
  if an exception is thrown before one of these is called, or after but
  before they clean up the migration record, then the migration record
  will hang around in the database indefinitely.

  This results in a WARNING being logged every 60 seconds, as part of the
update_available_resource periodic task, for every resize operation that
ended with the instance in ERROR state, like the following:
  2013-12-04 17:49:15.247 25592 WARNING nova.compute.resource_tracker 
[req-75e94365-1cca-4bca-92a7-19b2c62b9551 e4857f249aec4160bfa19c12eb805a96 
a42cfb9766bf41869efab25703f5ce7b] [instance: 
12d2551a-6403-4100-ba57-0995594c9c93] Instance not resizing, skipping migration.

  This message is because the resource tracker's
  _update_usage_from_migrations() logs this warning if a migration
  record for an instance is found, but the instance's current state is
  not in a resize state.

  These messages will be permanent in the logs even after the instance
  in question's state is reset, and even after a successful resize has
  occurred on that instance. There is no way to clean up the old
  migration record at this point.

  It seems like there should be some handling when an exception occurs
  during resize, finish_resize, confirm_resize, revert_resize, etc. that
  will drop the resize claim, so the claim and migration record do not
  persist indefinitely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-05 Thread Xavier Queralt
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1157950] Re: Add support for nova's new Fixed IP quota

2013-12-05 Thread Dean Troyer
OpenStackClient added support for fixed-ip quota in
https://review.openstack.org/36550

** Changed in: python-openstackclient
   Status: Triaged = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1157950

Title:
  Add support for nova's new Fixed IP quota

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Python client library for Nova:
  New
Status in OpenStack Command Line Client:
  Fix Released

Bug description:
  Last week a fixed ip quota was added to essex, folsom and trunk for
  security reasons. We need to add UI support for setting this quota to
  horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1157950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258319] [NEW] test_reboot_server_hard fails sporadically in swift check jobs

2013-12-05 Thread Peter Portante
Public bug reported:

test_reboot_server_hard fails sporadically in swift check jobs

I believe this has been reported before, but I was not able to find it.

See: http://logs.openstack.org/43/60343/1/gate/gate-tempest-dsvm-
full/c92d206/console.html

2013-12-05 21:29:18.174 | 
==
2013-12-05 21:29:18.183 | FAIL: 
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_reboot_server_hard[gate,smoke]
2013-12-05 21:29:18.186 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_reboot_server_hard[gate,smoke]
2013-12-05 21:29:18.200 | 
--
2013-12-05 21:29:18.206 | _StringException: Empty attachments:
2013-12-05 21:29:18.206 |   stderr
2013-12-05 21:29:18.207 |   stdout
2013-12-05 21:29:18.207 | 
2013-12-05 21:29:18.207 | pythonlogging:'': {{{

.
.
.

2013-12-05 21:29:19.174 | Traceback (most recent call last):
2013-12-05 21:29:19.175 |   File 
tempest/api/compute/servers/test_server_actions.py, line 83, in 
test_reboot_server_hard
2013-12-05 21:29:19.175 | 
self.client.wait_for_server_status(self.server_id, 'ACTIVE')
2013-12-05 21:29:19.175 |   File 
tempest/services/compute/xml/servers_client.py, line 369, in 
wait_for_server_status
2013-12-05 21:29:19.175 | extra_timeout=extra_timeout)
2013-12-05 21:29:19.176 |   File tempest/common/waiters.py, line 82, in 
wait_for_server_status
2013-12-05 21:29:19.176 | raise exceptions.TimeoutException(message)
2013-12-05 21:29:19.176 | TimeoutException: Request timed out
2013-12-05 21:29:19.177 | Details: Server f313af9a-8ec1-4f77-b63f-76d9317d6423 
failed to reach ACTIVE status within the required time (196 s). Current status: 
HARD_REBOOT.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258319

Title:
  test_reboot_server_hard fails sporadically in swift check jobs

Status in OpenStack Compute (Nova):
  New

Bug description:
  test_reboot_server_hard fails sporadically in swift check jobs

  I believe this has been reported before, but I was not able to find
  it.

  See: http://logs.openstack.org/43/60343/1/gate/gate-tempest-dsvm-
  full/c92d206/console.html

  2013-12-05 21:29:18.174 | 
==
  2013-12-05 21:29:18.183 | FAIL: 
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_reboot_server_hard[gate,smoke]
  2013-12-05 21:29:18.186 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_reboot_server_hard[gate,smoke]
  2013-12-05 21:29:18.200 | 
--
  2013-12-05 21:29:18.206 | _StringException: Empty attachments:
  2013-12-05 21:29:18.206 |   stderr
  2013-12-05 21:29:18.207 |   stdout
  2013-12-05 21:29:18.207 | 
  2013-12-05 21:29:18.207 | pythonlogging:'': {{{

  .
  .
  .

  2013-12-05 21:29:19.174 | Traceback (most recent call last):
  2013-12-05 21:29:19.175 |   File 
tempest/api/compute/servers/test_server_actions.py, line 83, in 
test_reboot_server_hard
  2013-12-05 21:29:19.175 | 
self.client.wait_for_server_status(self.server_id, 'ACTIVE')
  2013-12-05 21:29:19.175 |   File 
tempest/services/compute/xml/servers_client.py, line 369, in 
wait_for_server_status
  2013-12-05 21:29:19.175 | extra_timeout=extra_timeout)
  2013-12-05 21:29:19.176 |   File tempest/common/waiters.py, line 82, in 
wait_for_server_status
  2013-12-05 21:29:19.176 | raise exceptions.TimeoutException(message)
  2013-12-05 21:29:19.176 | TimeoutException: Request timed out
  2013-12-05 21:29:19.177 | Details: Server 
f313af9a-8ec1-4f77-b63f-76d9317d6423 failed to reach ACTIVE status within the 
required time (196 s). Current status: HARD_REBOOT.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1176190] Re: 'openssl' returned non-zero exit status 4 w/ SSL enabled

2013-12-05 Thread Adam Young
** Changed in: keystone
   Status: Expired = Confirmed

** Changed in: keystone
 Assignee: (unassigned) = Adam Young (ayoung)

** Project changed: keystone = python-keystoneclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1176190

Title:
  'openssl' returned non-zero exit status 4 w/ SSL enabled

Status in Python client library for Keystone:
  Confirmed

Bug description:
  From this thread on the openstack-operators mailing list:
  http://lists.openstack.org/pipermail/openstack-
  operators/2013-May/002903.html

  Version: grizzly

  If SSL is enabled in keystone, cinder command-line client commands
  don't work.

  Changing the following settings in keystone.conf and restarting
  resolves the issue:

  [ssl]
  enable = False

  [signing]
  token_format = UUID

  Error in cinder-api.log is:

  2013-04-30 20:00:42DEBUG [keystoneclient.middleware.auth_token] Token
  validation failure.
  Traceback (most recent call last):
    File
  /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
  line 688, in _validate_user_token
  verified = self.verify_signed_token(user_token)
    File
  /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
  line 1043, in verify_signed_token
  if self.is_signed_token_revoked(signed_text):
    File
  /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
  line 1007, in is_signed_token_revoked
  revocation_list = self.token_revocation_list
    File
  /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
  line 1079, in token_revocation_list
  self.token_revocation_list = self.fetch_revocation_list()
    File
  /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
  line 1109, in fetch_revocation_list
  return self.cms_verify(data['signed'])
    File
  /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
  line 1038, in cms_verify
  raise err
  CalledProcessError: Command 'openssl' returned non-zero exit status 4
  *2013-04-30 20:00:42DEBUG [keystoneclient.middleware.auth_token]
  Marking token 
*MIIMbwYJKoZIhvcNAQcCoIIMYDCCDFwCAQExCTAHBgUrDgMCGjCCC0gGCSqGSIb3DQEHAaCCCzkEggs1eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wNC0zMFQyMDowMDo0Mi40MDYzNTMiLCAiZXhwaXJlcyI6ICIyMDEzLTA1LTAxVDIwOjAwOjQyWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiNmFhM2JmMWFiNjgwNDAyMTg4NzNhNzgyZjkwY2ZmYTciLCAibmFtZSI6ICJhZG1pbiJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE3Mi4xOS4xMzYuMTE6ODc3NC92Mi82YWEzYmYxYWI2ODA0MDIxODg3M2E3ODJmOTBjZmZhNyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xNzIuMTkuMTM2LjEwOjg3NzQvdjIvNmFhM2JmMWFiNjgwNDAyMTg4NzNhNzgyZjkwY2ZmYTciLCAiaWQiOiAiMjYxNzgzOTEyNzVhNDJjZmEzY
  ...
  
i4xOS4xMzYuMTA6NTAwMC92Mi4wIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImlkZW50aXR5IiwgIm5hbWUiOiAia2V5c3RvbmUifV0sICJ1c2VyIjogeyJ1c2VybmFtZSI6ICJhZG1pbiIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiM2Y4MjY3M2I1ZmUwNDExYWI1ZmQ4MjE2YmRiNjkzYzYiLCAicm9sZXMiOiBbeyJuYW1lIjogIktleXN0b25lU2VydmljZUFkbWluIn0sIHsibmFtZSI6ICJLZXlzdG9uZUFkbWluIn0sIHsibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiNjY2NmZhOTkwNzhhNGYwN2EwNzBlN2U4NThjMzJmMDIiLCAiMzZiYmE5ZWYwMTc4NDQ4YzhhNjU0Yjc1ZmViM2EwZjQiLCAiYTI1NTgxZGQzNDcwNDYwYjkxZWNhYTI5ZWNhNzIwNWMiXX19fTGB-zCB-AIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIEwVVbnNldDEOMAwGA1UEBxMFVW5zZXQxDjAMBgNVBAoTBVVuc2V0MRgwFgYDVQQDEw93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEgYCbzuXTFZ8vZ2h4VnLUvdrzn5HCJdeEI5KkpLLHLkVvjrYwPm6NC+sRvDZ0Mg2MCMHtt1eK4o0GRBtmq8sTtUGqHuT5Ns41whp+r+diTGNfkW6mOaJBwpQhxbjXiTGcCHWJni3RkDTDinY-O7Zto3ct0etVmxvE62lqSFSQUKoyAg==
  *as unauthorized in memcache*
  *2013-04-30 20:00:42  WARNING [keystoneclient.middleware.auth_token]
  Authorization failed for
  
token*MIIMbwYJKoZIhvcNAQcCoIIMYDCCDFwCAQExCTAHBgUrDgMCGjCCC0gGCSqGSIb3DQEHAaCCCzkEggs1eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wNC0zMFQyMDowMDo0Mi40MDYzNTMiLCAiZXhwaXJlcyI6ICIyMDEzLTA1LTAxVDIwOjAwOjQyWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiNmFhM2JmMWFiNjgwNDAyMTg4NzNhNzgyZjkwY2ZmYTciLCAibmFtZSI6ICJhZG1pbiJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE3Mi4xOS4xMzYuMTE6ODc3NC92Mi82YWEzYmYxYWI2ODA0MDIxODg3M2E3ODJmOTBjZmZhNyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xNzIuMTkuMTM2LjEwOjg3NzQvdjIvNmFhM2JmMWFiNjgwNDAyMTg4NzNhNzgyZjkwY2ZmYTciLCAiaWQiOiAiMjYxNzgzOTEyNzVhNDJjZmEzYjc4NmFiMTUxYzhmOGEiLCAicHVibGljVVJMIjogImh0dHA6Ly8xNzIuMTkuMTM2LjExOjg3NzQvdjIvNmFhM2JmMWFiNjgwNDAyMTg4NzNhNzgyZjkwY2ZmYTcifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuY
  ...
  

[Yahoo-eng-team] [Bug 1258331] [NEW] Glance v2: Image property quotas are unforgiving when quota is exceeded

2013-12-05 Thread Alex Meade
Public bug reported:

Glance v2 image property quota enforcement can be unintuitive.

There are a number of reasons an image may have more properties than the
image_property_quota allows, e.g. if the quota is lowered or when the image
is first created. If this happens then any request to modify the image must
result in the image being under the new quota. This means that even if the
user is removing properties they can still get a 413 overlimit from glance
if the result would still be over the limit.

This is not a great user experience and is unintuitive. Ideally a user
should be able to remove properties, or perform any other action except
adding a property, while they are over their quota for a given image.
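
A sketch of the friendlier policy suggested above: enforce the quota only on
net-new properties and always allow removals (function and argument names
are illustrative):

class OverQuota(Exception):
    pass

def check_property_quota(current_props, new_props, quota):
    added = set(new_props) - set(current_props)
    # Removing or keeping properties is always allowed, even when the
    # image is already over a (possibly lowered) quota.
    if added and len(new_props) > quota:
        raise OverQuota('image property quota exceeded (%d > %d)'
                        % (len(new_props), quota))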

** Affects: glance
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1258331

Title:
  Glance v2: Image property quotas are unforgiving when quota is
  exceeded

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Glance v2 image property quota enforcement can be unintuitive.

  There are a number of reasons an image may have more properties than
  the image_property_quota allows, e.g. if the quota is lowered or when
  the image is first created. If this happens then any request to modify
  the image must result in the image being under the new quota. This means
  that even if the user is removing properties they can still get a 413
  overlimit from glance if the result would still be over the limit.

  This is not a great user experience and is unintuitive. Ideally a user
  should be able to remove properties, or perform any other action except
  adding a property, while they are over their quota for a given image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1258331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253905] Re: Keystone doesn't handle UTF8 in exceptions

2013-12-05 Thread Alan Pevec
** Also affects: keystone/havana
   Importance: Undecided
   Status: New

** Changed in: keystone/havana
   Importance: Undecided = High

** Changed in: keystone/havana
   Status: New = Invalid

** Changed in: keystone/havana
 Assignee: (unassigned) = Jamie Lennox (jamielennox)

** Changed in: keystone/havana
   Status: Invalid = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1253905

Title:
  Keystone doesn't handle UTF8 in exceptions

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress

Bug description:
  Originally reported:
  https://bugzilla.redhat.com/show_bug.cgi?id=1033190

  Description of problem:

  [root@public-control1 ~]# keystone tenant-create --name Consulting – 
Middleware Delivery 
  Unable to communicate with identity service: {error: {message: An 
unexpected error prevented the server from fulfilling your request. 'ascii' 
codec can't encode character u'\\u2013' in position 11: ordinal not in 
range(128), code: 500, title: Internal Server Error}}. (HTTP 500)

  
  NB: the dash in the name is not an ascii dash.  It's something else.

  Version-Release number of selected component (if applicable):

  openstack-keystone-2013.1.3-2.el6ost.noarch

  How reproducible:

  Every

  
  Additional info:

  Performing the same command on a Folsom cloud works just fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1253905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258342] [NEW] glance /v1.0/images does not return reasonable results

2013-12-05 Thread Scott Devoid
Public bug reported:

This may not be a bug, per se. What is returned from the basic GET
request of v1/images/detail does not make sense.

Glance appears to only query for public images. [1] Indeed, adding an
is_public=False filter results in private images being shown. However, in
this case it shows *all* private images, not just the images that are
part of my tenant. Furthermore, I would expect the basic query to return
images to which either (a) my tenant has access or (b) I personally
have access. Neither of these cases appears to be queried in the
default request.

[1] http://paste.openstack.org/show/54544/
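
A sketch of the visibility rule the default listing is expected to apply;
'image' stands for any object with is_public, owner and members attributes:

def visible_to(image, tenant_id):
    # Public images, images owned by my tenant, or images explicitly
    # shared with my tenant should all show up in the default listing.
    return (image.is_public
            or image.owner == tenant_id
            or tenant_id in getattr(image, 'members', ()))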

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1258342

Title:
  glance /v1.0/images does not return reasonable results

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  This may not be a bug, per se. What is returned from the basic GET
  request of v1/images/detail does not make sense.

  Glance appears to only query for public images. [1] Indeed, adding an
  is_public=False filter results in private images being shown. However, in
  this case it shows *all* private images, not just the images that are
  part of my tenant. Furthermore, I would expect the basic query to
  return images to which either (a) my tenant has access or (b) I
  personally have access. Neither of these cases appears to be queried
  in the default request.

  [1] http://paste.openstack.org/show/54544/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1258342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258349] [NEW] Rebuild Instance dialog fails for nameless images

2013-12-05 Thread Kieran Spear
Public bug reported:

The image list in Rebuild Instance is constructed from (id, name) pairs.
As the name is a string, the transform function is never called unless the
image name happens to be None, in which case it crashes the view.
** Affects: horizon
 Importance: Low
 Assignee: Kieran Spear (kspear)
 Status: In Progress

** Changed in: horizon
   Status: New = In Progress

** Changed in: horizon
 Assignee: (unassigned) = Kieran Spear (kspear)

** Changed in: horizon
   Importance: Undecided = Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1258349

Title:
  Rebuild Instance dialog fails for nameless images

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The image list in Rebuild Instance is constructed from (id, name)
  pairs. As the name is a string, the transform function is never called
  unless the image name happens to be None, in which case it crashes the
  view.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1258349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258360] [NEW] Remove unneeded call to conductor in network interface

2013-12-05 Thread Aaron Rosen
Public bug reported:

Remove unneeded call to conductor in network interface

** Affects: nova
 Importance: Medium
 Assignee: Aaron Rosen (arosen)
 Status: In Progress


** Tags: network

** Changed in: nova
 Assignee: (unassigned) = Aaron Rosen (arosen)

** Changed in: nova
   Importance: Undecided = Medium

** Tags added: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258360

Title:
  Remove unneeded call to conductor in network interface

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Remove unneeded call to conductor in network interface

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258369] [NEW] refactor showcommand and RetrievePoolStats

2013-12-05 Thread yong sheng gong
Public bug reported:

RetrievePoolStats duplicates the ShowCommand code because the id lookup and
the returned data use different resource names.

def get_data(self, parsed_args):
self.log.debug('run(%s)' % parsed_args)
neutron_client = self.get_client()
neutron_client.format = parsed_args.request_format
pool_id = neutronV20.find_resourceid_by_name_or_id(
self.get_client(), 'pool', parsed_args.id)
params = {}
if parsed_args.fields:
params = {'fields': parsed_args.fields}

data = neutron_client.retrieve_pool_stats(pool_id, **params)
self.format_output_data(data)
stats = data['stats']
if 'stats' in data:
return zip(*sorted(stats.iteritems()))
else:
return None
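
As a purely illustrative sketch of the refactoring idea, the response key
could be parameterized so the stats command does not have to copy the whole
command body; the helper name and shape below are assumptions, not the
actual python-neutronclient API.

def unpack_resource(data, data_key):
    # Guard the lookup so a missing key returns None instead of raising,
    # then unpack the dict into (column names, values) tuples.
    resource = data.get(data_key)
    if not resource:
        return None
    return list(zip(*sorted(resource.items())))

# A stats command would pass data_key='stats', while a generic show command
# would pass the resource name itself, e.g. 'pool'.
print(unpack_resource({'stats': {'bytes_in': 10, 'bytes_out': 20}}, 'stats'))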

** Affects: python-neutronclient
 Importance: Medium
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Project changed: neutron = python-neutronclient

** Changed in: python-neutronclient
 Assignee: (unassigned) = yong sheng gong (gongysh)

** Changed in: python-neutronclient
   Importance: Undecided = Medium

** Changed in: python-neutronclient
Milestone: None = 2.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258369

Title:
  refactor showcommand and RetrievePoolStats

Status in Python client library for Neutron:
  In Progress

Bug description:
  RetrievePoolStats duplicates the ShowCommand code because the id lookup
  and the returned data use different resource names.

  def get_data(self, parsed_args):
  self.log.debug('run(%s)' % parsed_args)
  neutron_client = self.get_client()
  neutron_client.format = parsed_args.request_format
  pool_id = neutronV20.find_resourceid_by_name_or_id(
  self.get_client(), 'pool', parsed_args.id)
  params = {}
  if parsed_args.fields:
  params = {'fields': parsed_args.fields}

  data = neutron_client.retrieve_pool_stats(pool_id, **params)
  self.format_output_data(data)
  stats = data['stats']
  if 'stats' in data:
  return zip(*sorted(stats.iteritems()))
  else:
  return None

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1258369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258375] [NEW] only one subnet_id is allowed behind a router for vpnservice object

2013-12-05 Thread yong sheng gong
Public bug reported:

I think we should allow more than one subnet_id in a vpnservice object,
but the model below limits it to a single subnet_id.
https://github.com/openstack/neutron/blob/master/neutron/extensions/vpnaas.py
RESOURCE_ATTRIBUTE_MAP = {

'vpnservices': {
'id': {'allow_post': False, 'allow_put': False,
   'validate': {'type:uuid': None},
   'is_visible': True,
   'primary_key': True},
'tenant_id': {'allow_post': True, 'allow_put': False,
  'validate': {'type:string': None},
  'required_by_policy': True,
  'is_visible': True},
'name': {'allow_post': True, 'allow_put': True,
 'validate': {'type:string': None},
 'is_visible': True, 'default': ''},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'subnet_id': {'allow_post': True, 'allow_put': False,
  'validate': {'type:uuid': None},
  'is_visible': True},
'router_id': {'allow_post': True, 'allow_put': False,
  'validate': {'type:uuid': None},
  'is_visible': True},
'admin_state_up': {'allow_post': True, 'allow_put': True,
   'default': True,
   'convert_to': attr.convert_to_boolean,
   'is_visible': True},
'status': {'allow_post': False, 'allow_put': False,
   'is_visible': True}
},

With such a limit, I don't think there is a way to expose the other subnets
behind the router over the VPN!
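
Purely as a sketch of what a multi-subnet attribute might look like if the
model allowed it; the 'subnet_ids' name, the 'type:uuid_list' validator and
the converter below are assumptions, not part of the current vpnaas
extension.

def convert_to_list(value):
    # Accept either a single UUID string or a list of UUIDs.
    return value if isinstance(value, list) else [value]

VPNSERVICE_SUBNETS_ATTR = {
    'subnet_ids': {'allow_post': True, 'allow_put': True,
                   'validate': {'type:uuid_list': None},
                   'convert_to': convert_to_list,
                   'is_visible': True},
}

print(VPNSERVICE_SUBNETS_ATTR['subnet_ids']['convert_to']('subnet-uuid'))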

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258375

Title:
  only one subnet_id is allowed behind a router for vpnservice object

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I think we should allow more than one subnet_id in a vpnservice object,
  but the model below limits it to a single subnet_id.
  https://github.com/openstack/neutron/blob/master/neutron/extensions/vpnaas.py
  RESOURCE_ATTRIBUTE_MAP = {

  'vpnservices': {
  'id': {'allow_post': False, 'allow_put': False,
 'validate': {'type:uuid': None},
 'is_visible': True,
 'primary_key': True},
  'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
  'name': {'allow_post': True, 'allow_put': True,
   'validate': {'type:string': None},
   'is_visible': True, 'default': ''},
  'description': {'allow_post': True, 'allow_put': True,
  'validate': {'type:string': None},
  'is_visible': True, 'default': ''},
  'subnet_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},
  'router_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},
  'admin_state_up': {'allow_post': True, 'allow_put': True,
 'default': True,
 'convert_to': attr.convert_to_boolean,
 'is_visible': True},
  'status': {'allow_post': False, 'allow_put': False,
 'is_visible': True}
  },

  With such a limit, I don't think there is a way to expose the other
  subnets behind the router over the VPN!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258376] [NEW] Tempest test_list_instance_usage_audit_logs fails if the hostname of the instance begins with a number

2013-12-05 Thread Jerry Zhao
Public bug reported:

In the list instance usage audit logs XML response, the element
<10-150-10-15> inside <log></log> violates XML element naming rules.
http://www.w3schools.com/xml/xml_elements.asp
XML Naming Rules
XML elements must follow these naming rules:

Names can contain letters, numbers, and other characters
Names cannot start with a number or punctuation character
Names cannot start with the letters xml (or XML, or Xml, etc)
Names cannot contain spaces

So if the hostname of the instance begins with a number, a punctuation
character, etc., the list instance usage audit logs response will violate
these rules. We should either not use the host name as the element name or
add a prefix to the host name for the tag.
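
A minimal sketch of the prefixing idea, for illustration only; the 'host_'
prefix and the helper below are assumptions, not the fix adopted by Nova.

import re

def xml_safe_tag(name):
    # Replace characters that are not valid in element names, then make
    # sure the result starts with a letter or an underscore.
    tag = re.sub(r'[^A-Za-z0-9_.-]', '_', name)
    if not re.match(r'[A-Za-z_]', tag):
        tag = 'host_' + tag
    return tag

print(xml_safe_tag('10-150-10-15'))  # host_10-150-10-15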


22:04:48 ==
22:04:48 FAIL: 
tempest.api.compute.admin.test_instance_usage_audit_log.InstanceUsageAuditLogTestXML.test_list_instance_usage_audit_logs[gate]
22:04:48 
tempest.api.compute.admin.test_instance_usage_audit_log.InstanceUsageAuditLogTestXML.test_list_instance_usage_audit_logs[gate]
22:04:48 --
22:04:48 _StringException: Empty attachments:
22:04:48   stderr
22:04:48   stdout
22:04:48 
22:04:48 pythonlogging:'': {{{
22:04:48 2013-12-05 07:31:08,331 Request: GET 
http://127.0.0.1:8774/v2/08aefd9fa58347028eec8e5969cdc26a/os-instance_usage_audit_log
22:04:48 2013-12-05 07:31:08,332 Request Headers: {'Content-Type': 
'application/xml', 'Accept': 'application/xml', 'X-Auth-Token': 'Token 
omitted'}
22:04:48 2013-12-05 07:31:08,791 Response Status: 200
22:04:48 2013-12-05 07:31:08,792 Nova request id: 
req-d858e0ec-6047-4ecc-a81b-c5bcf99f3f87
22:04:48 2013-12-05 07:31:08,792 Response Headers: {'content-length': '646', 
'content-location': 
u'http://127.0.0.1:8774/v2/08aefd9fa58347028eec8e5969cdc26a/os-instance_usage_audit_log',
 'date': 'Thu, 05 Dec 2013 13:31:08 GMT', 'content-type': 'application/xml', 
'connection': 'close'}
22:04:48 2013-12-05 07:31:08,792 Response Body: 
<instance_usage_audit_logs><total_errors>0</total_errors><total_instances>0</total_instances><log><10-150-10-15><instances>0</instances><message>Instance usage audit ran for host 10-150-10-15, 0 instances in 0.0541520118713 seconds.</message><errors>0</errors><state>DONE</state></10-150-10-15></log><num_hosts_running>0</num_hosts_running><num_hosts_done>1</num_hosts_done><num_hosts_not_run>0</num_hosts_not_run><hosts_not_run/><overall_status>ALL hosts done. 0 errors.</overall_status><period_ending>2013-12-05 13:00:00</period_ending><period_beginning>2013-12-05 12:00:00</period_beginning><num_hosts>1</num_hosts></instance_usage_audit_logs>
22:04:48 }}}
22:04:48 
22:04:48 Traceback (most recent call last):
22:04:48   File tempest/api/compute/admin/test_instance_usage_audit_log.py, 
line 37, in test_list_instance_usage_audit_logs
22:04:48 resp, body = self.adm_client.list_instance_usage_audit_logs()
22:04:48   File 
tempest/services/compute/xml/instance_usage_audit_log_client.py, line 34, in 
list_instance_usage_audit_logs
22:04:48 instance_usage_audit_logs = xml_to_json(etree.fromstring(body))
22:04:48   File lxml.etree.pyx, line 2754, in lxml.etree.fromstring 
(src/lxml/lxml.etree.c:54631)
22:04:48   File parser.pxi, line 1578, in lxml.etree._parseMemoryDocument 
(src/lxml/lxml.etree.c:82748)
22:04:48   File parser.pxi, line 1457, in lxml.etree._parseDoc 
(src/lxml/lxml.etree.c:81546)
22:04:48   File parser.pxi, line 965, in lxml.etree._BaseParser._parseDoc 
(src/lxml/lxml.etree.c:78216)
22:04:48   File parser.pxi, line 569, in 
lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:74472)
22:04:48   File parser.pxi, line 650, in lxml.etree._handleParseResult 
(src/lxml/lxml.etree.c:75363)
22:04:48   File parser.pxi, line 590, in lxml.etree._raiseParseError 
(src/lxml/lxml.etree.c:74696)
22:04:48 XMLSyntaxError: StartTag: invalid element name, line 1, column 71
22:04:48 
22:04:48 
22:04:48 ==
22:04:48 FAIL: process-returncode
22:04:48 process-returncode
22:04:48 --
22:04:48 _StringException: Binary content:
22:04:48   traceback (test/plain; charset=utf8)
22:04:48

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258376

Title:
  Tempest test_list_instance_usage_audit_logs fails if the hostname of
  the instance begins with a number

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the list instance usage audit logs XML response, the element
  <10-150-10-15> inside <log></log> violates XML element naming rules.
  http://www.w3schools.com/xml/xml_elements.asp
  XML Naming Rules
  XML elements must follow these naming rules:

  Names can contain letters, numbers, and other characters
  Names cannot start with a number or punctuation character

[Yahoo-eng-team] [Bug 1161988] Re: Flavor naming shouldn't include m1

2013-12-05 Thread Tom Fifield
** Changed in: nova
   Status: In Progress = Won't Fix

** Changed in: tempest
   Status: In Progress = Won't Fix

** Changed in: python-novaclient
   Status: In Progress = Won't Fix

** Changed in: devstack
   Status: In Progress = Confirmed

** Changed in: horizon
   Status: In Progress = Confirmed

** Changed in: python-heatclient
   Status: In Progress = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161988

Title:
  Flavor naming shouldn't include m1

Status in devstack - openstack dev environments:
  Confirmed
Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (Nova):
  Won't Fix
Status in Python client library for heat:
  Confirmed
Status in Python client library for Nova:
  Won't Fix
Status in Tempest:
  Won't Fix

Bug description:
  Flavor naming shouldn't include m1

  ENV: devstack trunk / nova 814e109845b3b2546f60e3f537dcfe32893906a3
  (grizzly)

  The default flavors are now:
  m1.nano 
  m1.micro
  m1.tiny 
  m1.small 
  m1.medium 
  m1.large 
  m1.xlarge

  We are propagating the AWS m1 designation. This is not useful
  information to the OpenStack administrator or user, and it is actually
  possible misinformation, as m1 on AWS suggests a specific
  generation of hardware.

  POSSIBLE SOLUTION:

  Drop the m1:
  nano 
  micro
  tiny 
  small 
  medium 
  large 
  xlarge

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1161988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258380] [NEW] unimplemented get_floating_ips_by_fixed_address in neutronv2/api.py

2013-12-05 Thread Mathew Odden
Public bug reported:

get_floating_ips_by_fixed_address is currently hard coded to return an
empty list in the nova neutron API.

https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L770

This is because of a change that was a work around for some issues back
in the Folsom / Grizzly releases.

https://github.com/openstack/nova/commit/c0709bdd82c83e16cab6ed854d2ef873eb775473

This should be implemented or removed if it isn't needed any longer
(dead code)
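
A hedged sketch of one possible shape for the missing implementation,
filtering Neutron floating IPs on their fixed address; the client call and
filter name are assumptions about python-neutronclient, not code from
nova/network/neutronv2/api.py.

def get_floating_ips_by_fixed_address(neutron_client, fixed_ip):
    # Ask Neutron for floating IPs whose fixed address matches, then return
    # just the floating addresses.
    result = neutron_client.list_floatingips(fixed_ip_address=fixed_ip)
    return [fip['floating_ip_address']
            for fip in result.get('floatingips', [])]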

** Affects: nova
 Importance: Undecided
 Status: Confirmed


** Tags: network

** Tags added: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258380

Title:
  unimplemented get_floating_ips_by_fixed_address in neutronv2/api.py

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  get_floating_ips_by_fixed_address is currently hard coded to return an
  empty list in the nova neutron API.

  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L770

  This is because of a change that was a work around for some issues
  back in the Folsom / Grizzly releases.

  
https://github.com/openstack/nova/commit/c0709bdd82c83e16cab6ed854d2ef873eb775473

  This should be implemented or removed if it isn't needed any longer
  (dead code)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258379] [NEW] vpnservice's router must have gateway interface set

2013-12-05 Thread yong sheng gong
Public bug reported:

at line
https://github.com/openstack/neutron/blob/master/neutron/services/vpn/service_drivers/ipsec.py#L172

it is obvious that the router must have its gateway interface set before it
can be used as a vpnservice router.
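
For illustration only, the kind of guard that could be applied before a
router is accepted for a vpnservice, given that the ipsec driver
dereferences the router's gateway port; 'gw_port_id' is the field used in
Neutron router dicts, and the exception type here is just a placeholder.

def ensure_router_has_gateway(router):
    # Reject routers whose external gateway has not been set.
    if not router.get('gw_port_id'):
        raise ValueError('router %s has no external gateway set, so it '
                         'cannot back a vpnservice' % router.get('id'))

ensure_router_has_gateway({'id': 'r1', 'gw_port_id': 'p1'})  # passes silently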

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258379

Title:
  vpnservice's router must have gateway interface set

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  at line
  
https://github.com/openstack/neutron/blob/master/neutron/services/vpn/service_drivers/ipsec.py#L172

  it is obvious that the router must have its gateway interface set before
  it can be used as a vpnservice router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258384] [NEW] Double call of update_instance_cache_with_nw_info

2013-12-05 Thread Michael H Wilson
Public bug reported:

A call to associate_floating_ip results in two calls to
update_instance_cache_with_nw_info. One when the decorator is called:

https://github.com/openstack/nova/blob/master/nova/network/api.py#L232

another when the operation is complete:

https://github.com/openstack/nova/blob/master/nova/network/api.py#L255

It looks like the right course of action is to remove the decorator.
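
A toy illustration (not Nova code) of why wrapping the call in a
refresh-cache decorator while the method also refreshes explicitly results
in two cache updates per call; the names below are stand-ins.

calls = []

def refresh_cache(func):
    # Stand-in for the real decorator: refresh the cache after the call.
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        calls.append('decorator refresh')
        return result
    return wrapper

@refresh_cache
def associate_floating_ip():
    calls.append('explicit refresh')  # stands in for the refresh at L255

associate_floating_ip()
print(calls)  # ['explicit refresh', 'decorator refresh'] -- two refreshes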

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258384

Title:
  Double call of update_instance_cache_with_nw_info

Status in OpenStack Compute (Nova):
  New

Bug description:
  A call to associate_floating_ip results in two calls to
  update_instance_cache_with_nw_info. One when the decorator is called:

  https://github.com/openstack/nova/blob/master/nova/network/api.py#L232

  another when the operation is complete:

  https://github.com/openstack/nova/blob/master/nova/network/api.py#L255

  It looks like the right course of action is to remove the decorator.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258390] [NEW] Removed erroneous config file comment

2013-12-05 Thread John Dewey
Public bug reported:

The comment 'DHCP agents needs it.' at
https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L183
is incorrect. Looking through the code, I see no place where this is actually
true. I believe it to be an error.

This looks to be _only_ related to notifications, and has no impact on the
DHCP agents, which listen to their own set of queues.

** Affects: neutron
 Importance: Undecided
 Assignee: John Dewey (retr0h)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1258390

Title:
  Removed erroneous config file comment

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The comment 'DHCP agents needs it.' at
  https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L183
  is incorrect. Looking through the code, I see no place where this is
  actually true. I believe it to be an error.

  This looks to be _only_ related to notifications, and has no impact on
  the DHCP agents, which listen to their own set of queues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1258390/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258421] [NEW] NotRegistered: Dashboard with slug router is not registered.

2013-12-05 Thread li,chen
Public bug reported:

I'm working on CentOS 6.4 + OpenStack Havana; everything works fine
except Horizon.

I get error in /var/log/httpd/error_log:

[Fri Dec 06 01:27:42 2013] [error] REQ: curl -i -X GET http://192.168.11.11:35357/v2.0/tenants -H "User-Agent: python-keystoneclient" -H "Forwarded: for=192.168.4.254;by=python-keystoneclient" -H "X-Auth-Token: 3626890532059c0fc72580224dd15ab1"
[Fri Dec 06 01:27:42 2013] [error] INFO:urllib3.connectionpool:Starting new 
HTTP connection (1): 192.168.11.11
[Fri Dec 06 01:27:42 2013] [error] DEBUG:urllib3.connectionpool:GET 
/v2.0/tenants HTTP/1.1 200 381
[Fri Dec 06 01:27:42 2013] [error] RESP: [200] {'date': 'Fri, 06 Dec 2013 
07:27:42 GMT', 'content-type': 'application/json', 'content-length': '381', 
'vary': 'X-Auth-Token'}
[Fri Dec 06 01:27:42 2013] [error] RESP BODY: {"tenants_links": [], "tenants": [{"description": "admin tenant", "enabled": true, "id": "45c69667e2a64c889719ef8d8e0dd098", "name": "admin"}, {"description": "Tenant for the openstack services", "enabled": true, "id": "4cc060c11bc046178c253aa9521aa152", "name": "services"}, {"description": null, "enabled": true, "id": "ca47670e792e46d48363dee7e7e43688", "name": "policy_test"}]}
[Fri Dec 06 01:27:42 2013] [error]
[Fri Dec 06 01:27:42 2013] [error] ERROR:django.request:Internal Server Error: 
/dashboard/admin/
[Fri Dec 06 01:27:42 2013] [error] Traceback (most recent call last):
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 136, in 
get_response
[Fri Dec 06 01:27:42 2013] [error] response = response.render()
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/response.py, line 104, in 
render
[Fri Dec 06 01:27:42 2013] [error] self._set_content(self.rendered_content)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/response.py, line 81, in 
rendered_content
[Fri Dec 06 01:27:42 2013] [error] content = template.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 140, in render
[Fri Dec 06 01:27:42 2013] [error] return self._render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 134, in _render
[Fri Dec 06 01:27:42 2013] [error] return self.nodelist.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
[Fri Dec 06 01:27:42 2013] [error] return node.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 123, in 
render
[Fri Dec 06 01:27:42 2013] [error] return compiled_parent._render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 134, in _render
[Fri Dec 06 01:27:42 2013] [error] return self.nodelist.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
[Fri Dec 06 01:27:42 2013] [error] return node.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 62, in 
render
[Fri Dec 06 01:27:42 2013] [error] result = block.nodelist.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
[Fri Dec 06 01:27:42 2013] [error] return node.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 62, in 
render
[Fri Dec 06 01:27:42 2013] [error] result = block.nodelist.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
[Fri Dec 06 01:27:42 2013] [error] return node.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 155, in 
render
[Fri Dec 06 01:27:42 2013] [error] return 
self.render_template(self.template, context)
[Fri Dec