[Yahoo-eng-team] [Bug 1273608] [NEW] wizard broken in non modal view

2014-01-28 Thread Yves-Gwenael Bourhis
Public bug reported:

When creating a network, we get a wizard view since
https://review.openstack.org/#/c/64644/
However, even with JavaScript enabled, the view is rendered in non-modal mode after a login timeout.
This non-modal mode is also reachable directly via the URL
http:///project/networks/create

First issue:
============
In this mode the form no longer displays as a wizard but with tabs, as before, and the "next" and "back" buttons are not functional.

Second issue:
============
The required fields raise their errors only after clicking the "create" button, and not when changing tabs as they should in wizard mode. (This can be tested thoroughly with https://review.openstack.org/#/c/63078/ )

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1273608

Title:
  wizard broken in non modal view

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1273608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273622] [NEW] stevedore 0.14.1 breaks Havana stable gate tests

2014-01-28 Thread Yaguang Tang
Public bug reported:

2014-01-28 05:25:01.008 | Traceback (most recent call last):
2014-01-28 05:25:01.009 |   File "nova/tests/test_hooks.py", line 112, in test_basic
2014-01-28 05:25:01.009 |     self.assertEqual(42, self._hooked(1))
2014-01-28 05:25:01.009 |   File "nova/hooks.py", line 98, in inner
2014-01-28 05:25:01.010 |     manager = _HOOKS.setdefault(name, HookManager(name))
2014-01-28 05:25:01.010 |   File "nova/hooks.py", line 63, in __init__
2014-01-28 05:25:01.010 |     super(HookManager, self).__init__(NS, name, invoke_on_load=True)
2014-01-28 05:25:01.011 |   File "/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/stevedore/hook.py", line 43, in __init__
2014-01-28 05:25:01.011 |     verify_requirements=verify_requirements,
2014-01-28 05:25:01.012 |   File "/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/stevedore/named.py", line 55, in __init__
2014-01-28 05:25:01.012 |     verify_requirements)
2014-01-28 05:25:01.012 | TypeError: _mock_load_plugins() takes exactly 4 arguments (5 given)
2014-01-28 05:25:01.013 | 
======================================================================
2014-01-28 05:25:01.013 | FAIL: nova.tests.test_hooks.HookTestCaseWithFunction.test_order_of_execution
2014-01-28 05:25:01.013 | tags: worker-0

Details: http://logs.openstack.org/24/61924/1/gate/gate-nova-python26/2a687f6/console.html

** Affects: nova
 Importance: Undecided
 Assignee: Yaguang Tang (heut2008)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Yaguang Tang (heut2008)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273622

Title:
  stevedore 0.14.1 breaks Havana stable gate tests

Status in OpenStack Compute (Nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273622/+subscriptions



[Yahoo-eng-team] [Bug 1053931] Re: Volume hangs in "creating" status even though scheduler raises "No valid host" exception

2014-01-28 Thread Huang Zhiteng
Dafna,

I think this bug has been fixed.

Let me explain a bit more about the workflow of creating a volume:
1) The user sends a request to the Cinder API service;
2) the API creates a DB entry for the volume and marks its status 'creating'
(https://github.com/openstack/cinder/blob/stable/havana/cinder/volume/flows/create_volume/__init__.py#L545)
and sends an RPC message to the scheduler;
3) the scheduler picks up the message, makes a placement decision and, if a back-end is available, sends the request via RPC to the volume service;
4) the volume service picks up the message and performs the real job of creating the volume for the user.

There are multiple cases in which a volume's status can be stuck in
'creating':

a) something went wrong while the RPC message was being processed by the
scheduler (e.g. the scheduler service is down (related to this change and
bug: https://review.openstack.org/#/c/64014/), the message was lost, or the
scheduler service went down while processing the message);

b) something went wrong AFTER a back-end was chosen, i.e. the scheduler
successfully sent the message to the target back-end, but the message was
never picked up by the target volume service, or there was an unhandled
exception while the volume service was handling the request.

If this bug happens again, could you describe the steps to reproduce it?
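The fix for these cases follows a simple pattern: when placement fails, flip the volume to 'error' instead of leaving it in 'creating', so the failure is visible and the volume can be deleted. A simplified sketch of that error path (illustrative names, not Cinder's actual code):

```python
class NoValidHost(Exception):
    pass

def schedule_create_volume(db, volume_id, pick_backend):
    """Place the volume; on failure, mark it 'error' so it is not left
    stuck in 'creating' and can still be deleted by the user."""
    try:
        host = pick_backend()
    except NoValidHost:
        db[volume_id]["status"] = "error"   # instead of staying 'creating'
        raise
    db[volume_id]["host"] = host
    return host

db = {"vol-1": {"status": "creating", "host": None}}

def no_backend():
    raise NoValidHost("No valid host was found.")

try:
    schedule_create_volume(db, "vol-1", no_backend)
except NoValidHost:
    pass
print(db["vol-1"]["status"])  # error
```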

** Changed in: cinder
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1053931

Title:
  Volume hangs in "creating" status even though scheduler raises "No
  valid host" exception

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When the volume creation process fails during scheduling (i.e. there
  is no appropriate host) the status in DB (and in nova volume-list
  output as a result) hangs with a "creating..." value.

  In such a case, to figure out that volume creation failed, one has to
  go and check /var/log/nova/nova-scheduler.log (which is not an obvious
  place to look). Moreover, a volume stuck in "creating" status cannot
  be deleted with nova volume-delete; to delete it, one has to change
  its status to 'error' directly in the DB.

  
  Simple scheduler is being used (nova.conf):

  --scheduler_driver=nova.scheduler.simple.SimpleScheduler

  
  Here is a sample output from DB:

  *** 3. row ***
             created_at: 2012-09-21 09:55:42
             updated_at: NULL
             deleted_at: NULL
                deleted: 0
                     id: 15
                 ec2_id: NULL
                user_id: b0aadfc80b094d94b78d68dcdc7e8757
             project_id: 3b892f660ea2458aa9aa9c9a21352632
                   host: NULL
                   size: 1
      availability_zone: nova
            instance_id: NULL
             mountpoint: NULL
            attach_time: NULL
                 status: creating
          attach_status: detached
           scheduled_at: NULL
            launched_at: NULL
          terminated_at: NULL
           display_name: NULL
    display_description: NULL
      provider_location: NULL
          provider_auth: NULL
            snapshot_id: NULL
         volume_type_id: NULL

  
  Here is a part of interest in nova-scheduler.log:

  pic': u'volume', u'filter_properties': {u'scheduler_hints': {}}, u'snapshot_id': None, u'volume_id': 16}, u'_context_auth_token': '', u'_context_is_admin': True, u'_context_project_id': u'3b892f660ea2458aa9aa9c9a21352632', u'_context_timestamp': u'2012-09-21T10:15:47.091307', u'_context_user_id': u'b0aadfc80b094d94b78d68dcdc7e8757', u'method': u'create_volume', u'_context_remote_address': u'172.18.67.146'} from (pid=11609) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
   15 2012-09-21 10:15:47 DEBUG nova.rpc.amqp [req-01f7dd30-0421-4ef3-a675-16b0cf1362eb b0aadfc80b094d94b78d68dcdc7e8757 3b892f660ea2458aa9aa9c9a21352632] unpacked context: {'user_id': u'b0aadfc80b094d94b78d68dcdc7e8757', 'roles': [u'admin'], 'timestamp': '2012-09-21T10:15:47.091307', 'auth_token': '', 'remote_address': u'172.18.67.146', 'is_admin': True, 'request_id': u'req-01f7dd30-0421-4ef3-a675-16b0cf1362eb', 'project_id': u'3b892f660ea2458aa9aa9c9a21352632', 'read_deleted': u'no'} from (pid=11609) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
   14 2012-09-21 10:15:47 WARNING nova.scheduler.manager [req-01f7dd30-0421-4ef3-a675-16b0cf1362eb b0aadfc80b094d94b78d68dcdc7e8757 3b892f660ea2458aa9aa9c9a21352632] Failed to schedule_create_volume: No valid host was found. Is the appropriate service running?
   13 2012-09-21 10:15:47 ERROR nova.rpc.amqp [req-01f7dd30-0421-4ef3-a675-16b0cf1362eb b0aadfc80b094d94b78d68dcdc7e8757 3b892f660ea2458aa9aa9c9a21352632] Exception during message handling
   12 2012-09-21 10:15:47 TRACE nova.rpc.amqp Traceback (most recent call last):
   11 2012-09-21 10:15:47 TRACE nova.rpc.amqp

[Yahoo-eng-team] [Bug 1273647] [NEW] Remove leftovers of "# noqa" from imports added to import_exceptions

2014-01-28 Thread Tatiana Mazur
Public bug reported:

There are some "# noqa" leftovers after the patch set:
https://review.openstack.org/#/c/64854/
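One way to confirm the cleanup is complete is to scan for import lines that still carry the marker. A quick checker (an assumed helper script, not part of the patch set):

```python
import re
from pathlib import Path

NOQA_IMPORT = re.compile(r"^\s*(from|import)\b.*#\s*noqa")

def find_noqa_imports(root):
    """Yield (path, line_no, line) for import lines still marked # noqa."""
    for path in Path(root).rglob("*.py"):
        for no, line in enumerate(path.read_text().splitlines(), 1):
            if NOQA_IMPORT.search(line):
                yield path, no, line.strip()

# Demonstrated on an in-memory sample instead of a real source tree:
sample = "from horizon import tables  # noqa\nimport os\n"
hits = [l for l in sample.splitlines() if NOQA_IMPORT.search(l)]
print(hits)  # ['from horizon import tables  # noqa']
```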

** Affects: horizon
 Importance: Wishlist
 Assignee: Tatiana Mazur (tmazur)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Tatiana Mazur (tmazur)

** Changed in: horizon
   Importance: Undecided => Wishlist

** Changed in: horizon
Milestone: None => icehouse-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1273647

Title:
  Remove leftovers of "# noqa" from imports added to import_exceptions

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1273647/+subscriptions



[Yahoo-eng-team] [Bug 1268614] Re: pep8 gating fails due to tools/config/check_uptodate.sh

2014-01-28 Thread Doug Hellmann
** No longer affects: oslo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268614

Title:
  pep8 gating fails due to tools/config/check_uptodate.sh

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I see several changes, including
  https://review.openstack.org/#/c/63735/, failing the pep8 gate with an
  error from the check_uptodate tool:

  
  2014-01-13 14:06:39.643 | pep8 runtests: commands[1] | /home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh
  2014-01-13 14:06:39.649 |   /home/jenkins/workspace/gate-nova-pep8$ /home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh
  2014-01-13 14:06:43.581 | 2741,2746d2740
  2014-01-13 14:06:43.581 | < # (optional) indicate whether to set the X-Service-Catalog
  2014-01-13 14:06:43.581 | < # header. If False, middleware will not ask for service
  2014-01-13 14:06:43.581 | < # catalog on token validation and will not set the X-Service-
  2014-01-13 14:06:43.581 | < # Catalog header. (boolean value)
  2014-01-13 14:06:43.581 | < #include_service_catalog=true
  2014-01-13 14:06:43.582 | < 
  2014-01-13 14:06:43.582 | E: nova.conf.sample is not up to date, please run tools/config/generate_sample.sh
  2014-01-13 14:06:43.582 | ERROR: InvocationError: '/home/jenkins/workspace/gate-nova-pep8/tools/config/check_uptodate.sh'
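check_uptodate.sh fails when the committed nova.conf.sample no longer matches what the option generator would emit; the check is essentially a diff. Conceptually (a sketch of the idea, not the actual shell tool):

```python
import difflib

def sample_diff(committed, generated):
    """Return unified-diff lines; an empty list means the sample is current."""
    return list(difflib.unified_diff(
        committed.splitlines(), generated.splitlines(), lineterm=""))

committed = "#include_service_catalog=true\n#other_opt=1\n"
generated = "#other_opt=1\n"   # upstream removed an option
diff = sample_diff(committed, generated)
print("up to date" if not diff else "stale sample:\n" + "\n".join(diff))
```

When the check reports a stale sample, rerun tools/config/generate_sample.sh and commit the regenerated file.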

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1268614/+subscriptions



[Yahoo-eng-team] [Bug 1273678] [NEW] NameError: name '_' is not defined in Keystone/exception.py

2014-01-28 Thread harshit Agarwal
Public bug reported:

swift-init proxy restart
Signal proxy-server  pid: 3025  signal: 15
No proxy-server running
Starting proxy-server...(/etc/swift/proxy-server.conf)
Traceback (most recent call last):
  File "/usr/local/bin/swift-proxy-server", line 23, in <module>
    sys.exit(run_wsgi(conf_file, 'proxy-server', default_port=8080, **options))
  File "/usr/local/lib/python2.7/dist-packages/swift/common/wsgi.py", line 386, in run_wsgi
    loadapp(conf_path, global_conf=global_conf)
  File "/usr/local/lib/python2.7/dist-packages/swift/common/wsgi.py", line 313, in loadapp
    ctx = loadcontext(loadwsgi.APP, conf_file, global_conf=global_conf)
  File "/usr/local/lib/python2.7/dist-packages/swift/common/wsgi.py", line 305, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
    return loader.get_context(object_type, name, global_conf)
  File "/usr/local/lib/python2.7/dist-packages/swift/common/wsgi.py", line 59, in get_context
    object_type, name=name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 450, in get_context
    global_additions=global_additions)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 562, in _pipeline_app_context
    for name in pipeline[:-1]]
  File "/usr/local/lib/python2.7/dist-packages/swift/common/wsgi.py", line 59, in get_context
    object_type, name=name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 458, in get_context
    section)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 517, in _context_from_explicit
    value = import_string(found_expr)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 22, in import_string
    return pkg_resources.EntryPoint.parse("x=" + s).load(False)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1989, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "/usr/lib/python2.7/dist-packages/keystone/middleware/__init__.py", line 18, in <module>
    from keystone.middleware.core import *
  File "/usr/lib/python2.7/dist-packages/keystone/middleware/core.py", line 21, in <module>
    from keystone.common import utils
  File "/usr/lib/python2.7/dist-packages/keystone/common/utils.py", line 32, in <module>
    from keystone import exception
  File "/usr/lib/python2.7/dist-packages/keystone/exception.py", line 63, in <module>
    class ValidationError(Error):
  File "/usr/lib/python2.7/dist-packages/keystone/exception.py", line 64, in ValidationError
    message_format = _("Expecting to find %(attribute)s in %(target)s."
NameError: name '_' is not defined
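The failure pattern is that keystone/exception.py calls the gettext alias `_` at import time, but nothing has installed `_` before Swift's proxy pipeline imports the Keystone middleware. A minimal reproduction, plus the obvious fix of the module supplying its own translation callable (a sketch; Keystone's real fix wires up its own i18n setup rather than this exact code):

```python
import gettext

# A module body that uses _() and assumes someone installed it globally:
source = 'MESSAGE = _("Expecting to find %(attribute)s in %(target)s.")'

try:
    exec(source, {})   # no _ in scope -> same NameError as in the report
except NameError as exc:
    print(exc)         # name '_' is not defined

# Fix: the module provides its own translation callable instead of
# relying on a global install side effect.
ns = {"_": gettext.NullTranslations().gettext}
exec(source, ns)
print(ns["MESSAGE"])   # Expecting to find %(attribute)s in %(target)s.
```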

** Affects: keystone
 Importance: Undecided
 Assignee: harshit Agarwal (harshit-py)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => harshit Agarwal (harshit-py)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273678

Title:
  NameError: name '_' is not defined in Keystone/exception.py

Status in OpenStack Identity (Keystone):
  New


[Yahoo-eng-team] [Bug 1268622] Re: enable cold migration with target host

2014-01-28 Thread Jay Lau
A blueprint, https://blueprints.launchpad.net/nova/+spec/code-migration-with-target, was filed.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268622

Title:
  enable cold migration with target host

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Cold migration currently does not support migrating a VM instance to a
  specified target host; we should enable this feature in nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268622/+subscriptions



[Yahoo-eng-team] [Bug 1273268] Re: live-migration - instance could not be found

2014-01-28 Thread darkyat
Was meant to be fixed but still appears.

https://bugs.launchpad.net/nova/+bug/1044237

** This bug is no longer a duplicate of bug 1044237
   Block Migration doesn't work: Nova searches for the Instance on the 
destination Compute host

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273268

Title:
  live-migration - instance could not be found

Status in OpenStack Compute (Nova):
  New

Bug description:
  Starting a live-migration using "nova live-migration
  a7a78e36-e088-416c-9479-e95aa1a0f7ef" fails because Nova tries to
  detach the volume from the instance on the destination host instead of
  the source host.

  * Start live migration
  * Check logs on both Source and Destination Host

  === Source Host ===
  2014-01-27 15:03:57.554 2681 ERROR nova.virt.libvirt.driver [-] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Live Migration failure: End of file while 
reading data: Input/output error

  === Destination Host ===
  2014-01-27 15:02:13.129 3742 AUDIT nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Detach volume 
2ab8cb25-8f79-4b8e-bc93-c52351df84ee from mountpoint vda
  2014-01-27 15:02:13.134 3742 WARNING nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Detaching volume from unknown instance
  2014-01-27 15:02:13.138 3742 ERROR nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Failed to detach volume 
2ab8cb25-8f79-4b8e-bc93-c52351df84ee from vda
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Traceback (most recent call last):
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3725, in 
_detach_volume
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] encryption=encryption)
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1202, in 
detach_volume
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] virt_dom = 
self._lookup_by_name(instance_name)
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3085, in 
_lookup_by_name
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] raise 
exception.InstanceNotFound(instance_id=instance_name)
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] InstanceNotFound: Instance 
instance-0084 could not be found.
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]
  2014-01-27 15:02:13.139 3742 DEBUG nova.volume.cinder 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] Cinderclient connection created using URL: 
http://10.3.0.2:8776/v1/cd0e923440eb4bbc8f3388e38544b977 cinderclient 
/usr/lib/python2.7/dist-packages/nova/volume/cinder.py:96
  2014-01-27 15:02:13.142 3742 INFO urllib3.connectionpool [-] Starting new 
HTTP connection (1): 10.3.0.2
  2014-01-27 15:02:13.230 3742 DEBUG urllib3.connectionpool [-] "POST 
/v1/cd0e923440eb4bbc8f3388e38544b977/volumes/2ab8cb25-8f79-4b8e-bc93-c52351df84ee/action
 HTTP/1.1" 202 0 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:296

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273268/+subscriptions



[Yahoo-eng-team] [Bug 1273713] [NEW] keystoneclient is using wrong management url if identity v2 and v3 endpoint co-exists

2014-01-28 Thread Qiu Yu
Public bug reported:

Steps to reproduce
------------------
1. Configure a service and endpoint for both identity v2 and v3:

$ keystone service-list | grep identity
| 7fe9f96d595b420684fb53b3b17b281e | keystone | identityv3 | Keystone Identity Service V3 |
| b00c390065724cdfb66b4e954d295489 | keystone | identity | Keystone Identity Service |

$ keystone endpoint-list | grep 35357
| 5c1a0fdcfb5e435fafa73954c5b43dd0 | RegionOne | http://192.168.56.102:5000/v3 | http://192.168.56.102:5000/v3 | http://192.168.56.102:35357/v3 | 7fe9f96d595b420684fb53b3b17b281e |
| c985f1b3ee1440778194f036f00f575c | RegionOne | http://192.168.56.102:5000/v2.0 | http://192.168.56.102:5000/v2.0 | http://192.168.56.102:35357/v2.0 | b00c390065724cdfb66b4e954d295489 |

2. Issue a v3 API call, such as domain list, using python-openstackclient:

export OS_AUTH_URL="http://192.168.56.102:5000/v3"
openstack -v --os-identity-api-version 3 domain list

3. It returns "The resource could not be found. (HTTP 404)" because keystoneclient is still using the v2.0 admin URL, as the following debug message shows:

DEBUG: keystoneclient.session REQ: curl -i -X GET
http://192.168.56.102:35357/v2.0/domains -H "User-Agent: python-
keystoneclient" -H "X-Auth-Token: a171809d51974693bba8c880280cc7da"

Expected result
---------------
The correct domain list is returned by the "openstack -v
--os-identity-api-version 3 domain list" command line.
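The underlying problem is endpoint selection: with both `identity` and `identityv3` services in the catalog, the client keeps the v2.0 admin URL even though the caller asked for API version 3. A version-aware selection sketch (`pick_identity_endpoint` and the catalog shape are hypothetical, not keystoneclient's real API):

```python
def pick_identity_endpoint(catalog, api_version):
    """Prefer an identity endpoint whose URL path matches the requested
    API version; fall back to the first identity endpoint found."""
    want = "/v3" if api_version == 3 else "/v2.0"
    candidates = [e["adminURL"] for e in catalog
                  if e["type"] in ("identity", "identityv3")]
    for url in candidates:
        if url.rstrip("/").endswith(want):
            return url
    return candidates[0] if candidates else None

catalog = [
    {"type": "identity",   "adminURL": "http://192.168.56.102:35357/v2.0"},
    {"type": "identityv3", "adminURL": "http://192.168.56.102:35357/v3"},
]
print(pick_identity_endpoint(catalog, 3))  # http://192.168.56.102:35357/v3
```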

** Affects: keystone
 Importance: Undecided
 Assignee: Qiu Yu (unicell)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Qiu Yu (unicell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273713

Title:
  keystoneclient is using wrong management url if identity v2 and v3
  endpoint co-exists

Status in OpenStack Identity (Keystone):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1273713/+subscriptions



[Yahoo-eng-team] [Bug 1238536] Re: POST with empty body results in 411 Error

2014-01-28 Thread Andrea Frittoli
httplib2 does not set Content-Length to 0 when the POST body is empty
(https://code.google.com/p/httplib2/issues/detail?id=143).

Certain HTTP servers reject requests whose content length is not set,
so the Tempest rest client should set the header itself.
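The client-side workaround is straightforward: before handing the request to httplib2, set Content-Length explicitly whenever the body is empty. A sketch of such a guard (illustrative helper; Tempest's rest client differs in detail):

```python
def ensure_content_length(headers, body):
    """httplib2 omits Content-Length for an empty body, and some servers
    answer "411 Length Required"; add the header ourselves."""
    headers = dict(headers or {})
    if not body and "content-length" not in {k.lower() for k in headers}:
        headers["Content-Length"] = "0"
    return headers

print(ensure_content_length({"Accept": "application/xml"}, None))
# {'Accept': 'application/xml', 'Content-Length': '0'}
```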

** Also affects: tempest
   Importance: Undecided
   Status: New

** Bug watch added: code.google.com/p/httplib2/issues #143
   http://code.google.com/p/httplib2/issues/detail?id=143

** Changed in: tempest
 Assignee: (unassigned) => Andrea Frittoli (andrea-frittoli)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238536

Title:
  POST with empty body results in 411 Error

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  In Progress

Bug description:
  Some API commands don't need a body - for example allocating a
  floating IP.   However making a request without a body results in a
  411 error:

  curl -i 
https://compute.systestb.hpcloud.net/v2/21240759398822/os-floating-ips -H 
"Content-Type: application/xml" -H "Accept: application/xml" -H "X-Auth-Token: 
xxx" -X POST
  HTTP/1.1 411 Length Required
  nnCoection: close
  Content-Length: 284

  Fault Name: HttpRequestReceiveError
  Error Type: Default
  Description: Http request received failed
  Root Cause Code: -19013
  Root Cause : HTTP Transport: Couldn't determine the content length
  Binding State: CLIENT_CONNECTION_ESTABLISHED
  Service: null
  Endpoint: null

  
  Passing an Empty body works:
  curl -i 
https://compute.systestb.hpcloud.net/v2/21240759398822/os-floating-ips -H 
"Content-Type: application/xml" -H "Accept: application/xml" -H "X-Auth-Token: 
xxx" -X POST -d ''
  HTTP/1.1 200 OK
  Content-Length: 164
  Content-Type: application/xml; charset=UTF-8
  Date: Fri, 31 May 2013 11:13:26 GMT
  X-Compute-Request-Id: req-cc2ce740-6114-4820-8717-113ea1796142

  
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238536/+subscriptions



[Yahoo-eng-team] [Bug 1273730] [NEW] MechanismDriverError hides original exception

2014-01-28 Thread Paul Ward
Public bug reported:

In implementing a mechanism driver for ML2, I see that any exceptions
raised by the mechanism driver are swallowed, and all that's bubbled up
to the user is the generic MechanismDriverError.  This happens in
MechanismManager._call_on_drivers() in
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/managers.py.

It would be nice if the MechanismDriverError at least contained the
exception details from the last exception encountered in the driver
chain.
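A common remedy is to capture the driver exception and attach it to the generic error instead of discarding it. A simplified sketch of the idea (Python 3 exception chaining shown; the real _call_on_drivers loop and its error policy differ):

```python
class MechanismDriverError(Exception):
    """Generic error surfaced to the API caller."""

def call_on_drivers(method_name, drivers):
    """Call method_name on every driver; if any failed, raise the generic
    error but keep the last driver exception attached rather than
    swallowing it."""
    last_exc = None
    for driver in drivers:
        try:
            getattr(driver, method_name)()
        except Exception as exc:   # broad on purpose: drivers are plugins
            last_exc = exc
    if last_exc is not None:
        raise MechanismDriverError(
            "mechanism driver failed: %s" % last_exc) from last_exc

class BadDriver:
    def create_port_precommit(self):
        raise ValueError("VLAN 5000 out of range")

try:
    call_on_drivers("create_port_precommit", [BadDriver()])
except MechanismDriverError as exc:
    print(exc)            # mechanism driver failed: VLAN 5000 out of range
    print(exc.__cause__)  # VLAN 5000 out of range
```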

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Ward (wpward)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Paul Ward (wpward)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273730

Title:
  MechanismDriverError hides original exception

Status in OpenStack Neutron (virtual network service):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1273730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273731] [NEW] attaching a cinder volume to running instance silently fails

2014-01-28 Thread Jaroslav Henner
Public bug reported:

when using
compute_driver=vmwareapi.VMwareVCDriver
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver

after creating a VM and cinder volume and attaching it:
nova boot --image cirros-0.3.1-x86_64-disk.vmdk --flavor m1.tiny foo
cinder create --display-name baz 1
nova volume-attach foo aeb729e5-bfb4-4ac2-9d73-eec70e03903a auto

the cinder volume remains available, but in the nova api.log:

2014-01-28 15:28:45.815 4986 WARNING nova.virt.vmwareapi.driver [-] Task 
[ReconfigVM_Task] (returnval){
   value = "task-7702"
   _type = "Task"
 } status: error The attempted operation cannot be performed in the current 
state (Powered on).

I think it would be good to report such an error to the user. It may also
be worth checking the VM state before even trying to attach the volume.
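
A hedged sketch of the suggested pre-check (the names `VolumeAttachFailed`, `ATTACHABLE_STATES` and the task dict shape are illustrative, not the actual VMware driver API): fail fast with a user-visible error, and propagate a failed task instead of only logging a warning.

```python
class VolumeAttachFailed(Exception):
    """Surfaced to the user instead of leaving only a log warning."""


ATTACHABLE_STATES = ('poweredOff',)  # assumption: no hot-add in this setup


def attach_volume(vm_power_state, do_attach):
    # Hypothetical guard: check the VM state before issuing the
    # ReconfigVM_Task, and propagate a task error instead of swallowing it.
    if vm_power_state not in ATTACHABLE_STATES:
        raise VolumeAttachFailed(
            "cannot attach volume while VM is %s" % vm_power_state)
    task = do_attach()
    if task.get('status') == 'error':
        raise VolumeAttachFailed(task.get('message', 'attach failed'))
    return task
```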

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273731

Title:
  attaching a cinder volume to running instance silently fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  when using
  compute_driver=vmwareapi.VMwareVCDriver
  volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver

  after creating a VM and cinder volume and attaching it:
  nova boot --image cirros-0.3.1-x86_64-disk.vmdk --flavor m1.tiny foo
  cinder create --display-name baz 1
  nova volume-attach foo aeb729e5-bfb4-4ac2-9d73-eec70e03903a auto

  the cinder volume remains available, but in the nova api.log:

  2014-01-28 15:28:45.815 4986 WARNING nova.virt.vmwareapi.driver [-] Task 
[ReconfigVM_Task] (returnval){
 value = "task-7702"
 _type = "Task"
   } status: error The attempted operation cannot be performed in the current 
state (Powered on).

  I think it would be good to report such an error to the user. It may
  also be worth checking the VM state before even trying to attach the
  volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273713] Re: keystoneclient is using wrong management url if identity v2 and v3 endpoint co-exists

2014-01-28 Thread Dolph Mathews
"identityv3" is not a recognized or supported service type. The correct
approach to this is to use unversioned endpoints in the catalog, which
python-keystoneclient doesn't fully support yet.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273713

Title:
  keystoneclient is using wrong management url if identity v2 and v3
  endpoint co-exists

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Steps to reproduce
  --
  1. configure service/endpoint for both identity v2 and v3

  $ keystone service-list | grep identity
  | 7fe9f96d595b420684fb53b3b17b281e |  keystone  |   identityv3   | Keystone 
Identity Service V3 |
  | b00c390065724cdfb66b4e954d295489 |  keystone  |identity|  Keystone 
Identity Service   |

  $ keystone endpoint-list | grep 35357
  | 5c1a0fdcfb5e435fafa73954c5b43dd0 | RegionOne | 
http://192.168.56.102:5000/v3 | http://192.168.56.102:5000/v3   
  | http://192.168.56.102:35357/v3| 
7fe9f96d595b420684fb53b3b17b281e |
  | c985f1b3ee1440778194f036f00f575c | RegionOne |
http://192.168.56.102:5000/v2.0|http://192.168.56.102:5000/v2.0 
   |http://192.168.56.102:35357/v2.0   | 
b00c390065724cdfb66b4e954d295489 |

  2. issue an v3 api call, such as domain list, using python-
  openstackclient

  export OS_AUTH_URL="http://192.168.56.102:5000/v3";
  openstack -v --os-identity-api-version 3 domain list

  3. it returns "The resource could not be found. (HTTP 404)" as
  keystoneclient is still using the v2.0 admin url, as indicated by the
  following debug message.

  DEBUG: keystoneclient.session REQ: curl -i -X GET
  http://192.168.56.102:35357/v2.0/domains -H "User-Agent: python-
  keystoneclient" -H "X-Auth-Token: a171809d51974693bba8c880280cc7da"

  Expected result
  --
  Correct domain list result returned from "openstack -v 
--os-identity-api-version 3 domain list" command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1273713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267438] Re: create volume option is shown, even without cinder enabled

2014-01-28 Thread Julie Pichon
** Also affects: horizon/havana
   Importance: Undecided
   Status: New

** Changed in: horizon/havana
 Assignee: (unassigned) => Matthias Runge (mrunge)

** Changed in: horizon/havana
   Importance: Undecided => High

** Changed in: horizon/havana
   Status: New => In Progress

** Tags removed: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1267438

Title:
  create volume option is shown, even without cinder enabled

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) havana series:
  In Progress

Bug description:
  on http://localhost:8000/project/images_and_snapshots/

  there is the "create volume" option enabled, even if cinder is
  disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1267438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273774] [NEW] External interface shows as fake

2014-01-28 Thread Matthew D. Wood
Public bug reported:

On the network topology router-mouse-over-balloon, the list of
interfaces contains one labeled "fake" which is a placeholder
for the external interface.

This interface should be labeled with a much less confusing tag.
"extern" or "gateway" seems much better.

** Affects: horizon
 Importance: Undecided
 Assignee: Matthew D. Wood (woodm1979)
 Status: New


** Tags: topology-view

** Changed in: horizon
 Assignee: (unassigned) => Matthew D. Wood (woodm1979)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1273774

Title:
  External interface shows as fake

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the network topology router-mouse-over-balloon, the list of
  interfaces contains one labeled "fake" which is a placeholder
  for the external interface.

  This interface should be labeled with a much less confusing tag.
  "extern" or "gateway" seems much better.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1273774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273803] [NEW] The pci manager try to modify the pci device list

2014-01-28 Thread jiang, yunhong
Public bug reported:

Currently the ObjectList is mostly immutable: although the items in
the list are changeable, the list itself should not have entries added or
removed.

However, the PCI manager uses an ObjectList to track all the devices in the
host and may add/remove entries, which is not correct. We should track the
devices with a simple list rather than an ObjectList.

** Affects: nova
 Importance: Undecided
 Assignee: jiang, yunhong (yunhong-jiang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jiang, yunhong (yunhong-jiang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273803

Title:
  The pci manager try to modify the pci device list

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently the ObjectList is mostly immutable: although the items
  in the list are changeable, the list itself should not have entries
  added or removed.

  However, the PCI manager uses an ObjectList to track all the devices
  in the host and may add/remove entries, which is not correct. We should
  track the devices with a simple list rather than an ObjectList.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273455] Re: stevedore 0.14 changes _load_plugins parameter list, mocking breaks

2014-01-28 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
   Importance: Undecided => Critical

** Changed in: nova/havana
   Importance: Undecided => Critical

** Changed in: nova/grizzly
   Status: New => In Progress

** Changed in: nova/havana
   Status: New => Fix Committed

** Changed in: nova/havana
 Assignee: (unassigned) => Sean Dague (sdague)

** Changed in: nova/grizzly
 Assignee: (unassigned) => Sean Dague (sdague)

** Changed in: nova/havana
Milestone: None => 2013.2.2

** Changed in: nova
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273455

Title:
  stevedore 0.14 changes _load_plugins parameter list, mocking breaks

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  In Progress
Status in Manage plugins for Python applications:
  Fix Released

Bug description:
  In stevedore 0.14 the signature on _load_plugins changed. It now takes
  an extra parameter. The nova and ceilometer unit tests mocked to the
  old signature, which is causing breaks in the gate.
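
One way to make such tests tolerant of upstream signature changes (a sketch of the general technique, not the actual fix that landed) is to stub the method with a `*args, **kwargs` fake, so an added parameter does not break the mock:

```python
from unittest import mock


class Manager(object):
    # Stand-in for stevedore's extension manager; the real _load_plugins
    # gained an extra parameter in 0.14.
    def _load_plugins(self, invoke_on_load, invoke_args, invoke_kwds,
                      on_load_failure_callback=None):
        raise RuntimeError("should be stubbed out in tests")


def fake_load_plugins(self, *args, **kwargs):
    # Accepts any call signature, so an upstream change cannot break it.
    return []


with mock.patch.object(Manager, '_load_plugins', fake_load_plugins):
    assert Manager()._load_plugins(False, (), {}) == []
```

Alternatively, `mock.patch.object(..., autospec=True)` keeps the mock in sync with the real signature and fails loudly when callers pass the wrong arguments.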

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1273455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1225191] Re: add qpid-python to requirements.txt

2014-01-28 Thread Ben Nemec
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1225191

Title:
  add qpid-python to requirements.txt

Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Compute (Nova):
  New

Bug description:
  If one tries using qpid instead of rabbitmq with heat, when heat-
  engine starts up it will complain that the qpid.messaging library is
  missing. To fix this, qpid-python should be included in
  requirements.txt.

  2013-09-13 20:47:51.624 7031 INFO heat.engine.environment [-] Registering 
AWS::CloudFormation::WaitCondition -> 
  2013-09-13 20:47:51.634 7031 ERROR heat.openstack.common.threadgroup [-] 
Failed to import qpid.messaging
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup 
Traceback (most recent call last):
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup   File 
"/opt/stack/venvs/heat/local/lib/python2.7/site-pa
  ckages/heat/openstack/common/threadgroup.py", line 117, in wait
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup 
x.wait()
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup   File 
"/opt/stack/venvs/heat/local/lib/python2.7/site-pa
  ckages/heat/openstack/common/threadgroup.py", line 49, in wait
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup 
return self.thread.wait()
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup   File 
"/opt/stack/venvs/heat/local/lib/python2.7/site-pa
  ckages/eventlet/greenthread.py", line 168, in wait
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup 
return self._exit_event.wait()
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup   File 
"/opt/stack/venvs/heat/local/lib/python2.7/site-pa
  ckages/eventlet/event.py", line 116, in wait
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2013-09-13 20:47:51.634 7031 TRACE heat.openstack.common.threadgroup   File 
"/opt/stack/venvs/heat/local/lib/python2.7/site-pa
  ckages/eventlet/hubs/hub.py", line 187, in switch

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1225191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273831] [NEW] Keystone v2.0 documentation shows unsupported "versionId", "versionList" fields

2014-01-28 Thread Shri Javadekar
Public bug reported:

The documentation of OpenStack Keystone v2.0 [1] shows that when a user
is authenticated, the return values will have a "versionId",
"versionInfo", "versionStatus", etc.

However, based on the discussion I had on the #openstack-dev irc
channel, it turns out that Keystone does not support these.

There are other auth implementations which try to be compatible with
Keystone. They implement their auth schemes based on this documentation,
so incorrect documentation causes them to break.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273831

Title:
  Keystone v2.0 documentation shows unsupported "versionId",
  "versionList" fields

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The documentation of OpenStack Keystone v2.0 [1] shows that when a
  user is authenticated, the return values will have a "versionId",
  "versionInfo", "versionStatus", etc.

  However, based on the discussion I had on the #openstack-dev irc
  channel, it turns out that Keystone does not support these.

  There are other auth implementations which try to be compatible with
  Keystone. They implement their auth schemes based on this
  documentation, so incorrect documentation causes them to break.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1273831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273837] [NEW] Describe addresses in ec2 api broken with neutron

2014-01-28 Thread Vish Ishaya
Public bug reported:

Describe addresses using the ec2 api is broken when using neutron. It
attempts to retrieve the fixed ip directly by id:

https://github.com/openstack/nova/blob/bc10b3c2b222b5f5c6ee6ffb79c12a8d3e2931bf/nova/api/ec2/cloud.py#L1209

which is not supported by neutron:

https://github.com/openstack/nova/blob/bc10b3c2b222b5f5c6ee6ffb79c12a8d3e2931bf/nova/network/neutronv2/api.py#L693

It should be pulling the instance uuid from the floating list directly like we 
do in the v2 api:
 
https://github.com/openstack/nova/blob/bc10b3c2b222b5f5c6ee6ffb79c12a8d3e2931bf/nova/api/openstack/compute/contrib/floating_ips.py#L74
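
A sketch of the suggested approach, using a hypothetical floating-ip dict shape (the keys are illustrative): read the instance id off the floating ip entry itself, instead of dereferencing the fixed ip by id, which neutron cannot do.

```python
def describe_address(floating_ip):
    # floating_ip as returned by the network API; 'instance' may be None
    # when the address is not associated with a server.
    instance = floating_ip.get('instance') or {}
    return {
        'public_ip': floating_ip['address'],
        'instance_id': instance.get('uuid'),
    }
```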

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: ec2 havana-backport-potential

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Triaged

** Tags added: ec2 havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273837

Title:
  Describe addresses in ec2 api broken with neutron

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Describe addresses using the ec2 api is broken when using neutron. It
  attempts to retrieve the fixed ip directly by id:

  
https://github.com/openstack/nova/blob/bc10b3c2b222b5f5c6ee6ffb79c12a8d3e2931bf/nova/api/ec2/cloud.py#L1209

  which is not supported by neutron:

  
https://github.com/openstack/nova/blob/bc10b3c2b222b5f5c6ee6ffb79c12a8d3e2931bf/nova/network/neutronv2/api.py#L693

  It should be pulling the instance uuid from the floating list directly like 
we do in the v2 api:
   
  
https://github.com/openstack/nova/blob/bc10b3c2b222b5f5c6ee6ffb79c12a8d3e2931bf/nova/api/openstack/compute/contrib/floating_ips.py#L74

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273852] [NEW] PCI device object should be purely DB layer

2014-01-28 Thread jiang, yunhong
Public bug reported:

Currently the PCI device object includes a lot of functions like
alloc/free/claim, etc. However, a NovaObject should not be used this
way, and it makes the PCI device object really different from other
NovaObject implementations.

We should keep the PCI device object as simple data access, and move
those methods into separate functions.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273852

Title:
   PCI device object should be purely DB layer

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently the PCI device object includes a lot of functions like
  alloc/free/claim, etc. However, a NovaObject should not be used this
  way, and it makes the PCI device object really different from other
  NovaObject implementations.

  We should keep the PCI device object as simple data access, and move
  those methods into separate functions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273862] [NEW] Keystone manage man page errors

2014-01-28 Thread Adam Young
Public bug reported:

patch for keystone-manage man page source

Description of problem:

Grammar errors in keystone-manage.rst

Line 27 reads:

with through the keystone REST api, such data import/export and schema

Should read:

with the keystone REST API, such as data import/export and schema

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  patch for keystone-manage man page source
  
  Description of problem:
  
  Grammar errors in keystone-manage.rst
  
- Version-Release number of selected component (if applicable):
+ Line 27 reads:
  
- keystone-2013.2 prep'd from rhos-4.0-rhel-6 dist-git
+ with through the keystone REST api, such data import/export and schema
  
- See attached patch  for minor grammar fixes
- and capitalization of "API".
+ Should read:
+ 
+ with the keystone REST API, such as data import/export and schema

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273862

Title:
  Keystone manage man page errors

Status in OpenStack Identity (Keystone):
  New

Bug description:
  patch for keystone-manage man page source

  Description of problem:

  Grammar errors in keystone-manage.rst

  Line 27 reads:

  with through the keystone REST api, such data import/export and schema

  Should read:

  with the keystone REST API, such as data import/export and schema

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1273862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273867] [NEW] Keystone API v3 lists disabled endpoints and services in catalog

2014-01-28 Thread Adam Young
Public bug reported:

When an endpoint or service has its "enabled" attribute set to "False", it
is still listed in the catalog (in the `keystone catalog` command output
and/or in the catalog part of the token).

Create testing service (simplifies output later):
> localhost:5000
> POST /v3/services
> '{"service":{"name":"My svc","type":"testing"}}'
response:
> {'service': {'id': '',
>  'links': {'self': 
> 'http://localhost:5000/v3/services/'},
>  'name': 'My svc',
>  'type': 'testing'}}

Create disabled endpoint:
> localhost:5000
> POST /v3/endpoints
> '{"endpoint":{
>"enabled":false,
>"name":"My disabled",
>"interface":"public",
>"url":"disabled_URL",
>"service_id":""}}'
response:
> {'endpoint': {'enabled': False,
>   'id': '',
>   'interface': 'public',
>   'links': {'self': 
> 'http://localhost:5000/v3/endpoints/'},
>   'name': 'My disabled',
>   'region': None,
>   'service_id': '',
>   'url': 'disabled_URL'}}

Now request a token and see that its catalog/endpoints part contains:
> localhost:5000
> POST /v3/auth/tokens
> '{"auth":{
>  "identity":
>{"methods":["password"],
> "password":{
>   "user":{"name":"admin","domain":{"id":"default"},"password":"pass"}}},
>  "scope":{"project":{"name":"admin","domain":{"id":"default"}
snippet of response:
> {'token': {'catalog': [
> ...
>   {'endpoints': [{'enabled': False,
>  'id': '',
>  'interface': 'public',
>  'legacy_endpoint_id': None,
>  'name': 'My disabled',
>  'region': None,
>  'url': 'disabled_URL'}],
>'id': '',
>'type': 'testing'},
> ...

Also it gets listed in response of `keystone catalog` (API v2):
> # keystone catalog --service testing
> Service: testing
> +---+--+
> |  Property |  Value   |
> +---+--+
> | id| |
> | publicURL |disabled_URL  |
> |   region  |  |
> +---+--+

The same example applies to Service with enabled=false.

See https://github.com/openstack/identity-api/blob/master/openstack-
identity-api/src/markdown/identity-api-v3.md#endpoints-v3endpoints for
description of enabled attribute for Endpoint.

And https://github.com/openstack/identity-api/blob/master/openstack-
identity-api/src/markdown/identity-api-v3.md#services-v3services for
description of Service.
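
The expected filtering can be sketched like this (assuming the catalog dict shapes shown above; this is illustrative, not Keystone's actual catalog code): drop endpoints whose `enabled` is explicitly `False`, then drop services left with no endpoints (or themselves disabled).

```python
def filter_catalog(catalog):
    """Return the catalog with disabled services and endpoints removed."""
    filtered = []
    for service in catalog:
        if service.get('enabled') is False:
            continue  # disabled service: skip entirely
        endpoints = [ep for ep in service.get('endpoints', [])
                     if ep.get('enabled', True)]  # absent flag means enabled
        if endpoints:
            filtered.append(dict(service, endpoints=endpoints))
    return filtered
```

Applied to the example above, the "My disabled" endpoint (and with it the "testing" service) would no longer appear in the token's catalog.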

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273867

Title:
  Keystone API v3 lists disabled endpoints and services in catalog

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When an endpoint or service has its "enabled" attribute set to
  "False", it is still listed in the catalog (in the `keystone catalog`
  command output and/or in the catalog part of the token).

  Create testing service (simplifies output later):
  > localhost:5000
  > POST /v3/services
  > '{"service":{"name":"My svc","type":"testing"}}'
  response:
  > {'service': {'id': '',
  >  'links': {'self': 
'http://localhost:5000/v3/services/'},
  >  'name': 'My svc',
  >  'type': 'testing'}}

  Create disabled endpoint:
  > localhost:5000
  > POST /v3/endpoints
  > '{"endpoint":{
  >"enabled":false,
  >"name":"My disabled",
  >"interface":"public",
  >"url":"disabled_URL",
  >"service_id":""}}'
  response:
  > {'endpoint': {'enabled': False,
  >   'id': '',
  >   'interface': 'public',
  >   'links': {'self': 
'http://localhost:5000/v3/endpoints/'},
  >   'name': 'My disabled',
  >   'region': None,
  >   'service_id': '',
  >   'url': 'disabled_URL'}}

  Now request a token and see that its catalog/endpoints part contains:
  > localhost:5000
  > POST /v3/auth/tokens
  > '{"auth":{
  >  "identity":
  >{"methods":["password"],
  > "password":{
  >   "user":{"name":"admin","domain":{"id":"default"},"password":"pass"}}},
  >  "scope":{"project":{"name":"admin","domain":{"id":"default"}
  snippet of response:
  > {'token': {'catalog': [
  > ...
  >   {'endpoints': [{'enabled': False,
  >  'id': '',
  >  'interface': 'public',
  >  'legacy_endpoint_id': None,
  >  'name': 'My disabled',
  >  'region': None,
  >  'url': 'disabled_URL'}],
  >'id': '',
  >'type': 'testing'},
  > ...

  Also it gets listed in response of `keystone catalog` (API v2):
  > # keystone catalog --service testing
  > Service: testing
  > +---+-

[Yahoo-eng-team] [Bug 1273874] [NEW] modal missing close and cancel button

2014-01-28 Thread Cindy Lu
Public bug reported:

Not sure if it's just me or what but some of the modal popup forms are
missing the 'X' and 'Cancel' buttons.

This regression can be seen on Admin > Flavor > Edit Flavor and
Project > Instances > Launch Instance.  Modals with workflow tabs.
Please see attached image.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "012814 - missing cancel button.png"
   
https://bugs.launchpad.net/bugs/1273874/+attachment/3961140/+files/012814%20-%20missing%20cancel%20button.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1273874

Title:
  modal missing close and cancel button

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Not sure if it's just me or what but some of the modal popup forms are
  missing the 'X' and 'Cancel' buttons.

  This regression can be seen on Admin > Flavor > Edit Flavor and
  Project > Instances > Launch Instance.  Modals with workflow tabs.
  Please see attached image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1273874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273882] [NEW] os-collect-config unable to restart neutron-l3-agent

2014-01-28 Thread Gregory Haynes
Public bug reported:

The command 'os-collect-config --force --one' fails with:

+ service neutron-l3-agent restart
neutron-l3-agent stop/waiting
start: Job failed to start
[2014-01-28 23:17:33,155] (os-refresh-config) [ERROR] during post-configure 
phase. [Command '['dib-run-parts', 
'/opt/stack/os-config-refresh/post-configure.d']' returned non-zero exit status 
1]


And in /var/log/upstart/neutron-l3-agent.log:

Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: kill -9 11634 (no 
filter matched)\n'
2014-01-28 23:17:33.073 13463 ERROR neutron.common.legacy [-] Skipping unknown 
group key: firewall_driver
2014-01-28 23:17:33.133 13463 CRITICAL neutron 
[req-5f2c30e1-d121-4183-8bb1-109940edc995 None] 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'kill', '-9', '11634']
Exit code: 99
Stdout: ''
Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: kill -9 11634 (no 
filter matched)\n'


Was able to fix by adding filter to /etc/neutron/rootwrap.d/l3.filters:
kill_l3_agent: KillFilter, root, /opt/stack/venvs/neutron/bin/python, -9

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273882

Title:
  os-collect-config unable to restart neutron-l3-agent

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  New

Bug description:
  The command 'os-collect-config --force --one' fails with:

  + service neutron-l3-agent restart
  neutron-l3-agent stop/waiting
  start: Job failed to start
  [2014-01-28 23:17:33,155] (os-refresh-config) [ERROR] during post-configure 
phase. [Command '['dib-run-parts', 
'/opt/stack/os-config-refresh/post-configure.d']' returned non-zero exit status 
1]

  
  And in /var/log/upstart/neutron-l3-agent.log:

  Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: kill -9 11634 (no 
filter matched)\n'
  2014-01-28 23:17:33.073 13463 ERROR neutron.common.legacy [-] Skipping 
unknown group key: firewall_driver
  2014-01-28 23:17:33.133 13463 CRITICAL neutron 
[req-5f2c30e1-d121-4183-8bb1-109940edc995 None] 
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'kill', '-9', '11634']
  Exit code: 99
  Stdout: ''
  Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: kill -9 11634 (no 
filter matched)\n'

  
  Was able to fix by adding filter to /etc/neutron/rootwrap.d/l3.filters:
  kill_l3_agent: KillFilter, root, /opt/stack/venvs/neutron/bin/python, -9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1273882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273894] [NEW] GlusterFS: Do not time out long-running volume snapshot operations

2014-01-28 Thread Eric Harney
Public bug reported:

Currently, when Cinder sends a snapshot create or delete job to Nova for
the GlusterFS driver, it has a fixed timeout window, and if the job
takes longer than that, the snapshot operation is failed.  (The
assumption is that Nova has somehow failed.)

This is problematic because it fails operations that are still active
but running very slowly.

The fix proposed here is to use the same update_snapshot_status API
which is used to finalize these operations to send periodic updates
while the operation is in progress, so that Cinder knows that Nova is
still active, and that the job does not need to be timed out.

This is backward compatible for both Havana Cinder and Havana Nova.
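
The proposed keepalive could be sketched like this (illustrative only; the status strings and `send_update` callback stand in for the real update_snapshot_status API): a background thread reports progress at a fixed interval while the job runs, then a final status is sent on completion or failure.

```python
import threading


def run_with_keepalive(job, send_update, interval=30.0):
    """Run job() while periodically reporting progress via send_update(),
    so the caller treats the worker as alive rather than timing it out."""
    done = threading.Event()

    def heartbeat():
        # Emit a "still working" update until the job finishes.
        while not done.wait(interval):
            send_update('creating')

    t = threading.Thread(target=heartbeat)
    t.start()
    try:
        job()
        status = 'available'
    except Exception:
        status = 'error'
    finally:
        done.set()
        t.join()
    send_update(status)
    return status
```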

** Affects: cinder
 Importance: Undecided
 Assignee: Eric Harney (eharney)
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Eric Harney (eharney)
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Eric Harney (eharney)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273894

Title:
  GlusterFS: Do not time out long-running volume snapshot operations

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, when Cinder sends a snapshot create or delete job to Nova
  for the GlusterFS driver, it has a fixed timeout window, and if the
  job takes longer than that, the snapshot operation is failed.  (The
  assumption is that Nova has somehow failed.)

  This is problematic because it fails operations that are still active
  but running very slowly.

  The fix proposed here is to use the same update_snapshot_status API
  which is used to finalize these operations to send periodic updates
  while the operation is in progress, so that Cinder knows that Nova is
  still active, and that the job does not need to be timed out.

  This is backward compatible for both Havana Cinder and Havana Nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1273894/+subscriptions



[Yahoo-eng-team] [Bug 1270608] Re: n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to fail

2014-01-28 Thread John Griffith
Turns out this does appear to be a side effect of commit
e2e0ed80799c1ba04b37278996a171fc74b6f9eb, which seems to be the root of
the problem. It appears that the initialize is in some cases deleting
the targets.

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
   Status: New => Confirmed

** Changed in: cinder
   Importance: Undecided => Critical

** Changed in: cinder
 Assignee: (unassigned) => John Griffith (john-griffith)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270608

Title:
  n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to
  fail

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  Changes are failing the gate-tempest-*-full gate due to an error message in 
the logs.
  The error message is like

  2014-01-18 20:13:19.437 | Log File: n-cpu
  2014-01-18 20:13:20.482 | 2014-01-18 20:04:05.189 ERROR nova.compute.manager 
[req-25a1842c-ce9a-4035-8975-651f6ee5ddfc 
tempest.scenario.manager-tempest-1060379467-user 
tempest.scenario.manager-tempest-1060379467-tenant] [instance: 
0b1c1b55-b520-4ff2-bac2-8457ba3f4b6a] Error: iSCSI device not found at 
/dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-a6e86002-dc25-4782-943b-58cc0c68238d-lun-1

  Here's logstash for the query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIEFORCBtZXNzYWdlOlwiRXJyb3I6IGlTQ1NJIGRldmljZSBub3QgZm91bmQgYXQgL2Rldi9kaXNrL2J5LXBhdGgvaXAtMTI3LjAuMC4xOjMyNjAtaXNjc2ktaXFuLjIwMTAtMTAub3JnLm9wZW5zdGFjazp2b2x1bWUtXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAxNTA4NTU5NTJ9

  shows several failures starting at 2014-01-17T14:00:00

  Maybe tempest is doing something that generates the ERROR message and then
isn't accepting that error message as it should?
  Or is nova logging an error message when it shouldn't?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1270608/+subscriptions



[Yahoo-eng-team] [Bug 1273943] [NEW] Tox failing on py27dj14 environment

2014-01-28 Thread Kirill Izotov
Public bug reported:

Full test log: http://paste.openstack.org/show/62066/

It seems like django.test.assertContains is unable to parse the template
HTML.

Further inspection revealed that there is a difference in parsing
between start and end tags and, moreover, between parsing tags with
and without attributes:

HTMLParser.tagfind.match('<script>document.write("something")</script>',
1).end() would result in 7, so the parsed tag will be 'script'

but

HTMLParser.tagfind.match('<script type="text/javascript">document.write("something")</script>', 1).end()
will result in 8 and a parsed tag of 'script ' (with trailing whitespace)

Somewhere between 2.7.3 and 2.7.4, Python changed its
HTMLParser.tagfind regex [1, 2]. Django relied heavily on this regex
in its own _HtmlParser modification [3] and hadn't reacted fast enough
to land the fix in 1.4 [4].
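The behavioral change can be reproduced with the two patterns alone (the regexes below are copied, to the best of my reading, from the linked CPython revisions; treat this as a sketch):

```python
import re

# tagfind as in Python <= 2.7.3 [1]: stops right after the tag name
tagfind_old = re.compile(r'[a-zA-Z][-.a-zA-Z0-9:_]*')
# tagfind as of Python 2.7.4+ [2]: also swallows trailing whitespace
# (and stray slashes) following the tag name
tagfind_new = re.compile(r'([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\s|/(?!>))*')

html = '<script type="text/javascript">document.write("something")</script>'
print(tagfind_old.match(html, 1).end())  # 7 -> slice html[1:7] is 'script'
print(tagfind_new.match(html, 1).end())  # 8 -> slice html[1:8] is 'script '
```

Django's _HtmlParser extracts the tag name as rawdata[i+1:match.end()], so the extra consumed character turns 'script' into 'script ' and the end-tag matching fails.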

So what we have here is that, for a particular configuration combining
Python 2.7.4+ and Django 1.4, django.test.assertContains is unable to
properly parse perfectly valid HTML and fails these tests.

The question is what we should do in this case. Should we pin the
py27dj14 environment to basepython 2.7.3, disable these tests on
Django 1.4, replace the assertion function with one without the bug,
or just ignore the bug as irrelevant?

---

[1] http://hg.python.org/cpython/file/70274d53c1dd/Lib/HTMLParser.py#l25
[2] http://hg.python.org/cpython/file/026ee0057e2d/Lib/HTMLParser.py#l25
[3] 
https://github.com/django/django/blob/98a1e14e093211f15e91daa4c9de0402be5d31b8/django/utils/html_parser.py#L36-L39
[4] 
https://github.com/django/django/commit/6bc1b222994301782bd80780bdeec8c4eb44631a

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1273943

Title:
  Tox failing on py27dj14 environment

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Full test log: http://paste.openstack.org/show/62066/

  It seems like django.test.assertContains is unable to parse the
  template HTML.

  Further inspection revealed that there is a difference in parsing
  between start and end tags and, moreover, between parsing tags with
  and without attributes:

  HTMLParser.tagfind.match('<script>document.write("something")</script>',
  1).end() would result in 7, so the parsed tag will be 'script'

  but

  HTMLParser.tagfind.match('<script type="text/javascript">document.write("something")</script>', 1).end()
  will result in 8 and a parsed tag of 'script ' (with trailing
  whitespace)

  Somewhere between 2.7.3 and 2.7.4, Python changed its
  HTMLParser.tagfind regex [1, 2]. Django relied heavily on this regex
  in its own _HtmlParser modification [3] and hadn't reacted fast enough
  to land the fix in 1.4 [4].

  So what we have here is that, for a particular configuration combining
  Python 2.7.4+ and Django 1.4, django.test.assertContains is unable to
  properly parse perfectly valid HTML and fails these tests.

  The question is what we should do in this case. Should we pin the
  py27dj14 environment to basepython 2.7.3, disable these tests on
  Django 1.4, replace the assertion function with one without the bug,
  or just ignore the bug as irrelevant?

  ---

  [1] http://hg.python.org/cpython/file/70274d53c1dd/Lib/HTMLParser.py#l25
  [2] http://hg.python.org/cpython/file/026ee0057e2d/Lib/HTMLParser.py#l25
  [3] 
https://github.com/django/django/blob/98a1e14e093211f15e91daa4c9de0402be5d31b8/django/utils/html_parser.py#L36-L39
  [4] 
https://github.com/django/django/commit/6bc1b222994301782bd80780bdeec8c4eb44631a

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1273943/+subscriptions



[Yahoo-eng-team] [Bug 1273975] [NEW] when "v2/images" API with "changes-since" option is requested, Glance API Server returns 500 code

2014-01-28 Thread Noboru Arai
Public bug reported:

environment:
OpenStack deployed via devstack

reproduce:
  1. Request "v2/images" with the "changes-since" option from the Glance API server.

  2. A 500 error code is returned.

expected result:
   A list filtered by the "changes-since" option is returned.

cause:
   In the "_make_conditions_from_filters" method of glance/db/sqlalchemy/api.py,
 the "timeutils.normalize_time" function is used.

Although the argument of "timeutils.normalize_time" has to be a
datetime, a unicode object is passed in.
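A minimal stdlib sketch of the kind of guard that would avoid this (normalize_time below is modeled on the oslo timeutils helper; the parsing helper and its format string are assumptions for illustration, not the actual Glance fix):

```python
from datetime import datetime

def normalize_time(ts):
    # modeled on timeutils.normalize_time: expects a datetime and
    # converts any tz-aware value to naive UTC
    offset = ts.utcoffset()
    return ts.replace(tzinfo=None) if offset is None else ts.replace(tzinfo=None) - offset

def changes_since_to_datetime(value):
    """changes-since arrives as a (unicode) string in the query
    string; parse it before handing it to normalize_time."""
    if isinstance(value, str):  # `unicode` on Python 2
        value = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S')
    return normalize_time(value)

print(changes_since_to_datetime('2014-01-29T15:21:16'))
```

Passing the raw query-string value straight into normalize_time is exactly what triggers the AttributeError that surfaces as the 500 below.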

syslog:

2014-01-29 15:21:16.444 24293 INFO glance.wsgi.server 
[561294e8-8719-4dd1-84d3-c070f9bf421d 2fdafc2b68a54ec19343c5eb45d2b65b 
ac9666adfda644f181d8005053e27b73] Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 389, in 
handle_one_response
result = self.application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 372, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 581, in __call__
return self.app(env, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 372, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 372, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 372, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 203, in __call__
return app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
return resp(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in 
__call__
response = self.app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
return resp(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 604, in __call__
request, **action_args)
  File "/opt/stack/glance/glance/common/wsgi.py", line 623, in dispatch
return method(*args, **kwargs)
  File "/opt/stack/glance/glance/api/v2/images.py", line 91, in index
member_status=member_status)
  File "/opt/stack/glance/glance/api/authorization.py", line 90, in list
images = self.image_repo.list(*args, **kwargs)
  File "/opt/stack/glance/glance/domain/proxy.py", line 56, in list
items = self.base.list(*args, **kwargs)
  Fi