[Yahoo-eng-team] [Bug 1277027] Re: test_admin_delete_servers_of_others failure due to unexpected task state

2014-02-17 Thread Attila Fazekas
The server must always be deletable (unless it is locked).
One of the reasons for removing the whitebox tests was that servers became
always deletable.
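
A termination wait that honours this expectation could tolerate ERROR state
once the delete has been issued, instead of treating it as a build failure. A
minimal sketch; the client call and exception type are illustrative, not
tempest's actual API:

    import time

    class NotFound(Exception):
        """Stand-in for the client's 404 exception."""

    def wait_for_termination(client, server_id, timeout=300, interval=3):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                client.get_server(server_id)  # assumed client call
            except NotFound:
                return  # the server is gone: deletion finished
            # Deliberately no raise on ERROR status: the delete has already
            # been requested, and an ERROR server must still be deletable.
            time.sleep(interval)
        raise RuntimeError('server %s was not deleted in time' % server_id)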

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277027

Title:
  test_admin_delete_servers_of_others failure due to unexpected task
  state

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  In Progress

Bug description:
  I couldn't find an existing bug for this, apologies if it's a dupe,
  looks like a nova bug:

  
  https://review.openstack.org/#/c/70717/
  
http://logs.openstack.org/17/70717/1/gate/gate-tempest-dsvm-full/9adaf90/console.html

  2014-02-06 08:04:16.350 | 2014-02-06 07:48:25,729 Response Body: {"server":
{"status": "ERROR", "os-access-ips:access_ip_v6": "", "updated":
"2014-02-06T07:48:25Z", "os-access-ips:access_ip_v4": "", "addresses": {},
"links": [{"href":
"http://127.0.0.1:8774/v3/servers/1cbd2e01-d2f3-45a0-baee-1d16df3dc7fd", "rel":
"self"}, {"href":
"http://127.0.0.1:8774/servers/1cbd2e01-d2f3-45a0-baee-1d16df3dc7fd", "rel":
"bookmark"}], "os-extended-status:task_state": null, "key_name": null, "image":
{"id": "7e649e07-3cea-4d95-90b2-2bbea7fce698", "links": [{"href":
"http://23.253.79.233:9292/images/7e649e07-3cea-4d95-90b2-2bbea7fce698", "rel":
"bookmark"}]}, "os-pci:pci_devices": [],
"os-extended-availability-zone:availability_zone": "nova",
"os-extended-status:power_state": 0, "os-config-drive:config_drive": "",
"host_id": "10f0dc42e72572ed6d30e8dc32b41edc1d41a3dacda6571c5aeabe6e",
"flavor": {"id": "42", "links": [{"href": "http://127.0.0.1:8774/flavors/42",
"rel": "bookmark"}]}, "id": "1cbd2e01-d2f3-45a0-baee-1d16df3dc7fd",
"security_groups": [{"name": "default"}], "user_id":
"f2262ed0a64c43359867456cfbccc153", "name":
"ServersAdminV3Test-instance-1705049062", "created": "2014-02-06T07:48:21Z",
"tenant_id": "8e932e11471a469e85a30195b2198f63", "os-extended-status:vm_state":
"error", "os-server-usage:launched_at": null,
"os-extended-volumes:volumes_attached": [], "os-server-usage:terminated_at":
null, "os-extended-status:locked_by": null, "fault": {"message": "No valid host
was found. ", "code": 500, "created": "2014-02-06T07:48:25Z"}, "metadata": {}}}
  2014-02-06 08:04:16.350 | }}}
  2014-02-06 08:04:16.350 | 
  2014-02-06 08:04:16.350 | Traceback (most recent call last):
  2014-02-06 08:04:16.350 |   File 
tempest/api/compute/v3/admin/test_servers.py, line 85, in 
test_admin_delete_servers_of_others
  2014-02-06 08:04:16.350 | 
self.servers_client.wait_for_server_termination(server['id'])
  2014-02-06 08:04:16.351 |   File 
tempest/services/compute/v3/json/servers_client.py, line 179, in 
wait_for_server_termination
  2014-02-06 08:04:16.351 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2014-02-06 08:04:16.351 | BuildErrorException: Server 
1cbd2e01-d2f3-45a0-baee-1d16df3dc7fd failed to build and is in ERROR status
  2014-02-06 08:04:16.351 | 
  2014-02-06 08:04:16.351 | 
  2014-02-06 08:04:16.351 | 
==
  2014-02-06 08:04:16.351 | FAIL: process-returncode
  2014-02-06 08:04:16.351 | process-returncode
  2014-02-06 08:04:16.351 | 
--
  2014-02-06 08:04:16.352 | _StringException: Binary content:
  2014-02-06 08:04:16.352 |   traceback (text/plain; charset=utf8)
  2014-02-06 08:04:16.352 | 
  2014-02-06 08:04:16.352 | 
  2014-02-06 08:04:16.352 | 
--
  2014-02-06 08:04:16.352 | Ran 2101 tests in 2350.793s
  2014-02-06 08:04:16.353 | 
  2014-02-06 08:04:16.353 | FAILED (failures=2, skipped=130)
  2014-02-06 08:04:16.353 | ERROR: InvocationError: '/bin/bash 
tools/pretty_tox.sh 
(?!.*\\[.*\\bslow\\b.*\\])(^tempest\\.(api|scenario|thirdparty|cli)) 
--concurrency=2'
  2014-02-06 08:04:16.354 | ___ summary 

  2014-02-06 08:04:16.354 | ERROR:   full: commands failed
  2014-02-06 08:04:16.463 | Checking logs...
  2014-02-06 08:04:16.562 | Log File: n-net
  2014-02-06 08:04:16.562 | 2014-02-06 07:34:40.598 30086 ERROR 
oslo.messaging._executors.base [-] Exception during message handling
  2014-02-06 08:04:16.562 | 
  2014-02-06 08:04:16.563 | 2014-02-06 07:34:40.601 30086 ERROR 
oslo.messaging._drivers.common [-] Returning exception Instance 
fcfcbace-c4dd-4214-957f-b01e0b47fcf4 could not be found.
  2014-02-06 08:04:16.563 | 
  2014-02-06 08:04:16.563 | 2014-02-06 07:34:40.601 30086 ERROR 
oslo.messaging._drivers.common [-] ['Traceback (most recent call last):\n', '  
File /opt/stack/new/oslo.messaging/oslo/messaging/_executors/base.py, line 
36, in _dispatch\nincoming.reply(self.callback(incoming.ctxt, 
incoming.message))\n', '  File 
/opt/stack/new/oslo.messaging/oslo/messaging/rpc/dispatcher.py, line 134, in 
__call__\nreturn self._dispatch(endpoint, 

[Yahoo-eng-team] [Bug 1281014] [NEW] unfriendly user experience if no valid host selected in nova scheduler

2014-02-17 Thread Haifeng, Song
Public bug reported:

nova version: 2.15.0

If not enough resources are available on any compute node, a command like 'nova
resize instancevm 100' will exit silently without a useful error or warning
message.
Users can be confused, not knowing what's wrong and what to do next.
Although there is a warning message in /var/log/conductor.log, as follows, few
users will find it easily:
2014-02-17 03:43:29.000 6320 WARNING nova.scheduler.utils 
[req-c0d5f130-c5a9-41b7-8fe4-4d08be4cc774 9ed1534f040c43e98293f6bc6b632e96 
bd5848810607480d968b6d1ca9a36637] Failed to compute_task_migrate_server: No 
valid host was found.
Traceback (most recent call last):

  File /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py, 
line 420, in catch_client_exception
return func(*args, **kwargs)

  File /usr/lib/python2.6/site-packages/nova/scheduler/manager.py, line 298, 
in select_destinations
filter_properties)

  File /usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py, 
line 148, in select_destinations
raise exception.NoValidHost(reason='')

NoValidHost: No valid host was found.

It would be better to report an error or warning message to the user when this
situation happens.
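
Until the API surfaces this, one client-side workaround is to poll the server
after the resize and print its fault field, which carries the "No valid host
was found." message. A sketch assuming the classic novaclient constructor;
credentials and the server id are placeholders:

    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'user', 'password', 'tenant',
                              'http://keystone:5000/v2.0')

    server = nova.servers.get('SERVER_ID')
    if server.status == 'ERROR':
        # Servers in ERROR state carry a fault dict with the message.
        fault = getattr(server, 'fault', {})
        print('resize failed: %s' % fault.get('message', 'unknown reason'))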

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281014

Title:
  unfriendly user experience if no valid host selected in nova scheduler

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova version: 2.15.0

  If not enough resources are available on any compute node, a command like
'nova resize instancevm 100' will exit silently without a useful error or
warning message.
  Users can be confused, not knowing what's wrong and what to do next.
  Although there is a warning message in /var/log/conductor.log, as follows,
few users will find it easily:
  2014-02-17 03:43:29.000 6320 WARNING nova.scheduler.utils 
[req-c0d5f130-c5a9-41b7-8fe4-4d08be4cc774 9ed1534f040c43e98293f6bc6b632e96 
bd5848810607480d968b6d1ca9a36637] Failed to compute_task_migrate_server: No 
valid host was found.
  Traceback (most recent call last):

File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py, line 
420, in catch_client_exception
  return func(*args, **kwargs)

File /usr/lib/python2.6/site-packages/nova/scheduler/manager.py, line 
298, in select_destinations
  filter_properties)

File /usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py, 
line 148, in select_destinations
  raise exception.NoValidHost(reason='')

  NoValidHost: No valid host was found.

  It would be better to report an error or warning message to the user when
  this situation happens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262124] [NEW] Ceilometer cannot poll and publish floatingip samples

2014-02-17 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The ceilometer central agent polls and publishes floatingip samples along with
other types of samples, but it cannot get valid floatingip samples.
The reason is that the ceilometer floatingip pollster calls the list method of
the nova API nova.api.openstack.compute.contrib.floating_ips.FloatingIPController,
and this API returns floating IPs filtered by context.project_id.

The current context.project_id is the id of the service tenant, so the
result is {"floatingips": []}

The logs of nova-api-os-compute are:

http://paste.openstack.org/show/55285/
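
The filtering described above can be sketched as follows (names hypothetical;
this only illustrates why a query made with the service tenant's context comes
back empty):

    # Hypothetical sketch of the project-scoped filtering in the nova API.
    def list_floating_ips(context, all_floating_ips):
        # FloatingIPController filters on the caller's project_id, so a
        # request made with the ceilometer service tenant's context
        # matches nothing and an empty list comes back.
        return [fip for fip in all_floating_ips
                if fip['project_id'] == context.project_id]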

** Affects: nova
 Importance: Undecided
 Assignee: Liusheng (liusheng)
 Status: New


** Tags: ceilometer
-- 
Ceilometer cannot poll and publish floatingip samples
https://bugs.launchpad.net/bugs/1262124
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266974] Re: nova work with glance SSL

2014-02-17 Thread Xavier Queralt
You need a newer version of eventlet. This was reported in [1] and fixed
with the patch in [2]. I'll try to update the packages in RDO but in the
meanwhile, could you open a bug in [3] for tracking it? Thanks.

[1] https://bitbucket.org/eventlet/eventlet/issue/136
[2] https://bitbucket.org/eventlet/eventlet/commits/609f230
[3] 
https://bugzilla.redhat.com/enter_bug.cgi?product=RDO&component=openstack-nova

** Changed in: nova
   Status: New => Invalid

** Changed in: python-glanceclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266974

Title:
  nova work with glance SSL

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Glance:
  Invalid

Bug description:
  My environment is:
  Nova api --https--> haproxy (SSL proxy) --http--> Glance api1
                                          |--http--> Glance api2

  I use centos + rdo rpm package(havana), my haproxy is 1.5_dev21.

  It works well if I configure nova.conf as follows:
  glance_api_servers=glanceapi1_ip:9292,glanceapi2_ip:9292

  But when I want the nova api to talk to the glance api over https, it doesn't
work. My config is as follows:
  glance_api_servers=https://Glanceapi_VIP:443 in nova.conf,

  When I boot a VM, I get the error below:
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/compute/api.py, line 1220, in create
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack 
legacy_bdm=legacy_bdm)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/compute/api.py, line 840, in 
_create_instance
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack image_id, boot_meta 
= self._get_image(context, image_href)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/compute/api.py, line 620, in _get_image
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack image = 
image_service.show(context, image_id)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/image/glance.py, line 292, in show
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack 
_reraise_translated_image_exception(image_id)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/image/glance.py, line 290, in show
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack image = 
self._client.call(context, 1, 'get', image_id)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/image/glance.py, line 214, in call
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack return 
getattr(client.images, method)(*args, **kwargs)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/v1/images.py, line 114, in get
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack % 
urllib.quote(str(image_id)))
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/common/http.py, line 293, in 
raw_request
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack return 
self._http_request(url, method, **kwargs)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/common/http.py, line 244, in 
_http_request
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack body_str = 
''.join([chunk for chunk in body_iter])
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/common/http.py, line 499, in 
__iter__
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack chunk = self.next()
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/common/http.py, line 515, in 
next
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack chunk = 
self._resp.read(CHUNKSIZE)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib64/python2.6/httplib.py, line 518, in read
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack self.close()
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib64/python2.6/httplib.py, line 499, in close
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack self.fp.close()
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib64/python2.6/socket.py, line 278, in close
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack self._sock.close()
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/eventlet/greenio.py, line 145, in __getattr__
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack attr = 
getattr(self.fd, name)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack 

[Yahoo-eng-team] [Bug 1262124] Re: Ceilometer cannot poll and publish floatingip samples

2014-02-17 Thread Liusheng
** Project changed: ceilometer => nova

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
 Assignee: (unassigned) => Liusheng (liusheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262124

Title:
  Ceilometer cannot poll and publish floatingip samples

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  The ceilometer central agent polls and publishes floatingip samples along
with other types of samples, but it cannot get valid floatingip samples.
  The reason is that the ceilometer floatingip pollster calls the list method
of the nova API
nova.api.openstack.compute.contrib.floating_ips.FloatingIPController, and this
API returns floating IPs filtered by context.project_id.

  The current context.project_id is the id of the service tenant, so the
  result is {"floatingips": []}

  The logs of nova-api-os-compute are:

  http://paste.openstack.org/show/55285/

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1262124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281087] [NEW] Nova block migrate will fail if image or snapshot is not available

2014-02-17 Thread Robert van Leeuwen
Public bug reported:

When you do a block migration, nova checks whether the image is in the _base
directory.
When the machine does not have the image, it will try to download it from Glance.
Since some people throw away unused images, and especially snapshots, the
migration will fail (traceback below) when the image is not available.
This makes block migration for e.g. maintenance a bit of a headache.


There are 2 possible solutions:
1) A feature that copies the _base file from the source machine to the
migration target machine
2) A glance soft-delete option that removes the image from view but actually
keeps it available for live migration (see the sketch below)
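
A pre-flight check on the source host could at least fail the migration early
with a clear error. A sketch assuming python-glanceclient; the helper name is
illustrative:

    from glanceclient import exc

    def image_available(glance, image_id):
        """Return True if the backing image still exists in Glance."""
        try:
            glance.images.get(image_id)
            return True
        except exc.HTTPNotFound:
            return False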


2014-02-17 07:51:33.170 27273 ERROR nova.openstack.common.rpc.amqp 
[req-e5d497d8-16a1-4b0b-82b5-7ea1eba0dcc6 a6f038c720424af48ef4dd3ef6ce86c1 
2792c7106e98489da6e38a1ad9ae6685] Exception during message handling
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp **args)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 90, in wrapped
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp payload)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 73, in wrapped
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 4067, in 
live_migration
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp 
block_migration, migrate_data)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 4059, in 
live_migration
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp 
block_migration, disk, dest, migrate_data)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/rpcapi.py, line 492, in 
pre_live_migration
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp 
disk=disk, migrate_data=migrate_data)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/rpcclient.py, line 85, in call
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp return 
self._invoke(self.proxy.call, ctxt, method, **kwargs)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/rpcclient.py, line 63, in _invoke
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp return 
cast_or_call(ctxt, msg, **self.kwargs)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/proxy.py, line 
126, in call
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp result = 
rpc.call(context, real_topic, msg, timeout)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py, line 
139, in call
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp return 
_get_impl().call(CONF, context, topic, msg, timeout)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_kombu.py, 
line 816, in call
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp 
rpc_amqp.get_connection_pool(conf, Connection))
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 574, 
in call
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp rv = 
list(rv)
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 539, 
in __iter__
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp raise 
result
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp 
ImageNotFound_Remote: Image 2b72d644-c346-4ac6-a645-8685fe50e06d could not be 
found.
2014-02-17 07:51:33.170 27273 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):

** Affects: nova

[Yahoo-eng-team] [Bug 1281095] [NEW] Keystone fails to start in devtest check run - Address already in use

2014-02-17 Thread Mark McLoughlin
Public bug reported:

See http://logs.openstack.org/83/73583/2/check/check-tempest-dsvm-
full/4009b24/logs/screen-key.txt.gz

 2014-02-17 11:07:32.608 22842 ERROR root [-] Failed to start the admin server
 2014-02-17 11:07:32.608 22842 TRACE root Traceback (most recent call last):
 2014-02-17 11:07:32.608 22842 TRACE root   File 
/opt/stack/new/keystone/bin/keystone-all, line 78, in serve
 2014-02-17 11:07:32.608 22842 TRACE root server.start()
 2014-02-17 11:07:32.608 22842 TRACE root   File 
/opt/stack/new/keystone/keystone/common/environment/eventlet_server.py, line 
65, in start
 2014-02-17 11:07:32.608 22842 TRACE root backlog=backlog)
 2014-02-17 11:07:32.608 22842 TRACE root   File 
/usr/local/lib/python2.7/dist-packages/eventlet/convenience.py, line 38, in 
listen
 2014-02-17 11:07:32.608 22842 TRACE root sock.bind(addr)
 2014-02-17 11:07:32.608 22842 TRACE root   File 
/usr/lib/python2.7/socket.py, line 224, in meth
 2014-02-17 11:07:32.608 22842 TRACE root return 
getattr(self._sock,name)(*args)
 2014-02-17 11:07:32.608 22842 TRACE root error: [Errno 98] Address already in 
use
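
The failure is a plain EADDRINUSE on keystone's admin port and can be
reproduced outside keystone with two sockets (a minimal sketch; 35357 is
keystone's default admin port):

    import socket

    s1 = socket.socket()
    s1.bind(('127.0.0.1', 35357))
    s1.listen(1)

    s2 = socket.socket()
    try:
        s2.bind(('127.0.0.1', 35357))
    except socket.error as e:
        print(e)  # [Errno 98] Address already in use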

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1281095

Title:
  Keystone fails to start in devtest check run - Address already in
  use

Status in OpenStack Identity (Keystone):
  New

Bug description:
  See http://logs.openstack.org/83/73583/2/check/check-tempest-dsvm-
  full/4009b24/logs/screen-key.txt.gz

   2014-02-17 11:07:32.608 22842 ERROR root [-] Failed to start the admin server
   2014-02-17 11:07:32.608 22842 TRACE root Traceback (most recent call last):
   2014-02-17 11:07:32.608 22842 TRACE root   File 
/opt/stack/new/keystone/bin/keystone-all, line 78, in serve
   2014-02-17 11:07:32.608 22842 TRACE root server.start()
   2014-02-17 11:07:32.608 22842 TRACE root   File 
/opt/stack/new/keystone/keystone/common/environment/eventlet_server.py, line 
65, in start
   2014-02-17 11:07:32.608 22842 TRACE root backlog=backlog)
   2014-02-17 11:07:32.608 22842 TRACE root   File 
/usr/local/lib/python2.7/dist-packages/eventlet/convenience.py, line 38, in 
listen
   2014-02-17 11:07:32.608 22842 TRACE root sock.bind(addr)
   2014-02-17 11:07:32.608 22842 TRACE root   File 
/usr/lib/python2.7/socket.py, line 224, in meth
   2014-02-17 11:07:32.608 22842 TRACE root return 
getattr(self._sock,name)(*args)
   2014-02-17 11:07:32.608 22842 TRACE root error: [Errno 98] Address already 
in use

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1281095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281098] [NEW] Too long tunnel device names

2014-02-17 Thread Viktor Křivák
Public bug reported:

The Openvswitch neutron agent creates names for tunnel devices that are too long.
Port names are created like %type-%remoteip.
For example, for a gre type tunnel the name is gre-192.168.201.10, which exceeds
the max length for linux network devices (15 chars).
This name passes through openvswitch, but creates a failed port with ofport -1
and proceeds with an error like this:

2014-02-17 11:25:14.048 22908 ERROR neutron.agent.linux.ovs_lib [-] Unable to 
execute ['ovs-ofctl', 'add-flow', 'br-tun', 
'hard_timeout=0,idle_timeout=0,priority=1,in_port=-1,actions=resubmit(,2)']. 
Exception: 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ovs-ofctl', 'add-flow', 'br-tun', 
'hard_timeout=0,idle_timeout=0,priority=1,in_port=-1,actions=resubmit(,2)']
Exit code: 1
Stdout: ''
Stderr: 'ovs-ofctl: -1: negative values not supported for in_port\n'

This bug affects only devices with a long ip address: 10.0.0.1 will pass,
but 192.168.201.10 will fail.

Found in HAVANA with:
openvswitch: 1.9.3-1
linux: 3.2.54-2

But I think the length limit will apply to all versions.
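
One possible scheme, sketched below, derives the port name from a digest of
the remote IP so the result always fits the 15-character limit (an
illustration only, not the agent's actual fix):

    import hashlib

    IFNAMSIZ = 15  # Linux limit on network device name length

    def tunnel_port_name(tunnel_type, remote_ip):
        # 'gre-192.168.201.10' is 18 chars; a short digest of the remote
        # IP keeps the name unique per peer and within the limit.
        digest = hashlib.sha1(remote_ip.encode()).hexdigest()[:8]
        name = '%s-%s' % (tunnel_type, digest)
        assert len(name) <= IFNAMSIZ
        return name

    print(tunnel_port_name('gre', '192.168.201.10'))  # 'gre-' + 8 hex chars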

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281098

Title:
  Too long tunnel device names

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Openvswitch neutron agent creates names for tunnel devices that are too
long.
  Port names are created like %type-%remoteip.
  For example, for a gre type tunnel the name is gre-192.168.201.10, which
exceeds the max length for linux network devices (15 chars).
  This name passes through openvswitch, but creates a failed port with ofport
-1 and proceeds with an error like this:

  2014-02-17 11:25:14.048 22908 ERROR neutron.agent.linux.ovs_lib [-] Unable to 
execute ['ovs-ofctl', 'add-flow', 'br-tun', 
'hard_timeout=0,idle_timeout=0,priority=1,in_port=-1,actions=resubmit(,2)']. 
Exception: 
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ovs-ofctl', 'add-flow', 'br-tun', 
'hard_timeout=0,idle_timeout=0,priority=1,in_port=-1,actions=resubmit(,2)']
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-ofctl: -1: negative values not supported for in_port\n'

  This bug affects only devices with a long ip address: 10.0.0.1 will pass,
  but 192.168.201.10 will fail.

  Found in HAVANA with:
  openvswitch: 1.9.3-1
  linux: 3.2.54-2

  But I think the length limit will apply to all versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1281098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252454] Re: keystone was referencing nova's unit tests README

2014-02-17 Thread Dolph Mathews
Thanks Victor! This was probably fixed when HACKING was rewritten to
mostly point to http://docs.openstack.org/developer/hacking/

** Changed in: keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1252454

Title:
  keystone was referencing nova's unit tests README

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Keystone's HACKING.rst referenced nova:

  237   For more information on creating unit tests and utilizing the testing
  238   infrastructure in OpenStack Nova, please read nova/testing/README.rst.

  Either keystone should have its own docs in creating unit tests or
  this should be moved into a common place that all projects can use.

  See comments in patchset 1 of https://review.openstack.org/#/c/55900

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1252454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280105] Re: urllib/urllib2 is incompatible with python 3

2014-02-17 Thread Fengqian
** No longer affects: python-cinderclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280105

Title:
  urllib/urllib2 is incompatible with python 3

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Data Processing (Savanna):
  In Progress
Status in Trove - Database as a Service:
  In Progress
Status in Zuul: A project gating system:
  New

Bug description:
  urllib/urllib2 is incompatible with python 3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1280105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280826] Re: config generator fails when project enables lazy messages

2014-02-17 Thread James Carey
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => James Carey (jecarey)

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280826

Title:
  config generator fails when project enables lazy messages

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  In Progress

Bug description:
  When lazy message translation is enabled in Nova, the check_update.sh
  calls generate_sample.sh, which uses a copy of oslo's
  config/generator.py which produces the following message:

  CRITICAL nova [-] TypeError: Message objects do not support addition.

  The config/generator.py module installs i18n without lazy enabled
  (named parameter 'lazy' not specified):

  gettextutils.install('nova')

  To gather information about the project's options, it loads the project
  modules looking for entry points.  When these modules are loaded,
  they may contain code that enables lazy translation.  In the case of Nova
  this is nova/cmds/__init__.py, which calls:

  gettextutils.enable_lazy()

  This means that the messages returned with information for the entry
  points are lazy enabled.  Thus when config/generator.py tries to work
  with the help message for the option associated with the Nova modules:

  opt_help += ' (' + OPT_TYPES[opt_type] + ')'

  it fails because opt_help is a gettextutils.Message instance, which
  doesn't support addition.
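
  The failure can be reproduced in isolation: a str subclass that forbids '+'
  (as lazy Message objects do) breaks concatenation but not %-formatting,
  which is why switching the generator to formatting would sidestep the
  error. A minimal sketch, not oslo's actual Message class:

      class Message(str):
          """Stand-in for gettextutils.Message with lazy translation on."""
          def __add__(self, other):
              raise TypeError('Message objects do not support addition.')

      opt_help = Message('Help text')
      try:
          opt_help = opt_help + ' (StrOpt)'            # raises TypeError
      except TypeError:
          opt_help = '%s (%s)' % (opt_help, 'StrOpt')  # formatting works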

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280826/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280709] Re: intermittent resize2fs failures: kernel BUG at fs/ext4/resize.c:409!

2014-02-17 Thread Dan Prince
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Dan Prince (dan-prince)

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280709

Title:
  intermittent resize2fs failures: kernel BUG at fs/ext4/resize.c:409!

Status in Openstack disk image builder:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  I've been seeing intermittent resize2fs issues w/ DIB images (deployed
  with Nova bare metal) for months now. The issue occurs at first boot
  time once a Nova bare metal instance has booted. Cloud init makes a
  call to resize the file system which fails with the following kernel
  BUG message:

  [  112.136896] EXT4-fs (sda1): resizing filesystem from 1080688 to 5243214
blocks
  [  112.164072] [ cut here ]
  [  112.164179] kernel BUG at fs/ext4/resize.c:409!
  [  112.164285] invalid opcode:  [#1] SMP
  [  112.164488] Modules linked in: openvswitch vxlan ip_tunnel gre libcrc32c 
noui
  [  112.165042] CPU: 0 PID: 968 Comm: resize2fs Tainted: G  I  
3.12.9-301
  [  112.165042] Hardware name: Dell Inc. OptiPlex 760 /0M858N, 
B9
  [  112.165042] task: 8800b7969080 ti: 8800b74f4000 task.ti: 
8800b740
  [  112.165042] RIP: 0010:[81254fa1]  [81254fa1] 
set_flexbg_0
  [  112.165042] RSP: 0018:8800b74f5c28  EFLAGS: 00010216
  [  112.165042] RAX: 8800b743bf00 RBX: 88007fae9000 RCX: 
1000
  [  112.165042] RDX: 88007f4e2c00 RSI: 0001 RDI: 
0010
  [  112.165042] RBP: 8800b74f5c70 R08: 8800afd27750 R09: 
8800b7daee00
  [  112.165042] R10:  R11: 8800afd27750 R12: 
00188000
  [  112.165042] R13: 0010 R14: 00188000 R15: 
8800b743b800
  [  112.165042] FS:  7f858a7f4780() GS:8800be80() 
knlGS:0
  [  112.165042] CS:  0010 DS:  ES:  CR0: 8005003b
  [  112.165042] CR2: 7f8589c1eea6 CR3: b7979000 CR4: 
000407f0
  [  112.165042] Stack:
  [  112.165042]  8800afd27750 8800b96cb060 88188000 
00108125b
  [  112.165042]  0010 88007f4e2ed0 07ff 
8800b7430
  [  112.165042]  8800b743b800 8800b74f5d68 81256768 
00180
  [  112.165042] Call Trace:
  [  112.165042]  [81256768] ext4_flex_group_add+0x1448/0x1830
  [  112.165042]  [81257de2] ext4_resize_fs+0x7b2/0xe80
  [  112.165042]  [8123ac50] ext4_ioctl+0xbf0/0xf00
  [  112.165042]  [811c111d] do_vfs_ioctl+0x2dd/0x4b0
  [  112.165042]  [811b9df2] ? final_putname+0x22/0x50
  [  112.165042]  [811c1371] SyS_ioctl+0x81/0xa0
  [  112.165042]  [81676aa9] system_call_fastpath+0x16/0x1b
  [  112.165042] Code: c8 4c 89 df e8 41 96 f8 ff 44 89 e8 49 01 c4 44 29 6d d4 0
  [  112.165042] RIP  [81254fa1] set_flexbg_block_bitmap+0x171/0x180
  [  112.165042]  RSP 8800b74f5c28
  [  112.175633] ---[ end trace f179f994a575df06 ]---

To manage notifications about this bug go to:
https://bugs.launchpad.net/diskimage-builder/+bug/1280709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281217] [NEW] Improve error notification when schedule_run_instance fails

2014-02-17 Thread Julien Vey
Public bug reported:

When trying to spawn a new instance on an OpenStack installation, and
the spawn fails with an error (for instance a driver error), you get the
message "No valid host was found." with an empty reason.

It would help to have a more meaningful message when possible in order
to diagnose the problem.

This would result, for instance, in having a message like

Error: No valid host was found. Error from host: precise64 (node
precise64): InstanceDeployFailure: Image container format not supported
(ami) ].

instead of just (as it is currently)

Error: No valid host was found.
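
The shape of the improvement can be sketched by threading the per-host failure
into the exception's reason instead of raising it empty (the exception name
matches the scheduler's; the rest is illustrative):

    class NoValidHost(Exception):
        def __init__(self, reason=''):
            super(NoValidHost, self).__init__(
                'No valid host was found. %s' % reason)

    # Per-host failure collected during scheduling (illustrative data):
    errors = {'precise64': 'InstanceDeployFailure: Image container '
                           'format not supported (ami)'}
    reason = '; '.join('Error from host: %s (node %s): %s' % (h, h, e)
                       for h, e in errors.items())
    try:
        raise NoValidHost(reason=reason)
    except NoValidHost as exc:
        print(exc)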

** Affects: nova
 Importance: Undecided
 Assignee: Julien Vey (vey-julien)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Julien Vey (vey-julien)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281217

Title:
  Improve error notification when schedule_run_instance fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  When trying to spawn a new instance on an OpenStack installation, and
  the spawn fails with an error (for instance a driver error), you get the
  message "No valid host was found." with an empty reason.

  It would help to have a more meaningful message when possible in order
  to diagnose the problem.

  This would result, for instance, in having a message like

  Error: No valid host was found. Error from host: precise64 (node
  precise64): InstanceDeployFailure: Image container format not
  supported (ami) ].

  instead of just (as it is currently)

  Error: No valid host was found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281216] [NEW] Keystone Havana Authentication Error using samAccountName in Active Directory

2014-02-17 Thread Brian Seltzer
Public bug reported:

When using Active Directory as the LDAP backend for Keystone, if I use
the cn attribute for user_id_attribute and user_name_attribute,
authentication works fine.  However, if I try to use samAccountName,
authentication fails.  For example, keystone user-list returns the
following error:

Authorization Failed: An unexpected error prevented the server from
fulfilling your request. 'name' (HTTP 500)

and the login screen in Horizon shows: "An error occurred authenticating.
Please try again later."

Also, the following trace is shown in the keystone.log:

2014-02-17 06:48:37.472 8207 ERROR keystone.common.wsgi [-] 'name'
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 238, in 
__call__
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi result = 
method(context, **params)
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/controllers.py, line 127, in 
authenticate
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi auth_token_data, 
roles_ref=roles_ref, catalog_ref=catalog_ref)
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/manager.py, line 44, in 
_wrapper
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi return f(*args, 
**kw)
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/providers/uuid.py, line 362, 
in issue_v2_token
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi token_ref, 
roles_ref, catalog_ref)
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/providers/uuid.py, line 57, 
in format_token
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi 'name': 
user_ref['name'],
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi KeyError: 'name'
2014-02-17 06:48:37.472 8207 TRACE keystone.common.wsgi 
2014-02-17 06:48:37.474 8207 INFO access [-] 192.168.1.128 - - 
[17/Feb/2014:11:48:37 +] POST http://192.168.1.128:35357/v2.0/tokens 
HTTP/1.0 500 150

It appears that the user_ref has no 'name' property when I try to use
samAccountName.  This seems to have worked in Grizzly but does not work
in Havana.  Below are the applicable lines from the keystone.conf:

[ldap]
query_scope = sub
url = LDAP://192.168.1.253
user = CN=ldapuser,CN=Users,DC=mydomain,DC=net
password = ldapuserpassword
suffix = DC=mydomain,DC=net
use_dumb_member = True
dumb_member = CN=ldapuser,CN=Users,DC=mydomain,DC=net

user_tree_dn = CN=Users,DC=mydomain,DC=net
user_objectclass = organizationalPerson
user_id_attribute = samAccountName
user_name_attribute = samAccountName
user_mail_attribute = mail
user_enabled_attribute = userAccountControl
user_enabled_mask = 2
user_enabled_default = 512
user_attribute_ignore = password,tenant_id,tenants
user_allow_create = False
user_allow_update = False
user_allow_delete = False

tenant_tree_dn = OU=Projects,OU=OpenStack,DC=mydomain,DC=net
tenant_objectclass = organizationalUnit
tenant_id_attribute = ou
tenant_member_attribute = member
tenant_name_attribute = ou
tenant_desc_attribute = description
tenant_enabled_attribute = extensionName
tenant_attribute_ignore = description,businessCategory,extensionName
tenant_allow_create = True
tenant_allow_update = True
tenant_allow_delete = True

role_tree_dn = OU=Roles,OU=OpenStack,DC=mydomain,DC=net
role_objectclass = organizationalRole
role_id_attribute = cn
role_name_attribute = cn
role_member_attribute = roleOccupant
role_allow_create = True
role_allow_update = True
role_allow_delete = True

Again, if I change the user_id_attribute and the user_name_attribute to
cn then everything works fine.  Please advise.  Thanks!
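
The trace points at an unconditional user_ref['name'] lookup; a defensive
variant would fall back to the user id when the mapped attribute is missing. A
sketch of the failure and one possible fallback, not keystone's actual code:

    # user_ref as built from the LDAP attribute map; 'name' can be missing
    # when samAccountName does not map back onto the ref.
    user_ref = {'id': 'jdoe'}

    try:
        name = user_ref['name']  # KeyError: 'name', as in the trace
    except KeyError:
        name = user_ref.get('name', user_ref['id'])  # fall back to the id

    print(name)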

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: active directory

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1281216

Title:
  Keystone Havana Authentication Error using samAccountName in Active
  Directory

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When using Active Directory as the LDAP backend for Keystone, if I use
  the cn attribute for user_id_attribute and user_name_attribute,
  authentication works fine.  However, if I try to use samAccountName,
  authentication fails.  For example, keystone user-list returns the
  following error:

  Authorization Failed: An unexpected error prevented the server from
  fulfilling your request. 'name' (HTTP 500)

  and the login screen in Horizon shows: "An error occurred
  authenticating. Please try again later."

  Also, the following trace is shown in the keystone.log:

  2014-02-17 06:48:37.472 8207 ERROR keystone.common.wsgi [-] 'name'
  2014-02-17 

[Yahoo-eng-team] [Bug 1278581] Re: NSX plugin can't reassociate floating ip to different IP on same port

2014-02-17 Thread Aaron Rosen
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1278581

Title:
  NSX plugin can't reassociate floating ip to different IP on same port

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  the NSX plugin is unable to associate a floating IP to a different
  internal address on the same port where it's currently associated.

  How to reproduce:
  - create a port with two IPs on the same subnet (just because the fip needs 
to go through the same router)
  - associate a floating IP with IP #1
  - associate the same floating IP with IP#2
  - FAIL!

  This happens with this new test being introduced in tempest:
  https://review.openstack.org/#/c/71251
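
  The failing reassociation maps to a single port update on the floating IP.
  A sketch assuming an authenticated python-neutronclient instance; ids and
  credentials are placeholders:

      from neutronclient.v2_0 import client

      neutron = client.Client(username='user', password='password',
                              tenant_name='tenant',
                              auth_url='http://keystone:5000/v2.0')

      # Re-point the floating IP at IP #2 on the same port; this is the
      # call that fails against the NSX plugin.
      body = {'floatingip': {'port_id': 'PORT_ID',
                             'fixed_ip_address': '10.0.0.4'}}
      neutron.update_floatingip('FLOATINGIP_ID', body)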

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1278581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251422] Re: deleting security-group fails when neutron and nvp are out of sync

2014-02-17 Thread Aaron Rosen
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251422

Title:
  deleting security-group fails when neutron and nvp are out of sync

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  %neutron security-group-list
  +--------------------------------------+-----------------------------+--------------------------------+
  | id                                   | name                        | description                    |
  +--------------------------------------+-----------------------------+--------------------------------+
  | 0163fc21-e0d2-4219-9efc-47c71e102ba8 | default                     | default                        |
  | 1023c62b-029f-4faf-9257-97ffc870fffd | default                     | default                        |
  | 7c096113-2e5d-4fc3-b3ba-8462ec5e6964 | default                     | default                        |
  | b9fc7dc0-c22b-45ee-a811-d2a11b7e864f | security-tempest-1342522857 | description-tempest-1430856972 |
  | dfc6bff9-b412-4f47-b38b-6d0079412a4d | default                     | default                        |
  +--------------------------------------+-----------------------------+--------------------------------+
  %neutron security-group-delete b9fc7dc0-c22b-45ee-a811-d2a11b7e864f
  500-{u'NeutronError': {u'message': u'An unknown exception occurred.', 
u'type': u'NeutronException', u'detail': u''}}

  
  neutron:/var/log/neutron.log
  2013-11-14 05:08:05,037 (neutron.plugins.nicira.api_client.request): DEBUG 
request _issue_request [6997] Completed request 'POST 
https://x.x.x.x:443//ws.v1/security-profile': 201 (0.06 seconds)
  2013-11-14 05:08:05,037 (neutron.plugins.nicira.api_client.request): DEBUG 
request _issue_request Reading X-Nvp-config-Generation response header: 
'37774691'
  2013-11-14 05:08:05,038 (neutron.plugins.nicira.api_client.client): DEBUG 
client release_connection [6997] Released connection https://x.x.x.x:443. 9 
connection(s) available.
  2013-11-14 05:08:05,038 (neutron.plugins.nicira.api_client.request_eventlet): 
DEBUG request_eventlet _handle_request [6997] Completed request 'POST 
/ws.v1/security-profile': 201
  2013-11-14 05:08:05,039 (neutron.plugins.nicira.nvplib): DEBUG nvplib 
create_security_profile Created Security Profile: {u'display_name': 
u'security-tempest-1342522857', u'_href': 
u'/ws.v1/security-profile/b9fc7dc0-c22b-45ee-a811-d2a11b7e864f', u'tags': 
[{u'scope': u'quan
  tum', u'tag': u'2014.1.a230.g5c9c9b9'}, {u'scope': u'os_tid', u'tag': 
u'csi-tenant-tempest'}], u'logical_port_egress_rules': [{u'ethertype': u'IPv4', 
u'port_range_max': 68, u'port_range_min': 68, u'protocol': 17}], u'_schema': 
u'/ws.v1/schema/SecurityProfileConfig', u'log
  ical_port_ingress_rules': [{u'ethertype': u'IPv4'}, {u'ethertype': u'IPv6'}], 
u'uuid': u'b9fc7dc0-c22b-45ee-a811-d2a11b7e864f'}
  2013-11-14 05:08:05,063 (neutron.openstack.common.rpc.amqp): DEBUG amqp 
notify Sending security_group.create.end on notifications.info
  2013-11-14 05:08:05,063 (neutron.openstack.common.rpc.amqp): DEBUG amqp 
_add_unique_id UNIQUE_ID is 3221f6dd937e4657993532db6910d7fc.
  2013-11-14 05:08:05,069 (amqp): DEBUG channel _do_close Closed channel #1
  2013-11-14 05:08:05,070 (amqp): DEBUG channel __init__ using channel_id: 1
  2013-11-14 05:08:05,071 (amqp): DEBUG channel _open_ok Channel open
  2013-11-14 05:08:05,072 (neutron.wsgi): INFO log write x.x.x.x,x.x.x.x - - 
[14/Nov/2013 05:08:05] POST /v2.0/security-groups.json HTTP/1.1 201 954 
0.180143

  2013-11-14 05:08:05,280 (keystoneclient.middleware.auth_token): DEBUG 
auth_token __call__ Authenticating user token
  2013-11-14 05:08:05,280 (keystoneclient.middleware.auth_token): DEBUG 
auth_token _remove_auth_headers Removing headers from request environment: 
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
  2013-11-14 05:08:05,283 (iso8601.iso8601): DEBUG iso8601 parse_date Parsed 
2013-11-15T05:08:04.00Z into {'tz_sign': None, 'second_fraction': 
u'00', 'hour': u'05', 'tz_hour': None, 'month': u'11', 'timezone': u'Z', 
'second': u'04', 'tz_minute': None, 'year': u'2013', 'separator': u'T', 'day': 
u'15', 'minute': u'08'} with default timezone <iso8601.iso8601.Utc object at 
0x1798c50>
  2013-11-14 05:08:05,284 (iso8601.iso8601): DEBUG iso8601 to_int Got u'2013' 
for 'year' with default None
  2013-11-14 05:08:05,284 (iso8601.iso8601): DEBUG iso8601 to_int Got u'11' for 
'month' with default None
  2013-11-14 05:08:05,284 (iso8601.iso8601): DEBUG iso8601 to_int Got u'15' for 
'day' with default None
  2013-11-14 05:08:05,284 (iso8601.iso8601): DEBUG 

[Yahoo-eng-team] [Bug 1097990] Re: Provide atomic database access in the nvp plugin

2014-02-17 Thread Aaron Rosen
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1097990

Title:
   Provide atomic database access in the nvp plugin

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  The nvp plugin has several places where it does not use transactions
  and it should.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1097990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265081] Re: nicira: db integrity error during port deletion

2014-02-17 Thread Aaron Rosen
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265081

Title:
  nicira: db integrity error during port deletion

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  This is a stacktrace experienced during a test on trunk:

  2013-12-30 13:42:39.606 29118 DEBUG routes.middleware [-] Matched DELETE 
/ports/a42a9719-8416-4c44-a4f3-b3861648281f __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2013-12-30 13:42:39.606 29118 DEBUG routes.middleware [-] Route path: 
'/ports/{id}{.format}', defaults: {'action': u'delete', 'controller': wsgify 
at 72315472 wrapping function resource at 0x44f2500} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2013-12-30 13:42:39.606 29118 DEBUG routes.middleware [-] Match dict: 
{'action': u'delete', 'controller': wsgify at 72315472 wrapping function 
resource at 0x44f2500, 'id': u'a42a9719-8416-4c44-a4f3-b3861648281f', 
'format': None} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2013-12-30 13:42:39.606 29118 DEBUG neutron.openstack.common.rpc.amqp 
[req-d8b115ac-38c3-4690-b670-7c81a9d9ca15 663434b6088b4984991d07a858d6f6bc 
4a9f94ca2c674047b8e86fae8eefc9a4] Sending port.delete.start on 
notifications.info notify 
/opt/stack/neutron/neutron/openstack/common/rpc/amqp.py:598
  2013-12-30 13:42:39.607 29118 DEBUG neutron.openstack.common.rpc.amqp 
[req-d8b115ac-38c3-4690-b670-7c81a9d9ca15 663434b6088b4984991d07a858d6f6bc 
4a9f94ca2c674047b8e86fae8eefc9a4] UNIQUE_ID is 
3f460839b3144eaa911c04de8725eb06. _add_unique_id 
/opt/stack/neutron/neutron/openstack/common/rpc/amqp.py:339
  2013-12-30 13:42:39.610 29118 DEBUG neutron.plugins.nicira.api_client.client 
[-] [3681] Acquired connection https://192.168.1.8:443. 9 connection(s) 
available. acquire_connection 
/opt/stack/neutron/neutron/plugins/nicira/api_client/client.py:134
  2013-12-30 13:42:39.611 29118 DEBUG neutron.plugins.nicira.api_client.request 
[-] [3681] Issuing - request POST 
https://192.168.1.8:443//ws.v1/lswitch/6d795954-a836-47f2-b2e3-2438e0da7f80/lport
 _issue_request 
/opt/stack/neutron/neutron/plugins/nicira/api_client/request.py:99
  2013-12-30 13:42:39.611 29118 DEBUG neutron.plugins.nicira.api_client.request 
[-] Setting X-Nvp-Wait-For-Config-Generation request header: '96230' 
_issue_request 
/opt/stack/neutron/neutron/plugins/nicira/api_client/request.py:124
  2013-12-30 13:42:39.647 29118 DEBUG neutron.plugins.nicira.nvplib 
[req-d8b115ac-38c3-4690-b670-7c81a9d9ca15 663434b6088b4984991d07a858d6f6bc 
4a9f94ca2c674047b8e86fae8eefc9a4] Looking for port with q_port_id tag 
'a42a9719-8416-4c44-a4f3-b3861648281f' on: 
'6d795954-a836-47f2-b2e3-2438e0da7f80' get_port_by_neutron_tag 
/opt/stack/neutron/neutron/plugins/nicira/nvplib.py:743
  2013-12-30 13:42:39.648 29118 DEBUG neutron.plugins.nicira.api_client.client 
[-] [3682] Acquired connection https://192.168.1.8:443. 8 connection(s) 
available. acquire_connection 
/opt/stack/neutron/neutron/plugins/nicira/api_client/client.py:134
  2013-12-30 13:42:39.648 29118 DEBUG neutron.plugins.nicira.api_client.request 
[-] [3682] Issuing - request GET 
https://192.168.1.8:443//ws.v1/lswitch/6d795954-a836-47f2-b2e3-2438e0da7f80/lport?fields=uuidtag_scope=q_port_idtag=a42a9719-8416-4c44-a4f3-b3861648281f
 _issue_request 
/opt/stack/neutron/neutron/plugins/nicira/api_client/request.py:99
  2013-12-30 13:42:39.648 29118 DEBUG neutron.plugins.nicira.api_client.request 
[-] Setting X-Nvp-Wait-For-Config-Generation request header: '96230' 
_issue_request 
/opt/stack/neutron/neutron/plugins/nicira/api_client/request.py:124
  2013-12-30 13:42:39.670 29118 DEBUG neutron.openstack.common.rpc.amqp [-] 
received {u'_context_roles': [u'admin'], u'_context_request_id': 
u'req-0f858449-8d08-4c71-af5c-3fa7b4470f20', u'_context_tenant_name': None, 
u'_context_project_name': None, u'_context_read_deleted': u'no', u'args': 
{u'agent_state': {u'agent_state': {u'topic': u'dhcp_agent', u'binary': 
u'neutron-dhcp-agent', u'host': u'cidevstack', u'agent_type': u'DHCP agent', 
u'configurations': {u'subnets': 3, u'use_namespaces': True, 
u'dhcp_lease_duration': 86400, u'dhcp_driver': 
u'neutron.agent.linux.dhcp.Dnsmasq', u'networks': 3, u'ports': 5}}}, u'time': 
u'2013-12-30T21:42:39.665454'}, u'_context_tenant': None, u'method': 
u'report_state', u'_unique_id': u'49ca1db3e75247e594c8c4005b4b262a', 
u'_context_timestamp': u'2013-12-30 21:42:39.665074', u'_context_is_admin': 
True, u'version': u'1.0', u'_context_project_id': None, u'_context_tenant_id': 
None, u'_context_user': None, u'_context_user_id': None, u'namespace': None, 
u'_c
 ontext_user_name': None} _safe_log 
/opt/stack/neutron/neutron/openstack/common/rpc/common.py:276
  2013-12-30 

[Yahoo-eng-team] [Bug 1130171] Re: NVP plugin raises OutOfSync when listing shared networks

2014-02-17 Thread Aaron Rosen
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1130171

Title:
  NVP plugin raises OutOfSync when listing shared networks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  Currently quantum NVP plugin always raises when listing networks if
  shared networks are present.

  As found out by Aaron:
  I just checked this out and this is a bug. When we query NVP for the networks 
it knows about we pass in a filter so that nvp only returns us the networks 
with a tag that are owned by the tenant id. Since this network is not owned by 
the tenant the code we have believes that the two are out of sync.

  Unfortunately we cannot say os_tid=tenant_id or shared in nvp queries, so 
we'll have to do two nvp queries.
  We will add a new tag for marking shared networks.
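
  The two-query approach could look roughly like this (entirely hypothetical
  helper names; it only illustrates merging tenant-owned and shared results):

      # Hypothetical NVP query helper; the real plugin issues REST calls.
      def list_tenant_networks(nvp_query, tenant_id):
          owned = nvp_query(tag_scope='os_tid', tag=tenant_id)
          shared = nvp_query(tag_scope='shared', tag='true')  # the new tag
          # Merge, de-duplicating on the switch uuid.
          by_uuid = {sw['uuid']: sw for sw in owned + shared}
          return list(by_uuid.values())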

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1130171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1130053] Re: Ensure NVP plugin does not apply SNAT to internal traffic

2014-02-17 Thread Aaron Rosen
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1130053

Title:
  Ensure NVP plugin does not apply SNAT to internal traffic

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  In the NVP plugin, the external gateway creates a default SNAT rule which 
causes all traffic to be SNATted.
  We should ensure that appropriate measures are taken so that traffic whose
destination is local is not SNATted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1130053/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1117769] Re: Incorrect network type translation in nicira plugin

2014-02-17 Thread Aaron Rosen
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1117769

Title:
  Incorrect network type translation in nicira plugin

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  network types specified as part of the provider networks extensions
  need to be translated into the appropriate network type before
  dispatching the call to the NVP API.

  However this mapping is not working properly for 'flat' and 'vlan' use
  cases at the moment. These values are not being converted into the
  corresponding NVP values, thus leading to failures upon creation of
  the network.

  The correct NVP network type, for both flat and vlan, is 'bridge.
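
  A minimal sketch of the expected translation (constant and function
  names are illustrative, not the plugin's actual code):

  # Sketch: both provider types must map to NVP's 'bridge' type.
  PROVIDER_TO_NVP_TYPE = {
      'flat': 'bridge',
      'vlan': 'bridge',
  }

  def to_nvp_network_type(provider_type):
      try:
          return PROVIDER_TO_NVP_TYPE[provider_type]
      except KeyError:
          raise ValueError('unsupported provider network type: %s'
                           % provider_type)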

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1117769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274034] Re: Neutron firewall anti-spoofing does not prevent ARP poisoning

2014-02-17 Thread Jeremy Stanley
Switched to public following discussion with Mark.

** Information type changed from Private Security to Public

** Tags added: security

** Changed in: ossa
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274034

Title:
  Neutron firewall anti-spoofing does not prevent ARP poisoning

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Security Advisories:
  Invalid

Bug description:
  The neutron firewall driver 'iptables_firewall' does not prevent ARP cache
poisoning.
  When anti-spoofing rules were handled by Nova, the following rules were
applied through the libvirt network filter feature:
  - no-mac-spoofing
  - no-ip-spoofing
  - no-arp-spoofing
  - nova-no-nd-reflection
  - allow-dhcp-server

  Currently, the neutron firewall driver 'iptables_firewall' handles only
  MAC and IP anti-spoofing rules.

  This is a security vulnerability, especially on shared networks.

  Reproduce an ARP cache poisoning and man-in-the-middle:
  - Create a private network/subnet 10.0.0.0/24
  - Start 2 VMs attached to that private network (VM1: IP 10.0.0.3, VM2:
10.0.0.4)
  - Log on to VM1 and install ettercap [1]
  - Launch the command: 'ettercap -T -w dump -M ARP /10.0.0.4/ //'
  - Log on to VM2 (with VNC/spice console) and ping google.fr => ping is OK
  - Go back to VM1 and observe VM2's ping to google.fr passing through VM1
instead of being sent directly to the network gateway and then forwarded by
VM1 to the gw. The ICMP capture looks something like [2]
  - Go back to VM2 and check the ARP table => the MAC address associated with
the GW is the MAC address of VM1

  [1] http://ettercap.github.io/ettercap/
  [2] http://paste.openstack.org/show/62112/
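
  For reference, a sketch of the kind of per-port ARP filtering the
  driver could add, expressed as generated ebtables commands (the chain
  naming and integration point are made up; --arp-ip-src is a real
  ebtables match):

  # Sketch: drop ARP packets whose source protocol address is not the
  # port's fixed IP. Chain name and wiring are hypothetical.
  def arp_protect_rules(tap_dev, fixed_ip):
      chain = 'arp-%s' % tap_dev
      return [
          'ebtables -N %s' % chain,
          'ebtables -A FORWARD -i %s -p ARP -j %s' % (tap_dev, chain),
          'ebtables -A %s -p ARP --arp-ip-src %s -j ACCEPT' % (chain,
                                                               fixed_ip),
          'ebtables -A %s -j DROP' % chain,
      ]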

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281276] [NEW] typos in workflows/views.py

2014-02-17 Thread Cindy Lu
Public bug reported:

caracteristics -> characteristics in get_layout comment

instanciated -> instantiated in get_workflow comment

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- typo mistake
+ typos in workflows/views.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1281276

Title:
  typos in workflows/views.py

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  caracteristics -> characteristics in get_layout comment

  instanciated -> instantiated in get_workflow comment

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1281276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271330] Re: Customize the flavor's label

2014-02-17 Thread George Peristerakis
** Changed in: horizon
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271330

Title:
  Customize the flavor's label

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  In the create instance form, provide a way to customize the flavor's
  option labels

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281287] [NEW] User settings lead to believe the options are per user

2014-02-17 Thread Leandro Rosa
Public bug reported:

The "From here you can modify dashboard settings for your user"
statement on the User Settings page leads one to believe the options are
per user. Actually, they are global configurations.

Steps to reproduce:

1) Log in to Horizon as admin1
2) Go to Settings
3) Set Français (fr)
4) Log out
5) Log in to Horizon as admin2

** Affects: horizon
 Importance: Undecided
 Status: Confirmed

** Changed in: horizon
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1281287

Title:
  User settings lead to believe the options are per user

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  The "From here you can modify dashboard settings for your user"
  statement on the User Settings page leads one to believe the options
  are per user. Actually, they are global configurations.

  Steps to reproduce:

  1) Log in to Horizon as admin1
  2) Go to Settings
  3) Set Français (fr)
  4) Log out
  5) Log in to Horizon as admin2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1281287/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281305] [NEW] NoCloud source broken in Trusty cloud images

2014-02-17 Thread Daniel Manrique
Public bug reported:

I have a test script that verifies virtualization is working correctly;
it does so by downloading and booting a cloud image (as the main use
case is verification of cloud infrastructure). Since I just care about
correctly booting the VM, I don't need to connect to an actual cloud
controller or anything, so my script just builds a noCloud source with a
basic config file that just uses runcmd and final_message to give me
something in the logs to grep for, to ensure the VM initialized
correctly.

This script started failing with trusty cloud images recently :( I
noticed because the text I grep for wasn't being found, which means the
config isn't being read correctly.


Steps to reproduce:

I came up with a script that can be run (assuming a proper qemu setup).
This is very heavily inspired by doc/sources/nocloud/README.rst, which has a
similar example.

# Download the image
IMAGE=trusty-server-cloudimg-i386-disk1.img
wget -c http://cloud-images.ubuntu.com/trusty/current/$IMAGE
# Create a seed.iso per doc/sources/nocloud/README.rst
{ echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
# Two additional tests to ensure these parameters are considered
printf "\n\nruncmd:\n - [sh, -c, echo '  I WILL NOT SEE THIS  ' > /tmp/runcmd.txt ]\n" >> user-data
printf "\n\nfinal_message: FINAL MESSAGE\n" >> user-data
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

# Create qcow image to boot, per the same README file
qemu-img create -f qcow2 -b $IMAGE boot-disk.img

# Boot the image, also per the README
kvm -m 256 \
 -drive file=boot-disk.img,if=virtio \
 -drive file=seed.iso,if=virtio \
 -net nic -net user,hostfwd=tcp::-:22


Expected result (this was verified with a saucy cloud image):
- The node boots and leaves me at the "ubuntu login:" prompt.
- Logging in with ubuntu/passw0rd works
- /tmp/runcmd.txt exists and contains "I WILL NOT SEE THIS"
- grep FINAL /var/log/cloud-init.log returns a match in that file.

Actual result:
- The node takes much longer to boot than the saucy one.
- Unable to log in with ubuntu/passw0rd (suggesting the configuration wasn't
loaded)
- No /tmp/runcmd.txt
- No FINAL text in /var/log/cloud-init.log
- The final two items were determined by shutting down the VM and mounting the 
.img file for manual review.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1281305

Title:
  NoCloud source broken in Trusty cloud images

Status in Init scripts for use on cloud images:
  New

Bug description:
  I have a test script that verifies virtualization is working
  correctly; it does so by downloading and booting a cloud image (as the
  main use case is verification of cloud infrastructure). Since I just
  care about correctly booting the VM, I don't need to connect to an
  actual cloud controller or anything, so my script just builds a
  noCloud source with a basic config file that just uses runcmd and
  final_message to give me something in the logs to grep for, to ensure
  the VM initialized correctly.

  This script started failing with trusty cloud images recently :( I
  noticed because the text I grep for wasn't being found, which means
  the config isn't being read correctly.

  
  Steps to reproduce:

  I came up with a script that can be run (assuming a proper qemu setup).
  This is very heavily inspired by doc/sources/nocloud/README.rst, which has a
similar example.

  # Download the image
  IMAGE=trusty-server-cloudimg-i386-disk1.img
  wget -c http://cloud-images.ubuntu.com/trusty/current/$IMAGE
  # Create a seed.iso per doc/sources/nocloud/README.rst
  { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
  printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
  # Two additional tests to ensure these parameters are considered
  printf "\n\nruncmd:\n - [sh, -c, echo '  I WILL NOT SEE THIS  ' > /tmp/runcmd.txt ]\n" >> user-data
  printf "\n\nfinal_message: FINAL MESSAGE\n" >> user-data
  genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

  # Create qcow image to boot, per the same README file
  qemu-img create -f qcow2 -b $IMAGE boot-disk.img

  # Boot the image, also per the README
  kvm -m 256 \
   -drive file=boot-disk.img,if=virtio \
   -drive file=seed.iso,if=virtio \
   -net nic -net user,hostfwd=tcp::-:22

  
  Expected result (this was verified with a saucy cloud image):
  - The node boots and leaves me at the "ubuntu login:" prompt.
  - Logging in with ubuntu/passw0rd works
  - /tmp/runcmd.txt exists and contains "I WILL NOT SEE THIS"
  - grep FINAL /var/log/cloud-init.log returns a match in that file.

  Actual result:
  - The node 

[Yahoo-eng-team] [Bug 1281346] [NEW] By including local in flake8, certain config files fail

2014-02-17 Thread Jason E. Rist
Public bug reported:

As part of the integration of Tuskar-UI into Horizon, a file is copied
that starts part of the Tuskar-UI infrastructure.  With that, when tests
are run using tox and ./run-tests, they will fail on the flake8
verification due to lack of headers in the configuration files:

** Affects: horizon
 Importance: Undecided
 Assignee: Jason E. Rist (jason-rist)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1281346

Title:
  By including local in flake8, certain config files fail

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  As part of the integration of Tuskar-UI into Horizon, a file is copied
  that starts part of the Tuskar-UI infrastructure.  With that, when
  tests are run using tox and ./run-tests, they will fail on the flake8
  verification due to lack of headers in the configuration files:

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1281346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281357] [NEW] IntegrityError on Nicira get_vcns_router_binding

2014-02-17 Thread Paul Michali
Public bug reported:

Neutron gate for py27 is failing with this error...

2014-02-17 17:22:49.452 | 2014-02-17 17:22:49,107ERROR 
[neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] No DHCP agents are 
associated with network '35b65fc4-9904-4bcc-9d94-2a5af567e910'. Unable to send 
notification for 'network_create_end' with payload: {'network': {'status': 
'ACTIVE', 'subnets': [], 'name': u'net1', 'admin_state_up': True, 'tenant_id': 
u'92dae896-947e-4dd8-b7bc-8b7652eeea3f', 'shared': False, 
'port_security_enabled': True, 'id': '35b65fc4-9904-4bcc-9d94-2a5af567e910'}}
2014-02-17 17:22:49.452 | 2014-02-17 17:22:49,113ERROR 
[neutron.plugins.nicira.vshield.tasks.tasks] Task 
Task-deploying-router1-c71b0936-b0db-441f-81eb-871722e43310-1fb1cf62-97f8-11e3-9688-bc764e050f7e
 encountered exception in <bound method VcnsCallbacks.edge_deploy_result of 
<neutron.plugins.nicira.NeutronServicePlugin.VcnsCallbacks object at 
0x318fc710>> at state 3
2014-02-17 17:22:49.452 | Traceback (most recent call last):
2014-02-17 17:22:49.452 |   File 
neutron/plugins/nicira/vshield/tasks/tasks.py, line 98, in _invoke_monitor
2014-02-17 17:22:49.452 | func(self)
2014-02-17 17:22:49.453 |   File 
neutron/plugins/nicira/NeutronServicePlugin.py, line 1566, in 
edge_deploy_result
2014-02-17 17:22:49.453 | context.session, neutron_router_id)
2014-02-17 17:22:49.454 |   File neutron/plugins/nicira/dbexts/vcns_db.py, 
line 43, in get_vcns_router_binding
2014-02-17 17:22:49.454 | filter_by(router_id=router_id).first())
2014-02-17 17:22:49.454 |   File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2156, in first
2014-02-17 17:22:49.454 | ret = list(self[0:1])
2014-02-17 17:22:49.455 |   File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2023, in __getitem__
2014-02-17 17:22:49.455 | return list(res)
2014-02-17 17:22:49.455 |   File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2226, in __iter__
2014-02-17 17:22:49.456 | self.session._autoflush()
2014-02-17 17:22:49.456 |   File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py,
 line 1127, in _autoflush
2014-02-17 17:22:49.456 | self.flush()
2014-02-17 17:22:49.457 |   File 
neutron/openstack/common/db/sqlalchemy/session.py, line 542, in _wrap
2014-02-17 17:22:49.457 | raise exception.DBError(e)
2014-02-17 17:22:49.457 | DBError: (IntegrityError) foreign key constraint 
failed u'INSERT INTO neutron_nsx_network_mappings (neutron_id, nsx_id) VALUES 
(?, ?)' ('35b65fc4-9904-4bcc-9d94-2a5af567e910', 
u'a035e43d-91b3-479e-a0d0-b98b7c4b29ab')
2014-02-17 17:22:49.458 | 2014-02-17 17:22:49,135ERROR 
[neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] No DHCP agents are 
associated with network 'e970f0a0-d222-4a37-91c8-00eaefaee61b'. Unable to send 
notification for 'network_create_end' with payload: {'network': {'status': 
'ACTIVE', 'subnets': [], 'name': u'net1', 'admin_state_up': True, 'tenant_id': 
u'8e6ecbc9-1902-46ea-9757-a8b2c9e969a5', 'shared': False, 
'port_security_enabled': True, 'id': 'e970f0a0-d222-4a37-91c8-00eaefaee61b'}}
2014-02-17 17:22:49.458 | 2014-02-17 17:22:49,153ERROR 
[neutron.api.v2.resource] create failed

Ref: http://logs.openstack.org/27/41827/21/check/gate-neutron-
python27/fdcefc5/console.html
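
The traceback shows the lookup's implicit autoflush trying to write a
half-built neutron_nsx_network_mappings row. As a sketch of one generic
SQLAlchemy-level mitigation (not necessarily the actual fix), the read
could be wrapped in Session.no_autoflush, which is a real SQLAlchemy
context manager; the model name below is assumed from the traceback:

# Sketch: suppress autoflush so this read does not flush the pending,
# incomplete mapping row. VcnsRouterBinding is assumed from vcns_db.py.
def get_vcns_router_binding(session, router_id):
    with session.no_autoflush:
        return (session.query(VcnsRouterBinding).
                filter_by(router_id=router_id).first())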

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281357

Title:
  IntegrityError on Nicira get_vcns_router_binding

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron gate for py27 is failing with this error...

  2014-02-17 17:22:49.452 | 2014-02-17 17:22:49,107ERROR 
[neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] No DHCP agents are 
associated with network '35b65fc4-9904-4bcc-9d94-2a5af567e910'. Unable to send 
notification for 'network_create_end' with payload: {'network': {'status': 
'ACTIVE', 'subnets': [], 'name': u'net1', 'admin_state_up': True, 'tenant_id': 
u'92dae896-947e-4dd8-b7bc-8b7652eeea3f', 'shared': False, 
'port_security_enabled': True, 'id': '35b65fc4-9904-4bcc-9d94-2a5af567e910'}}
  2014-02-17 17:22:49.452 | 2014-02-17 17:22:49,113ERROR 
[neutron.plugins.nicira.vshield.tasks.tasks] Task 
Task-deploying-router1-c71b0936-b0db-441f-81eb-871722e43310-1fb1cf62-97f8-11e3-9688-bc764e050f7e
 encountered exception in <bound method VcnsCallbacks.edge_deploy_result of 
<neutron.plugins.nicira.NeutronServicePlugin.VcnsCallbacks object at 
0x318fc710>> at state 3
  2014-02-17 17:22:49.452 | Traceback (most recent call last):
  2014-02-17 17:22:49.452 |   File 
neutron/plugins/nicira/vshield/tasks/tasks.py, line 98, 

[Yahoo-eng-team] [Bug 1281358] [NEW] volume name field doesn't repopulate

2014-02-17 Thread Cindy Lu
Public bug reported:

==Steps to reproduce==

1. Project > Instances > Create Snapshot for any instance so that you
have at least one snapshot

2. Go to Project > Volumes > Create Volume

3. For Volume Source, select "Image"

4. In "Use image as a source", make sure there are at least 2 choices; select
any one.
 ==> You will notice that Volume name and size will automatically be
populated

5. Now select another image source.
==> Volume name and size are not updated

Alternate instructions:
1. Go to Project > Instances > Create Volume
2. For Volume Source, select "Image"
3. For "Use image as a source", make sure you have more than 1 choice
4. Select one; notice that Volume name and size are auto-populated
5. Now select another image as source
6. The 2 fields are not repopulated

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1281358

Title:
  volume name field doesn't repopulate

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  ==Steps to reproduce==

  1. Project > Instances > Create Snapshot for any instance so that you
  have at least one snapshot

  2. Go to Project > Volumes > Create Volume

  3. For Volume Source, select "Image"

  4. In "Use image as a source", make sure there are at least 2 choices; select
any one.
   ==> You will notice that Volume name and size will automatically be
populated

  5. Now select another image source.
  ==> Volume name and size are not updated

  Alternate instructions:
  1. Go to Project > Instances > Create Volume
  2. For Volume Source, select "Image"
  3. For "Use image as a source", make sure you have more than 1 choice
  4. Select one; notice that Volume name and size are auto-populated
  5. Now select another image as source
  6. The 2 fields are not repopulated

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1281358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268439] Re: range method is not the same in py3.x and py2.x

2014-02-17 Thread Steve Baker
** No longer affects: python-heatclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268439

Title:
  range method is not the same in py3.x and py2.x

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Keystone:
  In Progress
Status in Python client library for Neutron:
  Invalid
Status in Python client library for Swift:
  Fix Committed
Status in OpenStack Object Storage (Swift):
  In Progress

Bug description:
  In py3.x, range() is what xrange() was in py2.x.
  In py3.x, if you want to get a list, you must use:
  list(range(value))

  Reviewing the code, I found that many places use range() as if it
  returned a list; in a py3.x environment this will cause errors,
  so we must fix this issue.
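
  A minimal illustration of the difference:

  # py2.x: range() returns a list; xrange() returns a lazy iterator.
  # py3.x: range() returns a lazy range object (py2's xrange) and
  # xrange() no longer exists, so code needing a list must convert.
  r = range(3)
  print(r)              # py2: [0, 1, 2]    py3: range(0, 3)
  print(list(r))        # both: [0, 1, 2]
  # r + [3] raises TypeError on py3.x but returns [0, 1, 2, 3] on py2.x;
  # the portable spelling is:
  print(list(r) + [3])  # [0, 1, 2, 3]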

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1268439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279683] Re: The problem generated by deleting the DHCP port by mistake

2014-02-17 Thread shihanzhang
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279683

Title:
  The problem generated by deleting the DHCP port by mistake

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Tempest:
  New

Bug description:
  In my environment, I deleted the DHCP port by mistake, then I found the VM in 
the primary subnet could not get an IP. The reason is that when a DHCP port is 
deleted, neutron will recreate the DHCP port automatically, but the old VIF TAP 
device will not be deleted, so you will see one IP on two VIF TAP devices
   
  root@shz-dev:~# ip netns exec qdhcp-ab049276-3b7c-41a2-b62c-3f587a02b0a6 
ifconfig
  loLink encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:576 (576.0 B)  TX bytes:576 (576.0 B)

  tap4694b3c4-a6 Link encap:Ethernet  HWaddr fa:16:3e:41:f6:fc  
inet addr:50.50.50.2  Bcast:50.50.50.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe41:f6fc/64 Scope:Link
UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:796 (796.0 B)

  tapa546a666-31 Link encap:Ethernet  HWaddr fa:16:3e:98:dd:a7  
inet addr:50.50.50.2  Bcast:50.50.50.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe98:dda7/64 Scope:Link
UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:496 (496.0 B)

  Even if the problem is caused by an erroneous operation, I think the DHCP
  port should not be allowed to be deleted, just as the port on a router
  cannot be deleted.
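
  A sketch of the kind of guard being asked for (the exception class and
  placement are hypothetical, not an actual neutron patch; 'network:dhcp'
  is the real device_owner value for DHCP ports):

  # Sketch: refuse to delete DHCP-owned ports unless explicitly forced.
  DEVICE_OWNER_DHCP = 'network:dhcp'

  class DhcpPortInUse(Exception):
      pass

  def delete_port(port, force=False):
      if port.get('device_owner') == DEVICE_OWNER_DHCP and not force:
          raise DhcpPortInUse('port %s is owned by DHCP; not deleting'
                              % port['id'])
      # ... continue with the normal deletion path ...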

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262678] Re: Missing firewall_driver with ml2 breaks neutron securitygroups API

2014-02-17 Thread Mathieu Gagné
** Changed in: puppet-neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262678

Title:
  Missing firewall_driver with ml2 breaks neutron securitygroups API

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Manuals:
  Fix Released
Status in Puppet module for Neutron:
  Fix Committed
Status in puppet-neutron havana series:
  Fix Released

Bug description:
  When using nova 'security_group_api=neutron' and neutron
  'core_plugin=neutron.plugins.ml2.plugin.Ml2Plugin' with the 'vlan'
  type_driver/tenant_network_type, no securitygroup/firewall_driver is
  set in /etc/neutron/plugins.ini (which is symlinked to
  /etc/neutron/plugins/ml2/ml2_conf.ini).  This causes the 'neutron
  security-group-list' command to return 404 Not Found.

  Adding these two lines to ml2_conf.ini and restarting neutron-server
  causes the 'neutron security-group-list' command to function properly:

  [securitygroup]
  
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

  I have NOT confirmed full functionality (firewall operation) with this
  change -- I've only tested that the API now exists.

  Environment: Using RDO Havana on CentOS 6.5 with very recent patches.
  nova-api and neutron-server on the same machine, deployed entirely via
  puppet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1262678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281395] [NEW] Debug option in ./run_tests.sh

2014-02-17 Thread Maithem
Public bug reported:

In other projects like nova, you can run tests with debug mode by
passing a -d flag to ./run_tests.sh, in horizon this doesn't exist. Not
sure whether this is a bug or not, but it will be extremely useful to
have it.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1281395

Title:
  Debug option in ./run_tests.sh

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In other projects like nova, you can run tests with debug mode by
  passing a -d flag to ./run_tests.sh, in horizon this doesn't exist.
  Not sure whether this is a bug or not, but it will be extremely useful
  to have it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1281395/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281351] Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern volume status not available

2014-02-17 Thread John Griffith
I had decided that tonight was the night I was going to fix this on the
Cinder side, but alas I'm stuck.

The problem here is that we run into the odd case with nova booting an
instance from a volume: the compute API starts up the process, grabs the
volume and makes the attach (so now the volume status is "in-use").
Then while it's booting something goes very wrong in nova (as can be
seen by the multitude of traces in the n-api logs), so we punch out and
call tearDown.

The problem is we never cleaned up the volume status, so it's still
listed as "in-use", but the tearDown is a neat little loop on
thing.delete, so it's not accounting for things like removing
attachments etc.

I was thinking of doCleanup, but realized that won't work because it's
called after tearDown, which is what fails. So the only thing I can
think of that's practical as a hack is to add an "if isinstance Volume"
check in the delete loop and change the state using the admin API to
"available" so that tearDown can do its thing.
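
Roughly, the hack would look like this (Volume and reset_volume_status
are stand-ins for whatever the framework actually exposes, not tempest's
real API):

# Sketch of the proposed tearDown hack: reset stuck volumes to
# 'available' via the admin API before the generic delete loop runs.
def tear_down(things, admin_client):
    for thing in things:
        if isinstance(thing, Volume):
            admin_client.reset_volume_status(thing.id, 'available')
        thing.delete()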

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281351

Title:
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern volume
  status not available

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  During a run of check-tempest-dsvm-postgres-full

  
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:41,733 Waiting for Server: 
scenario-server--1813410115 to get to NotFound status. Currently in ACTIVE 
status
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:41,733 Sleeping for 1 seconds
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:42,734 
  2014-02-17 22:04:31.037 | REQ: curl -i 
'http://127.0.0.1:8774/v2/3437c74f89904598a189851959e53779/servers/72361b22-1ea4-49ce-be61-003467145fe5'
 -X GET -H X-Auth-Project-Id: TestVolumeBootPattern-1661167687 -H 
User-Agent: python-novaclient -H Accept: application/json -H X-Auth-Token: 
MIISvQYJKoZIhvcNAQcCoIISrjCCEqoCAQExCTAHBgUrDgMCGjCCERMGCSqGSIb3DQEHAaCCEQQEghEAeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wMi0xN1QyMjowMzowOC4yMjQyMTgiLCAiZXhwaXJlcyI6ICIyMDE0LTAyLTE3VDIzOjAzOjA4WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlRlc3RWb2x1bWVCb290UGF0dGVybi0xNjYxMTY3Njg3LWRlc2MiLCAiZW5hYmxlZCI6IHRydWUsICJpZCI6ICIzNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJuYW1lIjogIlRlc3RWb2x1bWVCb290UGF0dGVybi0xNjYxMTY3Njg3In19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzQzN2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzQzN
 
2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkiLCAiaWQiOiAiZDg2ZWYzMjI1M2U0NDBkYTk0YjMxMmQxNjdkNTE3NTAiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJpZCI6ICJkYWIxMWRhMGY0MWQ0OGRlYWQwNGE1YjQ0OWJiOTdiMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YyLzM0MzdjNzRmODk5MDQ1OThhMTg5ODUxOTU5ZTUzNzc5In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVydjIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMiLCAiaWQiOiAiYzY2ZTU2ZjhlODE1NDZiNGE2ZTgzNWRkZjhkNTY0NTkiLCAicHVibG
 
ljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92MyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRldjMiLCAibmFtZSI6ICJub3ZhdjMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjMzMzMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjMzMzMiLCAiaWQiOiAiZDUxMTgwMDM2YWQ3NGYzZGEyZWVmZDJmM2M0MmUzZWYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6MzMzMyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJzMyIsICJuYW1lIjogInMzIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5MjkyIiwgImlkIjogImNkN2E2NGIyNDA2OTRkZTM4Y2FmMWE0NGQ5OTE2MGI1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjkyOTIifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaW1hZ2UiLCAibmFtZSI6ICJnbGFuY2UifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzcvIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc3LyIsICJpZCI6ICJjMGEyN2Q4ZTQ4NDc0OWE
 

[Yahoo-eng-team] [Bug 1281351] Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern volume status not available

2014-02-17 Thread John Griffith
** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281351

Title:
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern volume
  status not available

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  In Progress

Bug description:
  During a run of check-tempest-dsvm-postgres-full

  
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:41,733 Waiting for Server: 
scenario-server--1813410115 to get to NotFound status. Currently in ACTIVE 
status
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:41,733 Sleeping for 1 seconds
  2014-02-17 22:04:31.036 | 2014-02-17 22:03:42,734 
  2014-02-17 22:04:31.037 | REQ: curl -i 
'http://127.0.0.1:8774/v2/3437c74f89904598a189851959e53779/servers/72361b22-1ea4-49ce-be61-003467145fe5'
 -X GET -H X-Auth-Project-Id: TestVolumeBootPattern-1661167687 -H 
User-Agent: python-novaclient -H Accept: application/json -H X-Auth-Token: 
MIISvQYJKoZIhvcNAQcCoIISrjCCEqoCAQExCTAHBgUrDgMCGjCCERMGCSqGSIb3DQEHAaCCEQQEghEAeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wMi0xN1QyMjowMzowOC4yMjQyMTgiLCAiZXhwaXJlcyI6ICIyMDE0LTAyLTE3VDIzOjAzOjA4WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlRlc3RWb2x1bWVCb290UGF0dGVybi0xNjYxMTY3Njg3LWRlc2MiLCAiZW5hYmxlZCI6IHRydWUsICJpZCI6ICIzNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJuYW1lIjogIlRlc3RWb2x1bWVCb290UGF0dGVybi0xNjYxMTY3Njg3In19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzQzN2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzQzN
 
2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkiLCAiaWQiOiAiZDg2ZWYzMjI1M2U0NDBkYTk0YjMxMmQxNjdkNTE3NTAiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zNDM3Yzc0Zjg5OTA0NTk4YTE4OTg1MTk1OWU1Mzc3OSIsICJpZCI6ICJkYWIxMWRhMGY0MWQ0OGRlYWQwNGE1YjQ0OWJiOTdiMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YyLzM0MzdjNzRmODk5MDQ1OThhMTg5ODUxOTU5ZTUzNzc5In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVydjIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMiLCAiaWQiOiAiYzY2ZTU2ZjhlODE1NDZiNGE2ZTgzNWRkZjhkNTY0NTkiLCAicHVibG
 
ljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92MyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRldjMiLCAibmFtZSI6ICJub3ZhdjMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjMzMzMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjMzMzMiLCAiaWQiOiAiZDUxMTgwMDM2YWQ3NGYzZGEyZWVmZDJmM2M0MmUzZWYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6MzMzMyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJzMyIsICJuYW1lIjogInMzIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5MjkyIiwgImlkIjogImNkN2E2NGIyNDA2OTRkZTM4Y2FmMWE0NGQ5OTE2MGI1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjkyOTIifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaW1hZ2UiLCAibmFtZSI6ICJnbGFuY2UifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzcvIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc3LyIsICJpZCI6ICJjMGEyN2Q4ZTQ4NDc0OWE
 
2Yjk1NTI3YWU5MTU0NWI1YiIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc3LyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJtZXRlcmluZyIsICJuYW1lIjogImNlaWxvbWV0ZXIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjgwMDAvdjEiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjgwMDAvdjEiLCAiaWQiOiAiYTAyMDYxNmQ3YzFiNDk2ZjkwMzAyMGM2OGVlMDU2ZDQiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODAwMC92MSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjbG91ZGZvcm1hdGlvbiIsICJuYW1lIjogImhlYXQtY2ZuIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YxLzM0MzdjNzRmODk5MDQ1OThhMTg5ODUxOTU5ZTUzNzc5IiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YxLzM0MzdjNzRmODk5MDQ1OThhMTg5ODUxOTU5ZTUzNzc5IiwgImlkIjogIjUzMmU2ZWQ4ZGEyMTQwMzE4N2Y0NmRjMDNmN2M4ODQ4IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzYvdjEvMzQzN2M3NGY4OTkwNDU5OGExODk4NTE5NTllNTM3NzkifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAidm9sdW1l
 

[Yahoo-eng-team] [Bug 1281440] [NEW] should handle boolean string parameters through create multiple servers API

2014-02-17 Thread Ken'ichi Ohmichi
Public bug reported:

If a false string ("False") is specified as the "return_reservation_id"
parameter of the create-multiple-servers API, Nova considers it as True.

On the other hand, Nova can consider a false string as false in the case of
the "on_shared_storage" parameter of the evacuate API.
That behavior looks like an API inconsistency.
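
A sketch of consistent handling using the bool_from_string helper from
oslo's strutils (which nova carried as nova.openstack.common.strutils at
the time); the modern import path is shown here:

from oslo_utils import strutils

# bool("False") is True, which is the trap the API falls into;
# bool_from_string() parses the string's meaning instead.
print(bool('False'))                       # True  (the buggy behavior)
print(strutils.bool_from_string('False'))  # False (the expected result)
print(strutils.bool_from_string('True'))   # True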

** Affects: nova
 Importance: Undecided
 Assignee: Ken'ichi Ohmichi (oomichi)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Ken'ichi Ohmichi (oomichi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281440

Title:
  should handle boolean string parameters through create multiple
  servers API

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If a false string ("False") is specified as the "return_reservation_id"
  parameter of the create-multiple-servers API, Nova considers it as True.

  On the other hand, Nova can consider a false string as false in the case of
  the "on_shared_storage" parameter of the evacuate API.
  That behavior looks like an API inconsistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp