[Yahoo-eng-team] [Bug 1332382] Re: block device mapping timeout in compute

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332382

Title:
  block device mapping timeout in compute

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  
  When booting instances passing in --block-device and increasing the volume
size, instances can go into an error state if the volume takes longer to
create than the hard-coded value set in:

  nova/compute/manager.py

def _await_block_device_map_created(self, context, vol_id, max_tries=180,
  wait_between=1):

  
  Here is the command used to repro:

  nova boot --flavor ca8d889e-6a4e-48f8-81ce-0fa2d153db16 --image 
438b3f1f-1b23-4b8d-84e1-786ffc73a298  
  --block-device 
source=image,id=438b3f1f-1b23-4b8d-84e1-786ffc73a298,dest=volume,size=128  
  --nic net-id=5f847661-edef-4dff-9f4b-904d1b3ac422 --security-groups 
d9ce9fe3-983f-42a8-899e-609c01977e32  
  Test_Image_Instance

  The max_tries value should be made configurable.

  Looking through the different releases: Grizzly was 30, Havana was 60,
  and Icehouse is 180.
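
  A minimal sketch of how that value could be exposed through oslo.config
  (the option name block_device_retries and where it would be registered are
  illustrative assumptions, not the actual Nova change):

  from oslo.config import cfg

  CONF = cfg.CONF

  # Hypothetical option; the default mirrors the current hard-coded Icehouse value.
  block_device_opts = [
      cfg.IntOpt('block_device_retries',
                 default=180,
                 help='Number of times to poll Cinder for the volume to become '
                      'available before failing the boot request.'),
  ]
  CONF.register_opts(block_device_opts)

  # _await_block_device_map_created() would then default its max_tries
  # argument to CONF.block_device_retries instead of the literal 180.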

  Here is a traceback:

  2014-06-19 06:54:24.303 17578 ERROR nova.compute.manager 
[req-050fc984-cfa2-4c34-9cde-c8aeea65e6ed  
  d0b8f2c3cf70445baae994004e602e11 1e83429a8157489fb7ce087bd037f5d9] [instance: 
 
  74f612ea-9722-4796-956f-32defd417000] Instance failed block device setup
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  Traceback (most recent call last):
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1394,  
  in _prep_block_device
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  self._await_block_device_map_created))
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  File /usr/lib/python2.7/dist-packages/nova/virt/block_device.py, line 283,  
  in attach_block_devices
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  block_device_mapping)
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  File /usr/lib/python2.7/dist-packages/nova/virt/block_device.py, line 238,  
  in attach
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  wait_func(context, vol['id'])
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 909,  
  in _await_block_device_map_created
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  attempts=attempts)
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  VolumeNotCreated: Volume 8489549e-d23e-45c2-ae6e-7fdb1a9c30d0 did not finish  
  being created even after we waited 65 seconds or 60 attempts.
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1332382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322926] Re: Hyper-V driver volumes are attached incorrectly when multiple iSCSI servers are present

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322926

Title:
  Hyper-V driver volumes are attached incorrectly when multiple iSCSI
  servers are present

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Hyper-V can change the order of the mounted drives when rebooting a
  host and thus passthrough disks can be assigned to the wrong instance
  resulting in a critical scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1322926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407736] Re: python unit test jobs failing due to subunit log being too big

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407736

Title:
  python unit test jobs failing due to subunit log being too big

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in Database schema migration for SQLAlchemy:
  Fix Committed

Bug description:
  http://logs.openstack.org/60/144760/1/check/gate-nova-
  python26/6eb86b3/console.html#_2015-01-05_10_20_01_178

  2015-01-05 10:20:01.178 | + [[ 72860 -gt 5 ]]
  2015-01-05 10:20:01.178 | + echo
  2015-01-05 10:20:01.178 | 
  2015-01-05 10:20:01.178 | + echo 'sub_unit.log was > 50 MB of uncompressed 
data!!!'
  2015-01-05 10:20:01.178 | sub_unit.log was > 50 MB of uncompressed data!!!
  2015-01-05 10:20:01.179 | + echo 'Something is causing tests for this project 
to log significant amounts'
  2015-01-05 10:20:01.179 | Something is causing tests for this project to log 
significant amounts
  2015-01-05 10:20:01.179 | + echo 'of data. This may be writers to python 
logging, stdout, or stderr.'
  2015-01-05 10:20:01.179 | of data. This may be writers to python logging, 
stdout, or stderr.
  2015-01-05 10:20:01.179 | + echo 'Failing this test as a result'
  2015-01-05 10:20:01.179 | Failing this test as a result
  2015-01-05 10:20:01.179 | + echo

  Looks like the subunit log is around 73 MB. This could be due to the
  new pip, because I'm seeing a ton of these:

  DeprecationWarning: `require` parameter is deprecated. Use
  EntryPoint._load instead.

  The latest pip was released on 1/3/15:

  https://pypi.python.org/pypi/pip/6.0.6

  That's also when those warnings showed up:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGVwcmVjYXRpb25XYXJuaW5nOiBgcmVxdWlyZWAgcGFyYW1ldGVyIGlzIGRlcHJlY2F0ZWQuIFVzZSBFbnRyeVBvaW50Ll9sb2FkIGluc3RlYWQuXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgYW5kIHByb2plY3Q6XCJvcGVuc3RhY2svbm92YVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIwNDc2OTk3NTI3fQ==
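
  If the flood really is that DeprecationWarning, one way to keep it out of
  the captured subunit stream would be a warnings filter registered in the
  test fixtures; a hedged sketch (where exactly it would live in Nova's test
  setup is an assumption):

  import warnings

  # Silence the pkg_resources DeprecationWarning that newer pip/setuptools
  # emit, so it is not written thousands of times into the subunit log.
  warnings.filterwarnings(
      'ignore',
      message=r'`require` parameter is deprecated\. Use EntryPoint\._load instead\.',
      category=DeprecationWarning)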

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374458] Re: test_encrypted_cinder_volumes_luks fails to detach encrypted volume

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374458

Title:
  test_encrypted_cinder_volumes_luks fails to detach encrypted volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/98/124198/3/check/check-grenade-dsvm-
  icehouse/c89f18f/console.html#_2014-09-26_03_38_56_940

  2014-09-26 03:38:57.259 | Traceback (most recent call last):
  2014-09-26 03:38:57.259 |   File tempest/scenario/manager.py, line 142, 
in delete_wrapper
  2014-09-26 03:38:57.259 | delete_thing(*args, **kwargs)
  2014-09-26 03:38:57.259 |   File 
tempest/services/volume/json/volumes_client.py, line 108, in delete_volume
  2014-09-26 03:38:57.259 | resp, body = self.delete(volumes/%s % 
str(volume_id))
  2014-09-26 03:38:57.259 |   File tempest/common/rest_client.py, line 
225, in delete
  2014-09-26 03:38:57.259 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2014-09-26 03:38:57.259 |   File tempest/common/rest_client.py, line 
435, in request
  2014-09-26 03:38:57.259 | resp, resp_body)
  2014-09-26 03:38:57.259 |   File tempest/common/rest_client.py, line 
484, in _error_checker
  2014-09-26 03:38:57.259 | raise exceptions.BadRequest(resp_body)
  2014-09-26 03:38:57.259 | BadRequest: Bad request
  2014-09-26 03:38:57.260 | Details: {u'message': u'Invalid volume: Volume 
status must be available or error, but current status is: in-use', u'code': 400}
  2014-09-26 03:38:57.260 | }}}
  2014-09-26 03:38:57.260 | 
  2014-09-26 03:38:57.260 | traceback-2: {{{
  2014-09-26 03:38:57.260 | Traceback (most recent call last):
  2014-09-26 03:38:57.260 |   File tempest/common/rest_client.py, line 
561, in wait_for_resource_deletion
  2014-09-26 03:38:57.260 | raise exceptions.TimeoutException(message)
  2014-09-26 03:38:57.260 | TimeoutException: Request timed out
  2014-09-26 03:38:57.260 | Details: 
(TestEncryptedCinderVolumes:_run_cleanups) Failed to delete resource 
704461b6-3421-4959-8113-a011e6410ede within the required time (196 s).
  2014-09-26 03:38:57.260 | }}}
  2014-09-26 03:38:57.260 | 
  2014-09-26 03:38:57.261 | traceback-3: {{{
  2014-09-26 03:38:57.261 | Traceback (most recent call last):
  2014-09-26 03:38:57.261 |   File 
tempest/services/volume/json/admin/volume_types_client.py, line 97, in 
delete_volume_type
  2014-09-26 03:38:57.261 | resp, body = self.delete(types/%s % 
str(volume_id))
  2014-09-26 03:38:57.261 |   File tempest/common/rest_client.py, line 
225, in delete
  2014-09-26 03:38:57.261 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2014-09-26 03:38:57.261 |   File tempest/common/rest_client.py, line 
435, in request
  2014-09-26 03:38:57.261 | resp, resp_body)
  2014-09-26 03:38:57.261 |   File tempest/common/rest_client.py, line 
484, in _error_checker
  2014-09-26 03:38:57.261 | raise exceptions.BadRequest(resp_body)
  2014-09-26 03:38:57.261 | BadRequest: Bad request
  2014-09-26 03:38:57.261 | Details: {u'message': u'Target volume type is 
still in use.', u'code': 400}
  2014-09-26 03:38:57.262 | }}}
  2014-09-26 03:38:57.262 | 
  2014-09-26 03:38:57.262 | Traceback (most recent call last):
  2014-09-26 03:38:57.262 |   File tempest/test.py, line 142, in wrapper
  2014-09-26 03:38:57.262 | return f(self, *func_args, **func_kwargs)
  2014-09-26 03:38:57.262 |   File 
tempest/scenario/test_encrypted_cinder_volumes.py, line 56, in 
test_encrypted_cinder_volumes_luks
  2014-09-26 03:38:57.262 | self.attach_detach_volume()
  2014-09-26 03:38:57.262 |   File 
tempest/scenario/test_encrypted_cinder_volumes.py, line 49, in 
attach_detach_volume
  2014-09-26 03:38:57.262 | self.nova_volume_detach()
  2014-09-26 03:38:57.262 |   File tempest/scenario/manager.py, line 439, 
in nova_volume_detach
  2014-09-26 03:38:57.262 | 'available')
  2014-09-26 03:38:57.262 |   File 
tempest/services/volume/json/volumes_client.py, line 181, in 
wait_for_volume_status
  2014-09-26 03:38:57.263 | raise exceptions.TimeoutException(message)
  2014-09-26 03:38:57.263 | TimeoutException: Request timed out
  2014-09-26 03:38:57.263 | Details: Volume 
704461b6-3421-4959-8113-a011e6410ede failed to reach available status within 
the required time (196 s).

  
  

[Yahoo-eng-team] [Bug 1399498] Re: centos 7 unit test fails

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399498

Title:
  centos 7 unit test fails

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The CentOS 7 unit test fails.

  To make this test pass:
  export OPENSSL_ENABLE_MD5_VERIFY=1
  export NSS_HASH_ALG_SUPPORT=+MD5

  
  # ./run_tests.sh -V -s nova.tests.unit.test_crypto.X509Test
  Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit 
--concurrency 0  nova.tests.unit.test_crypto.X509Test'`
  nova.tests.unit.test_crypto.X509Test
  test_encrypt_decrypt_x509 OK  2.73
  test_can_generate_x509FAIL

  Slowest 2 tests took 6.24 secs:
  nova.tests.unit.test_crypto.X509Test
  test_can_generate_x5093.51
  test_encrypt_decrypt_x509 2.73

  ==
  FAIL: nova.tests.unit.test_crypto.X509Test.test_can_generate_x509
  --

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385484] Re: Failed to start nova-compute after evacuate

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385484

Title:
  Failed to start nova-compute after evacuate

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  After evacuating successfully and then restarting the failed host to get
  it back, the user will run into the error below.


  <179>Sep 23 01:48:35 node-1 nova-compute 2014-09-23 01:48:35.346 13206 ERROR 
nova.openstack.common.threadgroup [-] error removing image
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, line 483, 
in run_service
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 163, in start
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1018, in 
init_host
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self._destroy_evacuated_instances(context)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 712, in 
_destroy_evacuated_instances
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
bdi, destroy_disks)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 962, in 
destroy
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
destroy_disks, migrate_data)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 1080, in 
cleanup
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self._cleanup_rbd(instance)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 1090, in 
_cleanup_rbd
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
LibvirtDriver._get_rbd_driver().cleanup_volumes(instance)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py, line 238, in 
cleanup_volumes
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self.rbd.RBD().remove(client.ioctx, volume)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/rbd.py, line 300, in remove
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
raise make_ex(ret, 'error removing image')
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 

[Yahoo-eng-team] [Bug 1375467] Re: db deadlock on _instance_update()

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375467

Title:
  db deadlock on _instance_update()

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Continuing from the same pattern as
  https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
  unhandled deadlocks on derivatives of _instance_update(), such as the
  stack trace below.  As _instance_update() is a point of transaction
  demarcation based on its use of get_session(), the @_retry_on_deadlock
  decorator should be added to this method.
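
  A minimal sketch of what such a retry decorator looks like (the exception
  import path and the retry/backoff parameters are assumptions for
  illustration; Nova carries its own _retry_on_deadlock in
  nova/db/sqlalchemy/api.py):

  import functools
  import time

  from nova.openstack.common.db import exception as db_exc  # assumed path

  def retry_on_deadlock(func, max_retries=5, delay=0.5):
      """Re-run a DB API call when the backend reports a deadlock."""
      @functools.wraps(func)
      def wrapped(*args, **kwargs):
          for attempt in range(max_retries):
              try:
                  return func(*args, **kwargs)
              except db_exc.DBDeadlock:
                  if attempt == max_retries - 1:
                      raise
                  time.sleep(delay)  # brief backoff before retrying the txn
      return wrapped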

  Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 133, in _dispatch_and_reply\
  incoming.message))\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 176, in _dispatch\
  return self._do_dispatch(endpoint, method, ctxt, args)\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 122, in _do_dispatch\
  result = getattr(endpoint, method)(ctxt, **new_args)\
  File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 887, 
in instance_update\
  service)\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py, line 
139, in inner\
  return func(*args, **kwargs)\
  File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 130, 
in instance_update\
  context, instance_uuid, updates)\
  File /usr/lib/python2.7/site-packages/nova/db/api.py, line 742, in 
instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 164, 
in wrapper\
  return f(*args, **kwargs)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2208, 
in instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2299, 
in _instance_update\
  session.add(instance_ref)\
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 
447, in __exit__\
  self.rollback()\
  File /usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, 
line 58, in __exit__\
  compat.reraise(exc_type, exc_value, exc_tb)\
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 
444, in __exit__\
  self.commit()\
  File 
/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py, line 443, in _wrap\
  _raise_if_deadlock_error(e, self.bind.dialect.name)\
  File 
/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py, line 427, in _raise_if_deadlock_error\
  raise exception.DBDeadlock(operational_error)\
  DBDeadlock: (OperationalError) (1213, \'Deadlock found when trying to get 
lock; try restarting transaction\') None None\

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340411] Re: Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral RBD

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340411

Title:
  Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral
  RBD

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Greetings,

  
  We can't evacuate instances from a failed compute node using shared
storage. We are using Ceph ephemeral RBD as the storage medium.

  
  Steps to reproduce:

  nova evacuate --on-shared-storage 6e2081ec-2723-43c7-a730-488bb863674c node-24
  or
  POST to http://ip-address:port/v2/tenant_id/servers/server_id/action with
  {"evacuate": {"host": "node-24", "onSharedStorage": 1}}

  
  Here is what shows up in the logs:

  
  <180>Jul 10 20:36:48 node-24 nova-nova.compute.manager AUDIT: Rebuilding 
instance
  <179>Jul 10 20:36:48 node-24 nova-nova.compute.manager ERROR: Setting 
instance vm_state to ERROR
  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 5554, 
in _error_out_instance_on_exception
  yield
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2434, 
in rebuild_instance
  _(Invalid state of instance files on shared
  InvalidSharedStorage: Invalid state of instance files on shared storage
  <179>Jul 10 20:36:49 node-24 nova-oslo.messaging.rpc.dispatcher ERROR: 
Exception during message handling: Invalid state of instance files on shared 
storage
  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 133, in _dispatch_and_reply
  incoming.message))
File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 393, 
in decorated_function
  return function(self, context, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py, line 
139, in inner
  return func(*args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/exception.py, line 88, in 
wrapped
  payload)
File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File /usr/lib/python2.7/dist-packages/nova/exception.py, line 71, in 
wrapped
  return f(self, context, *args, **kw)
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 274, 
in decorated_function
  pass
File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 260, 
in decorated_function
  return function(self, context, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 327, 
in decorated_function
  function(self, context, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 303, 
in decorated_function
  e, sys.exc_info())
File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 290, 
in decorated_function
  return function(self, context, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2434, 
in rebuild_instance
  _(Invalid state of instance files on shared
  InvalidSharedStorage: Invalid state of instance files on shared storage

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314677] Re: nova-cells fails when using JSON file to store cell information

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314677

Title:
  nova-cells fails when using JSON file to store cell information

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Released

Bug description:
  As recommended in http://docs.openstack.org/havana/config-
  reference/content/section_compute-cells.html#cell-config-optional-json
  I'm creating the nova-cells config with the cell information stored in
  a json file. However, when I do this nova-cells fails to start with
  this error in the logs:

  2014-04-29 11:52:05.240 16759 CRITICAL nova [-] __init__() takes exactly 3 
arguments (1 given)
  2014-04-29 11:52:05.240 16759 TRACE nova Traceback (most recent call last):
  2014-04-29 11:52:05.240 16759 TRACE nova   File /usr/bin/nova-cells, line 
10, in <module>
  2014-04-29 11:52:05.240 16759 TRACE nova sys.exit(main())
  2014-04-29 11:52:05.240 16759 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/cmd/cells.py, line 40, in main
  2014-04-29 11:52:05.240 16759 TRACE nova manager=CONF.cells.manager)
  2014-04-29 11:52:05.240 16759 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 257, in create
  2014-04-29 11:52:05.240 16759 TRACE nova db_allowed=db_allowed)
  2014-04-29 11:52:05.240 16759 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 139, in __init__
  2014-04-29 11:52:05.240 16759 TRACE nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
  2014-04-29 11:52:05.240 16759 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/cells/manager.py, line 87, in __init__
  2014-04-29 11:52:05.240 16759 TRACE nova self.state_manager = 
cell_state_manager()
  2014-04-29 11:52:05.240 16759 TRACE nova TypeError: __init__() takes exactly 
3 arguments (1 given)

  
  I have had a dig into the code and it appears that CellsManager creates an 
instance of CellStateManager with no arguments. CellStateManager __new__ runs 
and creates an instance of CellStateManagerFile which runs __new__ and __init__ 
with cell_state_cls and cells_config_path set. At this point __new__ returns 
CellStateManagerFile and the new instance's __init__() method is invoked 
(CellStateManagerFile.__init__) with the original arguments (there weren't any) 
which then results in the stack trace.
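
  A stripped-down illustration of that Python behaviour (the class names here
  are placeholders, not the real Nova code): when __new__ returns an instance
  of a subclass, Python still calls that instance's __init__ with the
  arguments originally passed to the constructor.

  class Base(object):
      def __new__(cls, *args, **kwargs):
          if cls is Base:
              # Pick a subclass and build it with extra arguments...
              return FileBacked("cell_state_cls", "/etc/nova/cells.json")
          return super(Base, cls).__new__(cls)

  class FileBacked(Base):
      def __init__(self, cell_state_cls, config_path):
          self.config_path = config_path

  # Base() hands back a FileBacked instance, and Python then re-invokes
  # FileBacked.__init__ on it with the *original* (empty) argument list:
  # TypeError: __init__() takes exactly 3 arguments (1 given)
  Base()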

  It seems reasonable for CellStateManagerFile to derive the
  cells_config_path info for itself so I've patched it locally with

  === modified file 'state.py'
  --- state.py  2014-04-30 15:10:16 +
  +++ state.py  2014-04-30 15:10:26 +
  @@ -155,7 +155,7 @@
   config_path = CONF.find_file(cells_config)
   if not config_path:
   raise 
cfg.ConfigFilesNotFoundError(config_files=[cells_config])
  -return CellStateManagerFile(cell_state_cls, config_path)
  +return CellStateManagerFile(cell_state_cls)
   
   return CellStateManagerDB(cell_state_cls)
   
  @@ -450,7 +450,9 @@
   
   
   class CellStateManagerFile(CellStateManager):
  -def __init__(self, cell_state_cls, cells_config_path):
  +def __init__(self, cell_state_cls=None):
  +cells_config = CONF.cells.cells_config
  +cells_config_path = CONF.find_file(cells_config)
   self.cells_config_path = cells_config_path
   super(CellStateManagerFile, self).__init__(cell_state_cls)
   

  
  Ubuntu: 14.04
  nova-cells: 1:2014.1-0ubuntu1

  nova.conf:

  [DEFAULT]
  dhcpbridge_flagfile=/etc/nova/nova.conf
  dhcpbridge=/usr/bin/nova-dhcpbridge
  logdir=/var/log/nova
  state_path=/var/lib/nova
  lock_path=/var/lock/nova
  force_dhcp_release=True
  iscsi_helper=tgtadm
  libvirt_use_virtio_for_bridges=True
  connection_type=libvirt
  root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
  verbose=True
  ec2_private_dns_show_ip=True
  api_paste_config=/etc/nova/api-paste.ini
  volumes_path=/var/lib/nova/volumes
  enabled_apis=ec2,osapi_compute,metadata
  auth_strategy=keystone
  compute_driver=libvirt.LibvirtDriver
  quota_driver=nova.quota.NoopQuotaDriver

  
  [cells]
  enable=True
  name=cell
  cell_type=compute
  cells_config=/etc/nova/cells.json

  
  cells.json: 
  {
      "parent": {
          "name": "parent",
          "api_url": "http://api.example.com:8774",
          "transport_url": "rabbit://rabbit.example.com",
          "weight_offset": 0.0,
          "weight_scale": 1.0,
          "is_parent": true
      }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1314677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1332382] Re: block device mapping timeout in compute

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332382

Title:
  block device mapping timeout in compute

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  
  When booting instances passing in --block-device and increasing the volume
size, instances can go into an error state if the volume takes longer to
create than the hard-coded value set in:

  nova/compute/manager.py

def _await_block_device_map_created(self, context, vol_id, max_tries=180,
  wait_between=1):

  
  Here is the command used to repro:

  nova boot --flavor ca8d889e-6a4e-48f8-81ce-0fa2d153db16 --image 
438b3f1f-1b23-4b8d-84e1-786ffc73a298  
  --block-device 
source=image,id=438b3f1f-1b23-4b8d-84e1-786ffc73a298,dest=volume,size=128  
  --nic net-id=5f847661-edef-4dff-9f4b-904d1b3ac422 --security-groups 
d9ce9fe3-983f-42a8-899e-609c01977e32  
  Test_Image_Instance

  The max_tries value should be made configurable.

  Looking through the different releases: Grizzly was 30, Havana was 60,
  and Icehouse is 180.

  Here is a traceback:

  2014-06-19 06:54:24.303 17578 ERROR nova.compute.manager 
[req-050fc984-cfa2-4c34-9cde-c8aeea65e6ed  
  d0b8f2c3cf70445baae994004e602e11 1e83429a8157489fb7ce087bd037f5d9] [instance: 
 
  74f612ea-9722-4796-956f-32defd417000] Instance failed block device setup
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  Traceback (most recent call last):
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1394,  
  in _prep_block_device
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  self._await_block_device_map_created))
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  File /usr/lib/python2.7/dist-packages/nova/virt/block_device.py, line 283,  
  in attach_block_devices
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  block_device_mapping)
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  File /usr/lib/python2.7/dist-packages/nova/virt/block_device.py, line 238,  
  in attach
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  wait_func(context, vol['id'])
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 909,  
  in _await_block_device_map_created
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  attempts=attempts)
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]  
  VolumeNotCreated: Volume 8489549e-d23e-45c2-ae6e-7fdb1a9c30d0 did not finish  
  being created even after we waited 65 seconds or 60 attempts.
  2014-06-19 06:54:24.303 17578 TRACE nova.compute.manager [instance: 
74f612ea-9722-4796-956f-32defd417000]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1332382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322926] Re: Hyper-V driver volumes are attached incorrectly when multiple iSCSI servers are present

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322926

Title:
  Hyper-V driver volumes are attached incorrectly when multiple iSCSI
  servers are present

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Hyper-V can change the order of the mounted drives when rebooting a
  host and thus passthrough disks can be assigned to the wrong instance
  resulting in a critical scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1322926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305186] Re: Fake libvirtError incompatibile with real libvirtError

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1305186

Title:
  Fake libvirtError incompatibile with real libvirtError

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  PROBLEM

  The existing `fakelibvirt.libvirtError` is actually not compatible
  with the real `libvirt.libvirtError` class in that it accepts
  different kwargs in the `__init__`.

  This is a problem because test code may use either class depending on
  whether `libvirt-python` happens to be installed on the box.

  For example, if `libvirt-python` is installed on the box and you try
  to use `libvirtError` class from a test with the `error_code` kwarg,
  you'll get this exception: http://paste.openstack.org/show/75432/

  This code would work on a machine that doesn't have `libvirt-python`
  installed b/c `fakelibvirt.libvirtError` was used.

  POSSIBLE SOLUTION

  Copy over the real `libvirt.libvirtError` class so that it matches
  exactly.

  Create a `make_libvirtError` convenience function so we can still
  create `libvirtErrors` using the nice `error_code` kwarg in the
  constructor (b/c 99% of the time that's what we want).
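
  A rough sketch of what that could look like (the err-tuple layout mirrors
  how python-libvirt carries error details, but treat the exact fields and
  names here as assumptions rather than the final Nova code):

  class libvirtError(Exception):
      """Fake that mirrors the real libvirt.libvirtError signature."""
      def __init__(self, defmsg, conn=None, dom=None, net=None,
                   pool=None, vol=None):
          Exception.__init__(self, defmsg)
          self.err = None

      def get_error_code(self):
          return self.err[0] if self.err else None

  def make_libvirtError(error_class, msg, error_code=None, error_domain=None):
      """Build a libvirtError the convenient way, without widening __init__."""
      exc = error_class(msg)
      # Pack the details into the same err tuple the real class reads from.
      exc.err = (error_code, error_domain, msg,
                 None, None, None, None, None, None)
      return exc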

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1305186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370782] Re: SecurityGroupExists error when booting multiple instances concurrently

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370782

Title:
  SecurityGroupExists error when booting multiple instances concurrently

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  If the default security group doesn't exist for some particular
  tenant, booting of a few instances concurrently may lead to
  SecurityGroupExists error as one thread will win the race and create
  the security group, and others will fail.

  This is easily reproduced by running Rally jobs in the gate.
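
  One common way to make the creating side race-safe is to treat "already
  exists" as success and fall back to reading the row the other request
  inserted. A self-contained pseudo-sketch of the pattern (the names and the
  in-memory store are placeholders, not the Nova/Neutron API):

  class SecurityGroupExists(Exception):
      pass

  _groups = {}  # stand-in for the DB table, keyed by (tenant_id, name)

  def create_security_group(tenant_id, name):
      if (tenant_id, name) in _groups:
          raise SecurityGroupExists()
      _groups[(tenant_id, name)] = {'tenant_id': tenant_id, 'name': name}
      return _groups[(tenant_id, name)]

  def ensure_default_security_group(tenant_id):
      """Make concurrent boots tolerate losing the creation race."""
      try:
          return create_security_group(tenant_id, 'default')
      except SecurityGroupExists:
          # Another request won the race; use the group it created.
          return _groups[(tenant_id, 'default')]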

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362916] Re: _rescan_multipath construct wrong parameter for multipath -r

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362916

Title:
  _rescan_multipath construct wrong parameter for multipath -r

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  At
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L590,
  the purpose of self._run_multipath('-r', check_exit_code=[0, 1, 21]) is to
  set up a command that reconstructs the multipath devices.
  But the result is the command multipath - r, not the correct form multipath -r.

  I think the brackets around '-r' are missing; it should be changed to
  self._run_multipath(['-r'], check_exit_code=[0, 1, 21])
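
  An illustrative snippet of why the list matters (build_cmd is a made-up
  stand-in for the real helper): when a bare string is handed to code that
  expects a sequence of arguments, it gets iterated character by character.

  def build_cmd(args):
      # Stand-in for a helper that expects a sequence of extra arguments.
      return ['multipath'] + [str(a) for a in args]

  print build_cmd('-r')    # ['multipath', '-', 'r']  -> runs "multipath - r"
  print build_cmd(['-r'])  # ['multipath', '-r']      -> runs "multipath -r"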

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355929] Re: test_postgresql_opportunistically fails in stable/havana due to: ERROR: source database template1 is being accessed by other users

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355929

Title:
  test_postgresql_opportunistically fails in stable/havana due to:
  ERROR:  source database template1 is being accessed by other users

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/22/112422/1/check/gate-nova-
  python26/621e0ae/console.html

  This is probably a latent bug in the nova unit tests for postgresql in
  stable/havana, or it's due to slow nodes for the py26 jobs.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6ICBzb3VyY2UgZGF0YWJhc2UgXFxcInRlbXBsYXRlMVxcXCIgaXMgYmVpbmcgYWNjZXNzZWQgYnkgb3RoZXIgdXNlcnNcIiBBTkQgdGFnczpcImNvbnNvbGVcIiBBTkQgYnVpbGRfYnJhbmNoOlwic3RhYmxlL2hhdmFuYVwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI2XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc4NjA5ODg1MjMsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  3 hits in 7 days, check queue only but multiple changes and all
  failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420788] Re: Logging blocks on race condition under eventlet

2015-03-12 Thread Alan Pevec
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1420788

Title:
  Logging blocks on race condition under eventlet

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  New
Status in Keystone juno series:
  Fix Committed

Bug description:
  A wrong initialization order makes logging block on a race condition
  under eventlet.

  The bin/keystone-all launcher initializes logging first and only afterwards
  does the eventlet monkey patching, leaving the logging system with a generic
  thread lock in its critical sections, which leads to threads blocking
  indefinitely under high load.
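
  A minimal sketch of the fix pattern, monkey patching eventlet before any
  logging locks are created (the module layout is illustrative, not the
  actual keystone-all code):

  import eventlet

  # Patch the standard library (including threading) *before* logging is
  # configured, so the locks logging creates are greenthread-aware.
  eventlet.monkey_patch()

  import logging

  logging.basicConfig()
  LOG = logging.getLogger(__name__)
  LOG.info('logging configured after eventlet.monkey_patch()')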

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1420788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365901] Re: cinder-api ran into hang loop in python2.6

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365901

Title:
  cinder-api ran into hang loop in python2.6

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  cinder-api ran into hang loop in python2.6

  #cinder-api
  ...
  ...
  snip...
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in type 'exceptions.AttributeError' ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
bound method GreenSocket.__del__ of eventlet.greenio.GreenSocket object at 
0x4e052d0 ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in type 'exceptions.AttributeError' ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
bound method GreenSocket.__del__ of eventlet.greenio.GreenSocket object at 
0x4e052d0 ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in type 'exceptions.AttributeError' ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
bound method GreenSocket.__del__ of eventlet.greenio.GreenSocket object at 
0x4e052d0 ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in type 'exceptions.AttributeError' ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
bound method GreenSocket.__del__ of eventlet.greenio.GreenSocket object at 
0x4e052d0 ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in type 'exceptions.AttributeError' ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
bound method GreenSocket.__del__ of eventlet.greenio.GreenSocket object at 
0x4e052d0 ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in type 'exceptions.AttributeError' ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
bound method GreenSocket.__del__ of eventlet.greenio.GreenSocket object at 
0x4e052d0 ignored
  ...
  ...
  snip...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1365901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373993] Re: Trusted Filter uses unsafe SSL connection

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373993

Title:
  Trusted Filter uses unsafe SSL connection

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  HTTPSClientAuthConnection uses httplib.HTTPSConnection objects. In
  Python 2.x those do not perform CA checks, so client connections are
  vulnerable to man-in-the-middle attacks.

  This should be changed to use the requests lib.
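
  A hedged sketch of the client-authenticated request using requests with
  certificate verification enabled (the URL and file paths are placeholders):

  import requests

  # verify= points at the CA bundle used to validate the server certificate;
  # cert= supplies the client certificate/key pair for mutual TLS.
  response = requests.get(
      'https://attestation-server.example.com:8443/v2/hosts',
      cert=('/etc/nova/ssl/client_cert.pem', '/etc/nova/ssl/client_key.pem'),
      verify='/etc/nova/ssl/ca_chain.pem',
      timeout=30)
  response.raise_for_status()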

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362233] Re: instance_create() DB API method implicitly creates additional DB transactions

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362233

Title:
  instance_create() DB API method implicitly creates additional DB
  transactions

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  In DB API code we have a notion of 'public' and 'private' methods. The
  former are conceptually executed within a *single* DB transaction and
  the latter can either create a new transaction or participate in the
  existing one. The whole point is to be able to roll back the results
  of DB API methods easily and be able to retry method calls on
  connection failures. We had a bp
  (https://blueprints.launchpad.net/nova/+spec/db-session-cleanup) in
  which all DB API have been re-factored to maintain these properties.

  instance_create() is one of the methods that currently violates the
  rules of 'public' DB API methods and creates a concurrent transaction
  implicitly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249319] Re: evacuate on ceph backed volume fails

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249319

Title:
  evacuate on ceph backed volume fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When using nova evacuate to move an instance from one compute host to
  another, the command silently fails. The issue seems to be that the
  rebuild process builds an incorrect libvirt.xml file that no longer
  correctly references the ceph volume.

  Specifically under the disk section I see:

  <source protocol="rbd" name="volumes/instance-0004_disk">

  where in the original libvirt.xml the file was:

  <source protocol="rbd" name="volumes/volume-9e1a7835-b780-495c-a88a-4558be784dde">

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179816] Re: ec2_eror_code mismatch

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1179816

Title:
  ec2_eror_code mismatch

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in Tempest:
  Invalid

Bug description:
  It is reporting InstanceNotFound instead of InvalidAssociationID[.]NotFound
  in 
  tests/boto/test_ec2_network.py 

  self.assertBotoError(ec2_codes.client.InvalidAssociationID.NotFound,
   address.disassociate)

  
  AssertionError: Error code (InstanceNotFound) does not match the expected
  re pattern InvalidAssociationID[.]NotFound

  boto: ERROR: 400 Bad Request
  boto: ERROR: <?xml version="1.0"?>
  <Response><Errors><Error><Code>InstanceNotFound</Code><Message>Instance None 
could not be 
found.</Message></Error></Errors><RequestID>req-05235a67-0a70-46b1-a503-91444ab2b88d</RequestID></Response>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1179816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300788] Re: VMware: exceptions when SOAP reply message has no body

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300788

Title:
  VMware: exceptions when SOAP reply message has no body

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in The Oslo library incubator:
  Fix Released

Bug description:
  Minesweeper logs have the following:

  2014-03-26 11:37:09.753 CRITICAL nova.virt.vmwareapi.driver 
[req-3a274ea6-e731-4bbc-a7fc-e2877a57a7cb MultipleCreateTestJSON-692822675 
MultipleCreateTestJSON-47510170] In vmwareapi: _call_method 
(session=52eb4a1e-04de-de0d-5c6a-746a430570a2)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver Traceback 
(most recent call last):
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 856, in _call_method
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver return 
temp_module(*args, **kwargs)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver   File 
/opt/stack/nova/nova/virt/vmwareapi/vim.py, line 196, in vim_request_handler
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver raise 
error_util.VimFaultException(fault_list, excep)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver 
VimFaultException: Server raised fault: '
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver SOAP body not 
found
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver 
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver while parsing 
SOAP envelope
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver at line 1, 
column 38
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver 
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver while parsing 
HTTP request before method was determined
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver at line 1, 
column 0'
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver 
  2014-03-26 11:37:09.754 WARNING nova.virt.vmwareapi.vmops 
[req-3a274ea6-e731-4bbc-a7fc-e2877a57a7cb MultipleCreateTestJSON-692822675 
MultipleCreateTestJSON-47510170] In vmwareapi:vmops:_destroy_instance, got this 
exception while un-registering the VM: Server raised fault: '
  SOAP body not found

  while parsing SOAP envelope
  at line 1, column 38

  while parsing HTTP request before method was determined
  at line 1, column 0'

  There are cases where suds returns a reply message with no body.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1013417] Re: Cinderclient Doesn't Return A Useful Error When Trying To Create A Volume Larger Than The Quota Allocation

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1013417

Title:
  Cinderclient Doesn't Return A Useful Error When Trying To Create A
  Volume Larger Than The Quota Allocation

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in Python client library for Cinder:
  Fix Committed

Bug description:
  Actually, it is nearly useless. It just returns an exception that it
  casts from an HTTP 500.

  My quota limit is 1000GB, here I try to make a volume that is 2000GB

  g = cinderclient(request).volumes.create(size, display_name=name,
  display_description=description)

  cinderclient connection created using token
  "e3fbb3c2d94949b0975db11de85bebc5" and url
  "http://10.145.1.51:8776/v1/9da18fcaedf74eb7b1cf73b67b5b870c"

  REQ: curl -i
  http://10.145.1.51:8776/v1/9da18fcaedf74eb7b1cf73b67b5b870c/volumes -X
  POST -H "X-Auth-Project-Id: 9da18fcaedf74eb7b1cf73b67b5b870c" -H
  "User-Agent: python-novaclient" -H "Content-Type: application/json" -H
  "Accept: application/json" -H "X-Auth-Token:
  e3fbb3c2d94949b0975db11de85bebc5"

  REQ BODY: {"volume": {"snapshot_id": null, "display_name": "My Vol",
  "volume_type": null, "display_description": "", "size": 2000}}

  RESP: {'date': 'Thu, 14 Jun 2012 22:14:02 GMT', 'status': '500',
  'content-length': '128', 'content-type': 'application/json;
  charset=UTF-8', 'x-compute-request-id': 'req-316c81e2-3407-4df0-8b0e-
  190bf63f549b'} {"computeFault": {"message": "The server has either
  erred or is incapable of performing the requested operation.", "code":
  500}}

  *** ClientException: The server has either erred or is incapable of
  performing the requested operation. (HTTP 500) (Request-ID: req-
  316c81e2-3407-4df0-8b0e-190bf63f549b)

  This is basically useless from an end-user perspective and doesn't
  allow us to tell users of Horizon anything useful about why this
  errored. :( It should probably be a 406, not a 500, and the error
  message should be "Cannot create a volume of 2000GB because your quota
  is currently 1000GB." Or something along those lines...
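
  As an illustration only, a hedged sketch of how a caller could surface a
  clearer message once the API returns a proper over-quota error; it assumes
  python-cinderclient maps that response to an OverLimit exception, and the
  helper name is hypothetical.

  from cinderclient import exceptions as cinder_exc

  def create_volume_or_explain(client, size_gb, name):
      try:
          return client.volumes.create(size_gb, display_name=name)
      except cinder_exc.OverLimit:
          # Hypothetical friendly message instead of a generic ClientException.
          raise RuntimeError("Cannot create a volume of %dGB: it would exceed "
                             "your volume storage quota." % size_gb)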

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1013417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375519] Re: Cisco N1kv: Enable quota support in stable/icehouse

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1375519

Title:
  Cisco N1kv: Enable quota support in stable/icehouse

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron icehouse series:
  New

Bug description:
  With the quotas table being populated in stable/icehouse, the N1kv
  plugin should be able to support quotas. Otherwise VMs end up in error
  state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1375519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393925] Re: Race condition adding a security group rule when another is in-progress

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393925

Title:
  Race condition adding a security group rule when another is in-
  progress

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  I've come across a race condition where a security group rule is
  sometimes never added to iptables, if the OVS agent is in the
  middle of applying another security group rule when the RPC arrives.

  Here's an example scenario:

  nova boot --flavor 1 --image $nova_image  dev_server1
  sleep 4
  neutron security-group-rule-create --direction ingress --protocol tcp 
--port_range_min 1111 --port_range_max 1111 default
  neutron security-group-rule-create --direction ingress --protocol tcp 
--port_range_min 1112 --port_range_max 1112 default

  Wait for VM to complete booting, then check iptables:

  $ sudo iptables-save | grep 111
  -A neutron-openvswi-i741ff910-1 -p tcp -m tcp --dport 1111 -j RETURN

  The second rule is missing, and will only get added if you either add
  another rule, or restart the agent.

  My config is just devstack, running with the latest openstack bits as
  of today.  OVS agent w/vxlan and DVR enabled, nothing fancy.

  I've been able to track this down to the following code (I'll attach
  the complete log as a file due to line wraps):

  OVS agent receives RPC to setup port
  Port info is gathered for devices and filters for security groups are 
created
  Iptables apply is called
  New security group rule is added, triggering RPC message
  RPC received, and agent seems to add device to list that needs refresh

  Security group rule updated on remote: 
[u'5f0f5036-d14c-4b57-a855-ed39deaea256'] security_groups_rule_updated
  Security group rule updated 
[u'5f0f5036-d14c-4b57-a855-ed39deaea256']
  Adding [u'741ff910-12ba-4c1e-9dc9-38f7cbde0dc4'] devices to the 
list of devices for which firewall needs to be refreshed _security_group_updated

  Iptables apply is finished

  rpc_loop() in OVS agent does not notice there is more work to do on
  the next loop, so the rule never gets added

  At this point I'm thinking it could be that self.devices_to_refilter
  is modified in both _security_group_updated() and setup_port_filters()
  without any lock/semaphore, but the log doesn't explicitly implicate it
  (perhaps we trust the timestamps too much?).
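
  Purely as an illustration of that suspicion (not the actual OVS agent code),
  a hedged sketch of the kind of guard that would rule it out: serializing
  access to the shared set so the RPC handler and the agent loop cannot
  interleave. All names are illustrative.

  import threading

  class RefilterTracker(object):
      """Hypothetical holder for devices_to_refilter with a lock around it."""

      def __init__(self):
          self._lock = threading.Lock()
          self._devices = set()

      def mark(self, device_ids):        # would be called from _security_group_updated()
          with self._lock:
              self._devices |= set(device_ids)

      def pop_all(self):                 # would be called from setup_port_filters()
          with self._lock:
              pending, self._devices = self._devices, set()
              return pending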

  I will continue to investigate, but if someone has an aha! moment
  after reading this far please add a note.

  A colleague here has also been able to duplicate this on his own
  devstack install, so it wasn't my fat-fingering that caused it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363805] Re: test_ipv6.TestIsEnabled.test_enabled failure

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363805

Title:
  test_ipv6.TestIsEnabled.test_enabled failure

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  https://review.openstack.org/#/c/116826/ introduced UT failures on
  systems without the procfs entry.

  
  ==
  FAIL: neutron.tests.unit.test_ipv6.TestIsEnabled.test_memoize
  tags: worker-4
  --
  Empty attachments:
pythonlogging:'neutron.api.extensions'
stderr
stdout

  pythonlogging:'': {{{2014-09-01 15:53:08,028 INFO
  [neutron.common.ipv6_utils] IPv6 is not enabled on this system.}}}

  Traceback (most recent call last):
File neutron/tests/unit/test_ipv6.py, line 77, in test_memoize
File /usr/pkg/lib/python2.7/unittest/case.py, line 422, in assertTrue
  AssertionError: False is not true
  ==
  FAIL: neutron.tests.unit.test_ipv6.TestIsEnabled.test_enabled
  tags: worker-3
  --
  Empty attachments:
pythonlogging:'neutron.api.extensions'
stderr
stdout

  pythonlogging:'': {{{2014-09-01 15:53:08,041 INFO
  [neutron.common.ipv6_utils] IPv6 is not enabled on this system.}}}

  Traceback (most recent call last):
File neutron/tests/unit/test_ipv6.py, line 66, in test_enabled
File /usr/pkg/lib/python2.7/unittest/case.py, line 422, in assertTrue
  AssertionError: False is not true
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381886] Re: nova list show incorrect when neutron re-assign floatingip

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381886

Title:
  nova list show incorrect when neutron re-assign floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  boot several instances and create a floatingip; when the floatingip is 
re-associated with multiple instances in turn, nova list will show an 
incorrect result.
  neutron floatingip-associate floatingip-id instance0-port-id
  neutron floatingip-associate floatingip-id instance1-port-id
  neutron floatingip-associate floatingip-id instance2-port-id
  nova list
  (nova list output will look like this:)
  --
  instance0  fixedip0,  floatingip
  instance1  fixedip1,  floatingip
  instance2  fixedip2,  floatingip

  instance0, instance1 and instance2 all appear to have the floatingip, but 
neutron floatingip-list shows it is only bound to instance2.
  In another situation, after some time (half a minute or longer), nova 
list shows the correct result.
  ---
  instance0  fixedip0
  instance1  fixedip1
  instance2  fixedip2,  floatingip

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377350] Re: BSN: inconsistency when backend missing during delete

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377350

Title:
  BSN: inconsistency when backend missing during delete

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  When objects are deleted in ML2 and a driver fails in post-commit,
  there is no retry mechanism to delete that object from the driver at a
  later time.[1] This means that objects deleted while there is no
  connectivity to the backend controller will never be deleted until
  another event causes a synchronization.


  1.
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1039

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1377350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255347] Re: cinder cross_az_attach uses instance AZ value

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255347

Title:
  cinder cross_az_attach uses instance AZ value

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  When checking if an instance is in the same AZ as a volume, nova uses
  the instance's availability_zone attribute. This isn't the correct way
  to get an instance's AZ; it should use the value obtained by
  querying the aggregate the instance is on.
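
  A hedged sketch of the approach the report suggests: derive the AZ from the
  host's aggregates rather than from the instance record. The aggregate-lookup
  call and its return shape are assumptions made for illustration.

  def availability_zone_for_instance(context, instance, aggregate_api,
                                     default_az='nova'):
      # Walk the aggregates containing the instance's host and use their
      # availability_zone metadata; fall back to the configured default.
      for agg in aggregate_api.get_by_host(context, instance['host']):  # hypothetical API
          az = agg['metadata'].get('availability_zone')
          if az:
              return az
      return default_az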

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357030] Re: Increase the default poll duration for Cisco N1Kv

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357030

Title:
  Increase the default poll duration for Cisco N1Kv

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Increase the poll duration for Cisco N1Kv, from 10s to 60s, in the
  /etc/neutron/plugins/cisco/cisco_plugins.ini file. The current poll
  duration of 10s causes the VSM to unnecessarily take frequent CPU
  cycles, resulting in the failure of other tasks in scale configurations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407105] Re: Password Change Doesn't Affirmatively Invalidate Sessions

2015-03-12 Thread Alan Pevec
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1407105

Title:
  Password Change Doesn't Affirmatively Invalidate Sessions

Status in OpenStack Dashboard (Horizon):
  Triaged
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  New
Status in Keystone juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added as to
  the bug as attachments.

  The password change dialog at /horizon/settings/password/ contains the
  following code:

  <code>
  if user_is_editable:
      try:
          api.keystone.user_update_own_password(request,
                                                data['current_password'],
                                                data['new_password'])
          response = http.HttpResponseRedirect(settings.LOGOUT_URL)
          msg = _("Password changed. Please log in again to continue.")
          utils.add_logout_reason(request, response, msg)
          return response
      except Exception:
          exceptions.handle(request,
                            _('Unable to change password.'))
          return False
  else:
      messages.error(request, _('Changing password is not supported.'))
      return False
  </code>

  There are at least two security concerns here:
  1) Logout is done by means of an HTTP redirect.  Let's say Eve, as MitM, gets 
ahold of Alice's token somehow.  Alice is worried this may have happened, so 
she changes her password.  If Eve suspects that the request is a 
password-change request (which is the most Eve can do, because we're running 
over HTTPS, right?  Right!?), then it's a simple matter to block the redirect 
from ever reaching the client, or the redirect request from hitting the server. 
 From Alice's PoV, something weird happened, but her new password works, so 
she's not bothered.  Meanwhile, Alice's old login ticket continues to work.
  2) Part of the purpose of changing a password is generally to block those who 
might already have the password from continuing to use it.  A password change 
should trigger (insofar as is possible) a purging of all active logins/tokens 
for that user.  That does not happen here.

  Frankly, I'm not the least bit sure if I've thought of the worst-case
  scenario(s) for point #1.  It just strikes me as very strange not to
  aggressively/proactively kill the ticket/token(s), instead relying on
  the client to do so.  Feel free to apply minds smarter and more
  devious than my own!
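
  On concern #1, a hedged Django-side sketch (not the actual Horizon fix):
  flush the server-side session before issuing the redirect, so blocking the
  redirect does not leave a live session behind. The logout_reason cookie is
  an illustrative stand-in for utils.add_logout_reason().

  from django import http
  from django.contrib.auth import logout as django_logout

  def finish_password_change(request, logout_url, msg):
      # Kill the server-side session first; don't rely on the client
      # following the redirect to LOGOUT_URL.
      django_logout(request)
      response = http.HttpResponseRedirect(logout_url)
      response.set_cookie('logout_reason', msg)  # illustrative only
      return response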

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1407105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279172] Re: Unicode encoding error exists in extended Nova API, when the data contain unicode

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279172

Title:
  Unicode encoding error exists in extended Nova API, when the data
  contain unicode

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  We have developed an extended Nova API; the API queries disks first, then 
adds a disk to an instance.
  After the query, if a disk has a non-English (non-ASCII) name, the unicode 
value is converted to str in nova/api/openstack/wsgi.py line 451 
  (node = doc.createTextNode(str(data))), and a unicode encoding error occurs.
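
  A hedged sketch of the kind of fix implied (illustrative, not the exact nova
  patch): build the text node from a unicode string instead of forcing str()
  onto data that may contain non-ASCII characters.

  import six

  def _create_text_node(doc, data):
      # six.text_type is unicode on Python 2, so non-ASCII names survive.
      return doc.createTextNode(six.text_type(data))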

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372438] Re: Race condition in l2pop drops tunnels

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372438

Title:
  Race condition in l2pop drops tunnels

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  The issue was originally raised by a Red Hat performance engineer (Joe
  Talerico)  here: https://bugzilla.redhat.com/show_bug.cgi?id=1136969
  (see starting from comment 4).

  Joe created a Fedora instance in his OS cloud based on RHEL7-OSP5
  (Icehouse), where he installed Rally client to run benchmarks against
  that cloud itself. He assigned a floating IP to that instance to be
  able to access API endpoints from inside the Rally machine. Then he
  ran a scenario which basically started up 100+ new instances in
  parallel, tried to access each of them via ssh, and once it succeeded,
  clean up each created instance (with its ports). Once in a while, his
  Rally instance lost connection to outside world. This was because
  VXLAN tunnel to the compute node hosting the Rally machine was dropped
  on networker node where DHCP, L3, Metadata agents were running. Once
  we restarted OVS agent, the tunnel was recreated properly.

  The scenario failed only if L2POP mechanism was enabled.

  I've looked thru the OVS agent logs and found out that the tunnel was
  dropped due to a legitimate fdb entry removal request coming from
  neutron-server side. So the fault is probably on neutron-server side,
  in l2pop mechanism driver.

  I've then looked thru the patches in Juno to see whether there is
  something related to the issue already merged, and found the patch
  that gets rid of _precommit step when cleaning up fdb entries. Once
  we've applied the patch on the neutron-server node, we stopped to
  experience those connectivity failures.

  After discussion with Vivekanandan Narasimhan, we came up with the
  following race condition that may result in tunnels being dropped
  while legitimate resources are still using them:

  (quoting Vivek below)

  '''
  - - port1 delete request comes in;
  - - port1 delete request acquires lock
  - - port2 create/update request comes in;
  - - port2 create/update waits due to unavailability of the lock
  - - precommit phase for port1 determines that the port is the last one, so we 
should drop the FLOODING_ENTRY;
  - - port1 delete applied to db;
  - - port1 transaction releases the lock
  - - port2 create/update acquires the lock
  - - precommit phase for port2 determines that the port is the first one, so 
request FLOODING_ENTRY + MAC-specific flow creation;
  - - port2 create/update request applied to db;
  - - port2 transaction releases the lock

  Now at this point postcommit of either of them could happen, because 
code-pieces operate outside the
  locked zone.  

  If it happens, this way, tunnel would retain:

  - - postcommit phase for port1 requests FLOODING_ENTRY deletion due to port1 
deletion
  - - postcommit phase requests FLOODING_ENTRY + MAC-specific flow creation for 
port2;

  If it happens the below way, tunnel would break:
  - - postcommit phase for create por2 requests FLOODING_ENTRY + MAC-specific 
flow 
  - - postcommit phase for delete port1 requests FLOODING_ENTRY deletion
  '''

  We considered the patch to get rid of precommit for backport to
  Icehouse [1] that seems to eliminate the race, but we're concerned
  that we reverted that to previous behaviour in Juno as part of DVR
  work [2], though we haven't done any testing to check whether the
  issue is present in Juno (though brief analysis of the code shows that
  it should fail there too).

  Ideally, the fix for Juno should be easily backportable because the
  issue is currently present in Icehouse, and we would like to have the
  same fix for both branches (Icehouse and Juno) instead of backporting
  patch [1] to Icehouse and implementing another patch for Juno.

  [1]: https://review.openstack.org/#/c/95165/
  [2]: https://review.openstack.org/#/c/102398/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368128] Re: glance rbd store get_size uses wrong pool

2015-03-12 Thread Alan Pevec
** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1368128

Title:
  glance rbd store get_size uses wrong pool

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  New

Bug description:
  the rbd store's get_size() method ignores the pool given in the location
  parameter and instead uses the configured glance pool, which breaks
  cross-pool image access

  (one such example would be when we'd reference an rbd ephemeral disk
  snapshot which is in the ephemeral disk pool)
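
  A hedged parsing sketch of the behaviour asked for: honour the pool embedded
  in the rbd location URI (rbd://fsid/pool/image/snapshot) instead of always
  using the store's configured pool. The helper is illustrative, not the
  actual glance code.

  def pool_and_image_from_location(uri, default_pool='images'):
      parts = uri[len('rbd://'):].split('/')
      if len(parts) == 4:
          _fsid, pool, image, _snap = parts
          return pool, image
      # Old-style location that only carries the image id.
      return default_pool, parts[-1]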

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1368128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361542] Re: neutron-l3-agent does not start without IPv6

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361542

Title:
  neutron-l3-agent does not start without IPv6

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  When testing on a one-node-cloud that had ipv6 blacklisted, I found that 
neutron-l3-agent does not start
  because it errors out when it tries to access 
/proc/sys/net/ipv6/conf/default/disable_ipv6

  2014-08-26 10:12:57.987 29609 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/service.py, line 269, in create
  2014-08-26 10:12:57.987 29609 TRACE neutron 
periodic_fuzzy_delay=periodic_fuzzy_delay)
  2014-08-26 10:12:57.987 29609 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/service.py, line 202, in __init__
  2014-08-26 10:12:57.987 29609 TRACE neutron self.manager = 
manager_class(host=host, *args, **kwargs)
  2014-08-26 10:12:57.987 29609 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/agent/l3_agent.py, line 916, in 
__init__
  2014-08-26 10:12:57.987 29609 TRACE neutron 
super(L3NATAgentWithStateReport, self).__init__(host=host, conf=conf)
  2014-08-26 10:12:57.987 29609 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/agent/l3_agent.py, line 230, in 
__init__
  2014-08-26 10:12:57.987 29609 TRACE neutron self.use_ipv6 = 
ipv6_utils.is_enabled()
  2014-08-26 10:12:57.987 29609 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/common/ipv6_utils.py, line 50, in 
is_enabled
  2014-08-26 10:12:57.987 29609 TRACE neutron with open(disabled_ipv6_path, 
'r') as f:
  2014-08-26 10:12:57.987 29609 TRACE neutron IOError: [Errno 2] No such file 
or directory: '/proc/sys/net/ipv6/conf/default/disable_ipv6'
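
  A hedged sketch of the defensive check the traceback calls for
  (illustrative, not the exact neutron.common.ipv6_utils fix): treat a missing
  procfs entry as IPv6 being disabled instead of letting IOError escape.

  import os

  _DISABLE_IPV6 = '/proc/sys/net/ipv6/conf/default/disable_ipv6'

  def is_ipv6_enabled():
      if not os.path.exists(_DISABLE_IPV6):
          # No IPv6 support on this host (e.g. module blacklisted).
          return False
      with open(_DISABLE_IPV6) as f:
          return f.read().strip() == '0'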

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398566] Re: REST API relies on policies being initialized after RESOURCE_ATTRIBUTE_MAP is processed, does nothing to ensure it.

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398566

Title:
  REST API relies on policies being initialized after
  RESOURCE_ATTRIBUTE_MAP is processed, does nothing to ensure it.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  A race condition exists where policies may be loaded and processed
  before the neutron extensions are loaded and the
  RESOURCE_ATTRIBUTE_MAP is populated. This causes problems in system
  behaviour dependent on neutron-specific policy checks. Policies are
  loaded on demand, and if the call instigating the loading of
  policies happens prematurely this can cause certain neutron-specific
  policy checks to not be set up properly, as the required mappings from
  policy to check implementations have not been established.

  Related bugs:

  https://bugs.launchpad.net/neutron/+bug/1254555
  https://bugs.launchpad.net/neutron/+bug/1251982
  https://bugs.launchpad.net/neutron/+bug/1280738

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387846] Re: Cisco: Nexus plugin should not be invoked for port update if it is not configured

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1387846

Title:
  Cisco: Nexus plugin should not be invoked for port update if it is not
  configured

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  New

Bug description:
  The Nexus switch is being invoked to update a port when it is not
  configured. Only the N1KV plugin should be invoked.

  Affects only stable/icehouse. Nexus plugin is removed in Juno.

  2014-10-30 22:30:12.448 25182 ERROR 
neutron.plugins.cisco.models.virt_phy_sw_v2 [-] Unable to update port '' on 
Nexus switch
  2014-10-30 22:30:12.448 25182 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 Traceback (most recent call last):
  2014-10-30 22:30:12.448 25182 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py,
 line 395, in update_port
  2014-10-30 22:30:12.448 25182 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 vlan_id = 
self._get_segmentation_id(old_port['network_id'])
  2014-10-30 22:30:12.448 25182 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py,
 line 151, in _get_segmentation_id
  2014-10-30 22:30:12.448 25182 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 raise 
cexc.NetworkSegmentIDNotFound(net_id=network_id)
  2014-10-30 22:30:12.448 25182 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 NetworkSegmentIDNotFound: 
Segmentation ID for network 6d660e29-99b4-4f2f-b28c-1d98ffb07545 is not found.
  2014-10-30 22:30:12.448 25182 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2
  2014-10-30 22:30:12.458 25182 INFO neutron.api.v2.resource 
[req-1d20f673-af78-4d95-bebf-3f6a23f4effe None] update failed (client error): 
Segmentation ID for network 6d660e29-99b4-4f2f-b28c-1d98ffb07545 is not found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1387846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255317] Re: VMware: can't boot from sparse image copied to volume

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255317

Title:
  VMware: can't boot from sparse image copied to volume

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Using VC Driver, we are unable to boot from a sparse image copied to a
  volume. Scenario is as follows:

  1. Create an image using the cirros vmdk image (linked below) with 
vmware_disktype=sparse
  2. Copy the image to a volume
  3. Boot from the volume

  Expected: Able to boot into OS and see the login screen
  Actual: Operating system is not found

  [1]
  http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.0-i386-disk.vmdk

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356609] Re: Networks do not get scheduled to DHCP agents using Cisco N1KV plugin

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356609

Title:
  Networks do not get scheduled to DHCP agents using Cisco N1KV plugin

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  With the config option 'network_auto_schedule = False' set in
  neutron.conf, it is observed that neutron networks do not get
  scheduled to available DHCP agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384487] Re: big switch server manager uses SSLv3

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384487

Title:
  big switch server manager uses SSLv3

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  The communication with the backend is done using the default protocol
  of ssl.wrap_socket, which is SSLv3. This protocol is vulnerable to the
  Poodle attack.
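
  A hedged sketch of the obvious mitigation (not the exact Big Switch patch):
  pass an explicit TLS version to ssl.wrap_socket instead of accepting a
  default that permits SSLv3 (Python 2 era API).

  import socket
  import ssl

  def tls_connect(host, port, timeout=10):
      sock = socket.create_connection((host, port), timeout)
      # Force TLSv1 so an SSLv3-only handshake is refused.
      return ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_TLSv1)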

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373851] Re: security groups db queries load excessive data

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373851

Title:
  security groups db queries load excessive data

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  The security groups db queries are loading extra data from the ports
  table that is unnecessarily hindering performance.
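
  As a hedged, generic SQLAlchemy sketch of the remedy (model and column names
  are illustrative, not the exact neutron models): select only the port
  columns the security-group code needs instead of loading full Port rows and
  their relationships.

  def ports_for_security_group(session, models, sg_id):
      # Returns (id, mac_address) tuples rather than whole ORM objects.
      return (session.query(models.Port.id, models.Port.mac_address)
              .join(models.SecurityGroupPortBinding,
                    models.SecurityGroupPortBinding.port_id == models.Port.id)
              .filter(models.SecurityGroupPortBinding
                      .security_group_id == sg_id)
              .all())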

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373547] Re: Cisco N1kv: Remove unnecessary REST call to delete VM network on controller

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373547

Title:
  Cisco N1kv: Remove unnecessary REST call to delete VM network on
  controller

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  Remove the REST call that deletes the VM network on the controller and
  ensure that the database remains consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383973] Re: image data cannot be removed when deleting a saving status image

2015-03-12 Thread Alan Pevec
*** This bug is a duplicate of bug 1398830 ***
https://bugs.launchpad.net/bugs/1398830

** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1383973

Title:
  image data cannot be removed when deleting a saving status image

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance icehouse series:
  New
Status in Glance juno series:
  Fix Committed

Bug description:
  The image data in /var/lib/glance/images/ is not removed when I
  delete an image whose status is "saving".

  1. create an image
   glance image-create --name image-v1 --disk-format raw --container-format 
bare --file xx.image --is-public true

  2. list the created image; the status is "saving"
  [root@node2 ~]# glance image-list
  +--------------------------------------+----------+-------------+------------------+------+--------+
  | ID                                   | Name     | Disk Format | Container Format | Size | Status |
  +--------------------------------------+----------+-------------+------------------+------+--------+
  | 00ec3d8d-41a5-4f7c-9448-694099a39bcf | image-v1 | raw         | bare             | 18   | saving |
  +--------------------------------------+----------+-------------+------------------+------+--------+

  3. delete the created image
  glance image-delete image-v1

  4. the image has been deleted but the image data still exists
  [root@node2 ~]# glance image-list
  ++--+-+--+--++
  | ID | Name | Disk Format | Container Format | Size | Status |
  ++--+-+--+--++
  ++--+-+--+--++

  [root@node2 ~]# ls /var/lib/glance/images
  00ec3d8d-41a5-4f7c-9448-694099a39bcf

  This problem exists in both v1 and v2 API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1383973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255594] Re: neutron glue code creates tokens excessively, still

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255594

Title:
  neutron glue code creates tokens excessively, still

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Reusing keystone tokens improves OpenStack efficiency and performance.
  For operations that require a token, reusing a token avoids the
  overhead of a request to keystone. For operations that validate
  tokens, reused tokens improve the hit rate of authentication caches
  (e.g., in keystoneclient.middleware). In both cases, the load on the
  keystone server is reduced, thus improving the response time for
  requests that do require new tokens or token validation. Finally,
  since token validation is so CPU intensive, improved auth cache hit
  rate can significantly reduce CPU utilization by keystone.

  In spite of the progress made by
  
http://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5,
  which was committed to address bug #1250580, the neutronv2 network API
  code in nova-compute creates more tokens than necessary, to the point
  where performance degradation is measurable when creating a large
  number of instances.

  Prior to the aforementioned change, nova-compute created a new admin
  token for accessing neutron virtually every time a call was made into
  nova.network.neutronv2.  With aforementioned change, a token is
  created once per thread (i.e., green thread); thus multiple calls
  into neutronv2 can share a token. For example, during instance
  creation, a single token is created then reused 6 times; prior to the
  patch, 7 tokens would have been created by nova.network.neutronv2 per
  nova boot. However, this scheme is far from optimal. Given that
  tokens, by default, have a shelf life of 24H, a single token could be
  shared by _all_ nova.network.neutronv2 calls in a 24-hour period.

  The performance impact of sharing a single neutronv2 admin token is
  easy to observe when creating a large number of instances in parallel.
  In this example, I boot 40 instances in parallel, ping them, then
  delete them. I'm using a 24-core machine with enough RAM and disk
  throughput to never become bottlenecks. Note that I'm running with
  multiple keystone-all worker processes
  (https://review.openstack.org/#/c/42967/). Using the per-thread
  tokens, the last instance becomes active after 40s and the last
  instance is deleted after 65s. Using a single shared token, the last
  instance becomes active after 32s and the last instance is deleted
  after 60s. During the token-per-thread run, keystone-all processes had
  900% CPU utilization (i.e., 9 x 100% of a single core) for the first
  ~10s, then stayed in the 50-100% range for the rest of the run. In the
  single token run, the keystone-all processes never exceeded 150% CPU
  utilization.

  I focused on the nova.network.neutronv2 because it created the most
  tokens during my parallel boot experiment. However there are other
  excessive token offenders. After fixing nova.network.neutronv2, the
  leading auth requestors are glance-index and glance-registry due to a
  high auth cache miss rate. I'm not sure who's creating those new
  tokens however.
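
  A hedged sketch of the token-reuse idea (not the nova.network.neutronv2
  implementation): keep a single admin token per process and only fetch a new
  one when it stops being accepted, e.g. on a 401 from neutron. The
  fetch_token callable is assumed to do the actual keystone authentication.

  import threading

  class AdminTokenCache(object):
      def __init__(self, fetch_token):
          self._fetch_token = fetch_token   # callable returning a fresh token
          self._lock = threading.Lock()
          self._token = None

      def get(self, refresh=False):
          with self._lock:
              if refresh or self._token is None:
                  self._token = self._fetch_token()
              return self._token

  # usage: on a 401 from neutron, call cache.get(refresh=True) and retry once.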

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341014] Re: Update VSM credential correctly

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341014

Title:
  Update VSM credential correctly

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Today, if we modify the VSM credential in cisco_plugins.ini, the
  older VSM IP address remains in the db, and all requests are sent to
  the older VSM. This patch deletes all VSM credentials on neutron
  start-up before adding the newer VSM credentials, hence making sure
  that there is only one VSM IP address and credential in the db.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347734] Re: The container dashboard does not handle unicode url correctly

2015-03-12 Thread Alan Pevec
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347734

Title:
  The container dashboard does not handle unicode url correctly

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  New

Bug description:
  In many places in the container dashboard, the arguments passed into
  the reverse function are processed with urlquote.

  i.e. reverse(url_name, args=(urlquote(container_name),))

  This causes the container name to be quoted twice, since reverse runs
  urlquote also. The result is errors from the swift backend.
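
  A tiny, hedged illustration of the double-quoting (reverse() applies
  urlquote to its args again, so a pre-quoted name gets quoted twice); output
  comments are what one would expect, not captured from Horizon itself.

  from django.utils.http import urlquote

  name = u'my container'
  print(urlquote(name))             # my%20container   (quoted once)
  print(urlquote(urlquote(name)))   # my%2520container (double-quoted; swift rejects it)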

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348838] Re: Glance logs password hashes in swift URLs

2015-03-12 Thread Alan Pevec
** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1348838

Title:
  Glance logs password hashes in swift URLs

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  New

Bug description:
  Example:

  2014-07-25 20:03:36.346 780 DEBUG glance.registry.api.v1.images
  [1c66afef-0bc9-4413-b63a-c81585c2a981 2eae458f42e64420af5e3a2cab07e03a
  9bc19f6aabc944c382bf553cb8131b17 - - -] Updating image dfd7e14c-
  eb02-487e-8112-d1881ae031d9 with metadata: {u'status': u'active',
  'locations':
  
[u'swift+http://service%3Aimage:GyQLQqJbh3jzBfRvAs8nw8WDQ3xUtO7nw49t33R96WddHww0zJ2CSU7AtgFtf76J@proxy:8770/v2.0
  /glance-images/dfd7e14c-eb02-487e-8112-d1881ae031d9']} update
  /usr/lib/python2.7/dist-packages/glance/registry/api/v1/images.py:445

  We've found that the following regex will catch all of the password
  hashes:

  r"(swift|swift\+http|swift\+https)://(.*?:)?.*?@"

  Since it's a debug-level log message, we can avoid leaking sensitive
  data by turning off debug logging, but we often find ourselves needing
  the debug logs to diagnose issues.  We'd like to fix this problem at
  the source by sanitizing out the password hashes.
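
  For illustration, a hedged helper that applies that regex to scrub the
  credentials before logging; the placeholder format is an assumption.

  import re

  _CRED_RE = re.compile(r'(swift|swift\+http|swift\+https)://(.*?:)?.*?@')

  def sanitize_location(url):
      # Keep the scheme, replace the user:key@ part with a placeholder.
      return _CRED_RE.sub(r'\1://***:***@', url)

  # sanitize_location('swift+http://service%3Aimage:SECRET@proxy:8770/v2.0/c/img')
  # -> 'swift+http://***:***@proxy:8770/v2.0/c/img'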

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1348838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340405] Re: Big Switch plugin missing in migration for agents table

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340405

Title:
  Big Switch plugin missing in migration for agents table

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  There is an issue with the db migration script creating the agent bindings 
table for the Big Switch plugin. [1]
  This seems to be caused by a recent addition of the Big Switch plugin to the 
agent bindings table but not to the agents table. [2]

  
  1. 
https://groups.google.com/a/openflowhub.org/forum/#!topic/floodlight-dev/k7V-ssEtJKQ
  2. 
https://github.com/openstack/neutron/commit/d3be7b040eaa61a4d0ac617026cf5c9132d3831e

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311820] Re: Neutron subnet create tooltip has invalid html tags

2015-03-12 Thread Alan Pevec
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1311820

Title:
  Neutron subnet create tooltip has invalid html tags

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  New

Bug description:
  In the Subnet Detail dialog (Neutron networks), when you hover the
  cursor over the 'Allocation Pools' and 'Host Routes' text boxes, the help
  text contains literal '&lt;' and '&gt;' entities instead of the standard
  < and > characters. This kind of rendering should be left to the widget,
  and not be embedded in the text itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1311820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313009] Re: Memory reported improperly in admin dashboard

2015-03-12 Thread Alan Pevec
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1313009

Title:
  Memory reported improperly in admin dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  New

Bug description:
  The admin dashboard works with memory totals and usages as integers.
  This means that, for example, if you have a total of 1.95 TB of memory
  in your hypervisors you'll see it reported as 1 TB.
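
  A hedged illustration of the truncation; the MB total below is made up to be
  roughly 1.95 TB.

  total_memory_mb = 2045952                                  # ~1.95 TB in MB

  print('%d TB' % (total_memory_mb // (1024 * 1024)))        # 1 TB (integer math)
  print('%.2f TB' % (total_memory_mb / (1024.0 * 1024)))     # 1.95 TB (float math)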

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1313009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309208] Re: NSX: sync thread catches wrong exceptions on not found in db elements

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1309208

Title:
  NSX: sync thread catches wrong exceptions on not found in db elements

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  2014-04-17 14:13:07.361 ERROR neutron.openstack.common.loopingcall 
[req-47b42334-8484-48c9-9f3e-e6ced3b9e3ec None None] in dynamic looping call
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall Traceback 
(most recent call last):
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/openstack/common/loopingcall.py, line 123, in 
_inner
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall idle = 
self.f(*self.args, **self.kw)
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/plugins/vmware/common/sync.py, line 649, in 
_synchronize_state
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall 
scan_missing=scan_missing)
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/plugins/vmware/common/sync.py, line 508, in 
_synchronize_lswitchports
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall 
ext_networks=ext_nets)
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/plugins/vmware/common/sync.py, line 464, in 
synchronize_port
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall 
neutron_port_data['id'])
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 271, in _get_port
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall raise 
n_exc.PortNotFound(port_id=id)
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall 
PortNotFound: Port 0b9ce706-1197-460d-986e-98870bc81ce7 could not be found
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1309208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309187] Re: neutron should catch 404 in notify code from nova

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1309187

Title:
  neutron should catch 404 in notify code from nova

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  a-467f-a10c-96dba7772e46', 'name': 'network-vif-unplugged', 'server_uuid': 
u'dddcaacc-87f0-4a36-b4b1-29cb915f75ba'}]
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova Traceback (most recent 
call last):
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/neutron/neutron/notifiers/nova.py, line 188, in send_events
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova batched_events)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/v1_1/contrib/server_external_events.py,
 line 39, in create
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova return_raw=True)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/base.py, line 152, in _create
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova _resp, body = 
self.api.client.post(url, body=body)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/client.py, line 314, in post
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova return 
self._cs_request(url, 'POST', **kwargs)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/client.py, line 288, in _cs_request
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova **kwargs)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/client.py, line 270, in _time_request
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova resp, body = 
self.request(url, method, **kwargs)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/client.py, line 264, in request
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova raise 
exceptions.from_response(resp, body, url, method)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova NotFound: No instances 
found for any event (HTTP 404) (Request-ID: 
req-142a25e1-1011-435e-837d-8ed9cc576cbc)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1309187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308544] Re: libvirt: Trying to delete a non-existing vif raises an exception

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308544

Title:
  libvirt: Trying to delete a non-existing vif raises an exception

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  If an instance fails during its network creation (for example if the
  network-vif-plugged event doesn't arrive in time) a subsequent delete
  will also fail when it tries to delete the vif, leaving the instance
  in an Error(deleting) state.

  This can be avoided by including the --if-exists option in the
  ovs-vsctl command.
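
  A hedged sketch of that suggestion (illustrative, not the exact nova patch):
  with --if-exists, deleting a vif that was never plugged is a no-op instead
  of an error.

  from nova import utils

  def delete_ovs_vif_port(bridge, dev):
      utils.execute('ovs-vsctl', '--timeout=120', '--', '--if-exists',
                    'del-port', bridge, dev, run_as_root=True)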

  Example of stack trace:

   2014-04-16 12:28:51.949 AUDIT nova.compute.manager 
[req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Terminating instance
  2014-04-16 12:28:52.309 ERROR nova.virt.libvirt.driver [-] [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] During wait destroy, instance disappeared.
  2014-04-16 12:28:52.407 ERROR nova.network.linux_net 
[req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] Unable to execute 
['ovs-vsctl', '--timeout=120', 'del-port', 'br-int', u'qvo67a96e96-10']. 
Exception: Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl --timeout=120 
del-port br-int qvo67a96e96-10
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-vsctl: no port named qvo67a96e96-10\n'
  2014-04-16 12:28:52.573 ERROR nova.compute.manager 
[req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Setting instance vm_state to ERROR
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Traceback (most recent call last):
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2261, in do_terminate_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] self._delete_instance(context, 
instance, bdms, quotas)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File /mnt/stack/nova/nova/hooks.py, 
line 103, in inner
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] rv = f(*args, **kwargs)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2231, in _delete_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] quotas.rollback()
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] six.reraise(self.type_, self.value, 
self.tb)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2203, in _delete_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] self._shutdown_instance(context, 
db_inst, bdms)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2145, in _shutdown_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] requested_networks)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] six.reraise(self.type_, self.value, 
self.tb)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2135, in _shutdown_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] block_device_info)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/virt/libvirt/driver.py, line 955, in destroy
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] destroy_disks)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/virt/libvirt/driver.py, line 991, in cleanup
  

[Yahoo-eng-team] [Bug 1336624] Re: Libvirt driver cannot avoid ovs_hybrid

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336624

Title:
  Libvirt driver cannot avoid ovs_hybrid

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  This bug is related to Nova and Neutron.

  Libvirt driver cannot avoid ovs_hybrid though if NoopFirewallDriver is
  selected, while using LibvirtGenericVIFDriver at Nova and ML2+OVS at
  Neutron.

  Since Nova follows binding:vif_detail from Neutron [1], that is
  intended behavior. OVS mech driver in Neutron always return the
  following vif_detail:

    vif_details: {
        "port_filter": true,
        "ovs_hybrid_plug": true,
    }

  So, Neutron is right place to configure to avoid ovs_hybrid plugging.
  I think we can set ovs_hybrid_plug=False in OVS mech driver if
  security_group is disabled.

  [1] https://review.openstack.org/#/c/83190/
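
  A minimal sketch of the proposed behaviour (illustrative only, not the
  merged patch):

    def build_vif_details(sg_enabled):
        # Only request hybrid (veth + Linux bridge) plugging when iptables
        # based security groups actually need it; with NoopFirewallDriver
        # the port can be plugged straight into the OVS integration bridge.
        return {
            'port_filter': True,
            'ovs_hybrid_plug': sg_enabled,
        }

    build_vif_details(sg_enabled=False)  # {'port_filter': True, 'ovs_hybrid_plug': False}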

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1336624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336107] Re: Cisco n1kv plugin missing subnet check on network delete

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336107

Title:
  Cisco n1kv plugin missing subnet check on network delete

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  N1kv plugin should raise an exception during network delete if there
  is a subnet that is tied to that network.
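
  A minimal sketch of the missing check (the helper name is illustrative,
  and a real implementation would raise a proper Neutron exception rather
  than RuntimeError):

    def ensure_no_subnets(plugin, context, net_id):
        # Mirror the db_base_plugin behaviour: refuse to delete a network
        # that still has subnets attached to it.
        subnets = plugin.get_subnets(context,
                                     filters={'network_id': [net_id]})
        if subnets:
            raise RuntimeError('Network %s still has %d subnet(s); refusing '
                               'to delete it' % (net_id, len(subnets)))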

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1336107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352635] Re: Allow Cisco ML2 driver to use the upstream ncclient

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352635

Title:
  Allow Cisco ML2 driver to use the upstream ncclient

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Currently, the Cisco ML2 driver relies on a specially patched and
  maintained custom version of the ncclient 3rd party library for
  communication with various switches.

  Changes have been submitted to the upstream ncclient now so that there
  is no need to maintain a separate version of the ncclient anymore.

  To take advantage of the new ncclient version, a small change needs to
  be made to the Cisco ML2 driver, so that it can detect whether the old
  (custom) ncclient is installed, or whether the new upstream ncclient
  is used.

  Installation and maintenance will be simplified by not requiring a
  custom version of the ncclient.
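
  A minimal sketch of one way the driver could cope with both versions (the
  fallback-on-TypeError approach here is an assumption for illustration, not
  necessarily what the actual change does):

    from ncclient import manager

    def connect_nexus(host, port, username, password):
        try:
            # Upstream ncclient selects the Nexus handler via device_params.
            return manager.connect(host=host, port=port, username=username,
                                   password=password,
                                   device_params={'name': 'nexus'})
        except TypeError:
            # The old custom-patched ncclient does not know device_params;
            # fall back to the legacy call signature.
            return manager.connect(host=host, port=port, username=username,
                                   password=password)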

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324218] Re: Empty Daily Report Page

2015-03-12 Thread Alan Pevec
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1324218

Title:
  Empty Daily Report Page

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  New
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  Is the Daily Report tab implemented yet?
  OpenStack Icehouse on CentOS 6.5 (2.6.32-431.11.2.el6.x86_64 #1 SMP)

  I can see and select both resource usage tabs; however, the Daily Report page
has been empty for over 2 weeks.
  I see no errors in apache, keystone or ceilometer logs on the cloud 
controller node.
  On the compute nodes, I see many of these:
  2014-05-27 09:17:21.625 18382 WARNING ceilometer.transformer.conversions [-]
dropping sample with no predecessor: (<ceilometer.sample.Sample object at
0x24ce810>,)

  Here are package versions of relevant RPMs on the controller node:
  =
  Name        : openstack-ceilometer-collector     Relocations: (not relocatable)
  Version     : 2014.1                             Vendor: Fedora Project
  Release     : 2.el6                              Build Date: Wed 07 May 2014 11:33:07 AM PDT
  Install Date: Mon 12 May 2014 06:07:35 PM PDT    Build Host: buildvm-16.phx2.fedoraproject.org

  Name        : openstack-ceilometer-api           Relocations: (not relocatable)
  Version     : 2014.1                             Vendor: Fedora Project
  Release     : 2.el6                              Build Date: Wed 07 May 2014 11:33:07 AM PDT
  Install Date: Mon 12 May 2014 06:07:35 PM PDT    Build Host: buildvm-16.phx2.fedoraproject.org

  Name        : python-ceilometerclient            Relocations: (not relocatable)
  Version     : 1.0.8                              Vendor: Fedora Project
  Release     : 1.el6                              Build Date: Mon 16 Dec 2013 11:20:29 AM PST
  Install Date: Mon 05 May 2014 12:23:26 PM PDT    Build Host: buildvm-25.phx2.fedoraproject.org

  Name        : openstack-ceilometer-notification  Relocations: (not relocatable)
  Version     : 2014.1                             Vendor: Fedora Project
  Release     : 2.el6                              Build Date: Wed 07 May 2014 11:33:07 AM PDT
  Install Date: Mon 12 May 2014 06:07:26 PM PDT    Build Host: buildvm-16.phx2.fedoraproject.org

  Name        : openstack-ceilometer-central       Relocations: (not relocatable)
  Version     : 2014.1                             Vendor: Fedora Project
  Release     : 2.el6                              Build Date: Wed 07 May 2014 11:33:07 AM PDT
  Install Date: Mon 12 May 2014 06:07:35 PM PDT    Build Host: buildvm-16.phx2.fedoraproject.org

  Name        : openstack-ceilometer-common        Relocations: (not relocatable)
  Version     : 2014.1                             Vendor: Fedora Project
  Release     : 2.el6                              Build Date: Wed 07 May 2014 11:33:07 AM PDT
  Install Date: Mon 12 May 2014 06:07:26 PM PDT    Build Host: buildvm-16.phx2.fedoraproject.org

  Name        : openstack-ceilometer-alarm         Relocations: (not relocatable)
  Version     : 2014.1                             Vendor: Fedora Project
  Release     : 2.el6                              Build Date: Wed 07 May 2014 11:33:07 AM PDT
  Install Date: Mon 12 May 2014 06:07:35 PM PDT    Build Host: buildvm-16.phx2.fedoraproject.org

  Name        : python-ceilometer                  Relocations: (not relocatable)
  Version     : 2014.1                             Vendor: Fedora Project
  Release     : 2.el6                              Build Date: Wed 07 May 2014 11:33:07 AM PDT
  Install Date: Mon 12 May 2014 06:07:26 PM PDT    Build Host: buildvm-16.phx2.fedoraproject.org

  Name        : mongodb                            Relocations: (not relocatable)
  Version     : 2.4.6                              Vendor: Fedora Project
  Release     : 1.el6                              Build Date: Thu 19 Sep 2013 12:24:03 PM PDT
  Install Date: Mon 05 May 2014 05:45:00 PM PDT    Build Host: buildvm-09.phx2.fedoraproject.org
  =

  
  Relevant RPMs on the compute nodes:
  =
  Name        : python-ceilometer                  Relocations: (not relocatable)
  Version     : 2014.1                             Vendor: Fedora Project
  Release     : 2.el6                              Build Date: Wed 07 May 2014 11:33:07 AM PDT
  Install Date: Mon 12 May 2014 09:47:30 AM PDT    Build Host: buildvm-16.phx2.fedoraproject.org

  Name        : python-ceilometerclient            Relocations: (not relocatable)
  Version     : 1.0.8                              Vendor: Fedora Project
  Release     : 1.el6                              Build Date: Mon 16 Dec 2013 11:20:29 AM PST
  Install Date: Mon 12 May 2014 09:47:28 AM PDT    Build Host: buildvm-25.phx2.fedoraproject.org

  Name: 

[Yahoo-eng-team] [Bug 1334647] Re: Nova api service doesn't handle SIGHUP signal properly

2015-03-12 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334647

Title:
  Nova api service doesn't handle SIGHUP signal properly

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  New
Status in Cinder juno series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in The Oslo library incubator:
  Invalid

Bug description:
  When SIGHUP signal is send to nova-api service, it doesn't complete
  processing of all pending requests before terminating all the
  processes.

  Steps to reproduce:

  1. Run nova-api service as a daemon.
  2. Send SIGHUP signal to nova-api service.
 kill -1 parent_process_id_of_nova_api

  After getting the SIGHUP signal, all the nova-api processes stop instantly
without completing existing requests, which might cause failures.
  Ideally, after getting the SIGHUP signal, the nova-api processes should stop
accepting new requests and wait for existing requests to complete before
terminating and restarting all nova-api processes.
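
  A minimal sketch of the desired behaviour using only the standard library
  (the real services go through oslo's service launcher; handle() and
  drain_and_exit() below are placeholders passed in by the caller):

    import signal
    import socket

    _running = True

    def _sighup(signum, frame):
        # Stop accepting new requests; let in-flight requests finish before
        # the workers exit and are restarted.
        global _running
        _running = False

    signal.signal(signal.SIGHUP, _sighup)

    def serve(listener, handle, drain_and_exit):
        listener.settimeout(1.0)
        while _running:
            try:
                conn, _addr = listener.accept()
            except socket.timeout:
                continue
            handle(conn)          # existing request runs to completion
        drain_and_exit()          # only reached once current work is done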

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1334647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229819] Re: Unit Test fixes with the new router dashboard

2015-03-12 Thread Alan Pevec
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1229819

Title:
  Unit Test fixes with the new router dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  New

Bug description:
  Currently the new dashboard router doesn't have its unit test run by 
default. This needs to be fixed.
  Additionally, many existing unit tests, such as the network create tests in
the project and admin dashboards and the instance create tests, have been
changed to accommodate the cisco n1k plugin only when that plugin is in use.
The existing solution is cumbersome: a check is done against a config variable
in local_settings. A better approach is needed to ensure these tests are run
consistently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1229819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302701] Re: Neutron refuses to delete instance associated with multiple floating addresses

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302701

Title:
  Neutron refuses to delete instance associated with multiple floating
  addresses

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Neutron does not permit one to delete an instance that is associated
  with multiple floating ip addresses.

  Create an instance:

nova boot ... --nic net-id=3ff9b903-e921-4752-a26f-cba8f1433992
  --key-name lars test0

  Add a second fixed ip address:

nova add-fixed-ip test0 3ff9b903-e921-4752-a26f-cba8f1433992
nova show test0  | grep network
| net0 network | 10.0.0.4, 10.0.0.5 
 |

  
  Associate a floating ip address with each fixed address:

$ nova add-floating-ip --fixed-address 10.0.0.5 test0 192.168.200.7
$ nova add-floating-ip --fixed-address 10.0.0.4 test0 192.168.200.6

  Now attempt to delete the instance:

$ nova delete test0
$ nova list | grep test0
| c36e277f-e354-4856-8f5b-9603e6c76b2e | test0 | ERROR  ...

  Neutron server fails with:

ERROR neutron.api.v2.resource [-] delete failed
TRACE neutron.api.v2.resource Traceback (most recent call last):
TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py, line 84, in 
resource
TRACE neutron.api.v2.resource result = method(request=request, **args)
TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py, line 432, in delete
TRACE neutron.api.v2.resource obj_deleter(request.context, id, **kwargs)
TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/ovs_neutron_plugin.py,
 line 631, in delete_port
TRACE neutron.api.v2.resource self.disassociate_floatingips(context, id)
TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/l3_db.py, line 751, in 
disassociate_floatingips
TRACE neutron.api.v2.resource % port_id)
TRACE neutron.api.v2.resource Exception: Multiple floating IPs found for 
port b591c76b-9d72-41d7-a884-2959a1f6b7fc
TRACE neutron.api.v2.resource 

  Disassociating one of the floating ips allows the delete to complete
  successfully:

nova floating-ip-disassociate test0 192.168.200.6
nova delete test0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301432] Re: ODL ML2 driver doesn't warn if the url parameter is not set

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301432

Title:
  ODL ML2 driver doesn't warn if the url parameter is not set

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  If you don't define the url parameter in the [ml2_odl] section of
  the ML2 configuration file, the ODL driver will run happily but no
  request will be made to the ODL service. See
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mechanism_odl.py?id=a57dc2c30ab78ba74cfc51b8fdb457d3374cc87d#n313

  The parameter should have a sane default value (e.g.
  http://127.0.0.1:8080/controller/nb/v2/neutron) and/or a message
  should be logged to warn the deployer.
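
  A minimal sketch of the suggestion, assuming the driver registers its
  options with oslo.config as the rest of the tree does (option name and
  default taken from above; the wiring is illustrative):

    import logging

    from oslo.config import cfg

    LOG = logging.getLogger(__name__)

    odl_opts = [
        cfg.StrOpt('url',
                   default='http://127.0.0.1:8080/controller/nb/v2/neutron',
                   help='HTTP URL of the OpenDaylight REST interface'),
    ]
    cfg.CONF.register_opts(odl_opts, 'ml2_odl')

    def check_odl_url():
        # Warn the deployer instead of silently skipping every request.
        if not cfg.CONF.ml2_odl.url:
            LOG.warning('[ml2_odl] url is not set; no requests will be sent '
                        'to the OpenDaylight controller')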

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244860] Re: Two DHCP ports on same network due to cleanup failure

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244860

Title:
  Two DHCP ports on same network due to cleanup failure

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  On a network, neutron port-list --network_id net-id --device_owner
  'network:dhcp' shows there are two ports.  This is checked from the
  mysql database:

  mysql> select * from ports where tenant_id='abcd' and device_owner='network:dhcp'
         and network_id='7d2e3d47-396d-4867-a2b0-0311465a8454';

  Port 1:
    tenant_id      : abcd
    id             : 3d6a7627-6af9-4fb6-9cf6-591c1373d349
    name           :
    network_id     : 7d2e3d47-396d-4867-a2b0-0311465a8454
    mac_address    : fa:16:3e:60:83:3f
    admin_state_up : 1
    status         : ACTIVE
    device_id      : dhcp4fff1f08-9922-5c44-b6f8-fd9780f48512-7d2e3d47-396d-4867-a2b0-0311465a8454
    device_owner   : network:dhcp

  Port 2:
    tenant_id      : abcd
    id             : a4c0eb19-407e-4970-90a8-0128259fb048
    name           :
    network_id     : 7d2e3d47-396d-4867-a2b0-0311465a8454
    mac_address    : fa:16:3e:e1:1b:8f
    admin_state_up : 1
    status         : ACTIVE
    device_id      : dhcpce80c236-6a89-571d-970b-a1d4bb787827-7d2e3d47-396d-4867-a2b0-0311465a8454
    device_owner   : network:dhcp

  2 rows in set (0.00 sec)

  However, the neutron dhcp-agent-list-hosting-net
  7d2e3d47-396d-4867-a2b0-0311465a8454 shows only one DHCP-server
  running.

  This problem is observed in an environment with 4 nodes running dhcp-
  agents.  The neutron API server and the DHCP agents are NOT running on
  the same node.

  What happened is that error occurred when the DHCP server is being
  moved from DHCP-agentA running on nodeA to DHCP-agentB running on
  nodeB.  The sequence is

    neutron dhcp-agent-network-remove agentA net-id (1)
    neutron dhcp-agent-network-add agentB net-id  (2)

  Right before or while step 1 was being done, nodeA was rebooted,
  so the DHCP port was never removed.  When nodeA came back and the DHCP
  agent restarted, it did not unplug the dhcp port device.  The
  DHCP agent also failed to make the release_dhcp_port RPC call to the
  API server to have the port deleted from mysql.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301838] Re: SG rule should not allow an ICMP Policy when icmp-code alone is provided.

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301838

Title:
  SG rule should not allow an ICMP Policy when icmp-code alone is
  provided.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  When we add a Security Group ICMP rule with icmp-type/code, the rule
  gets added properly and it translates to an appropriate firewall
  policy.

  It was noticed that when adding a security group rule, without
  providing the icmp-type (port-range-min) and only providing the icmp-
  code (port-range-max) no error is reported, but there is a mismatch
  with the iptables rule (a generic icmp policy gets added)

  Example:
  neutron --debug security-group-rule-create 
4b3a5866-8cdd-4e15-b51b-9523ede2f6f8 --protocol icmp --direction ingress 
--ethertype ipv4 --port-range-max 4

  translates to a iptables rule like
  -A neutron-openvswi-i49e920d5-c -p icmp -j RETURN

  The Security Group rules listing in Horizon/neutron-client displays the icmp 
rule with the port-range as None-icmp-code.
  This could be misleading and is inconsistent.
  It would be good if validation is done on the input to check that 
--port-range-max is passed when --port-range-min is provided so that SG 
Group rules are consistent with the iptable rules that are added.

  Please note: iptables does not allow us to add an icmp rule 
  when an icmp-type is not provided and only icmp-code is provided.
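
  A minimal validation sketch (a real change would raise a proper Neutron
  exception rather than ValueError):

    def validate_icmp_rule(port_range_min, port_range_max):
        # For ICMP rules, port_range_min carries the ICMP type and
        # port_range_max the ICMP code.  iptables cannot express a code
        # without a type, so reject that combination up front instead of
        # silently installing a generic ICMP policy.
        if port_range_max is not None and port_range_min is None:
            raise ValueError('ICMP code (port-range-max) requires an ICMP '
                             'type (port-range-min)')

    validate_icmp_rule(None, 4)   # raises ValueError, matching the example above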

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1207402] Re: neutron should automatically stamp the database version when it is deployed

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1207402

Title:
  neutron should automatically stamp the database version when it is
  deployed

Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  When neutron automatically deploys the database schema it does not
  stamp the version.  According to neutron/db/migration/README this
  means that I have to stamp it manually.

  Neutron should automatically stamp the version to head when deploys
  the schema the first time.  That way I don't have to determine what
  version I'm running so that I can do a schema migration.
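
  A minimal sketch of the idea using alembic directly (the wiring through
  neutron-db-manage differs; this only illustrates "create the schema, then
  stamp head"; create_all_tables is a placeholder for the existing code path):

    from alembic import command
    from alembic.config import Config

    def deploy_schema_and_stamp(alembic_ini_path, create_all_tables):
        cfg = Config(alembic_ini_path)   # neutron's migration config
        create_all_tables()              # existing code that creates the schema
        # Record that the freshly created schema already matches the newest
        # migration, so a later upgrade starts from the right revision.
        command.stamp(cfg, 'head')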

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1207402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278550] Re: test_provider_token_expiration_validation tests fails on slow system

2015-03-12 Thread Alan Pevec
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1278550

Title:
  test_provider_token_expiration_validation tests fails on slow system

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  New

Bug description:
  
  The 
keystone.tests.test_token_provider.TestTokenProvider.test_provider_token_expiration_validation
 test fails on a system where it takes a long time to run the tests. The 
problem is that the test is expecting the token to be valid, but with the long 
run time and the shorter default token expiration time, the token turns out to 
not be valid by the time the system gets around to running the test.

  The errors are like:

  in output log:

  {{{
  ...
  Traceback (most recent call last):
File keystone/token/provider.py, line 188, in _is_valid_token
  expires_at = token_data['token']['expires']
  KeyError: 'token'
  ...}}}

  Traceback (most recent call last):
File keystone/tests/test_token_provider.py, line 818, in 
test_provider_token_expiration_validation
  self.token_provider_api._is_valid_token(SAMPLE_V2_TOKEN_VALID))
File keystone/token/provider.py, line 202, in _is_valid_token
  raise exception.TokenNotFound(_('Failed to validate token'))
  TokenNotFound: Failed to validate token

  There's probably another issue indicated here since we shouldn't be
  getting a KeyError in the case of an expired token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1278550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188532] Re: image download process won't terminate after deleting image from dashboard

2015-03-12 Thread Alan Pevec
*** This bug is a duplicate of bug 1398830 ***
https://bugs.launchpad.net/bugs/1398830

** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1188532

Title:
  image download process won't terminate after deleting image from
  dashboard

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance icehouse series:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Version: Grizzly on ubuntu 12.04

  Firstly, create an image with the dashboard, and input the url of the image, 
rather than selecting a local image file.
  Once the status of the image becomes 'saving', delete it from the dashboard.
  The image is removed from the dashboard as expected, however, the downloading 
process continues in the background.
  If you have a look at /var/lib/glance/images/ , you'll find the size of the 
image is growing.

  Sorry, I'm not sure whether it is an issue of glance or horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1188532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254555] Re: tenant does not see network that is routable from tenant-visible network until neutron-server is restarted

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254555

Title:
  tenant does not see network that is routable from tenant-visible
  network until neutron-server is restarted

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in neutron icehouse series:
  New
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  In TripleO We have a setup script[1] that does this as an admin:

  neutron net-create default-net --shared
  neutron subnet-create --ip_version 4 --allocation-pool 
start=10.0.0.2,end=10.255.255.254 --gateway 10.0.0.1 10.0.0.0/8 
$ID_OF_default_net
  neutron router-create default-router
  neutron router-interface-add default-router $ID_OF_10.0.0.0/8_subnet
  neutron net-create ext-net --router:external=True
  neutron subnet-create ext-net $FLOATING_CIDR --disable-dhcp --allocation-pool start=$FLOATING_START,end=$FLOATING_END
  neutron router-gateway-set default-router ext-net

  I would then expect that all users will be able to see ext-net using
  'neutron net-list' and that they will be able to create floating IPs
  on ext-net.

  As of this commit:

  commit c655156b98a0a25568a3745e114a0bae41bc49d1
  Merge: 75ac6c1 c66212c
  Author: Jenkins jenk...@review.openstack.org
  Date:   Sun Nov 24 10:02:04 2013 +

  Merge MidoNet: Added support for the admin_state_up flag

  I see that the ext-net network is not available after I do all of the
  above router/subnet creation. It does become available to tenants as
  soon as I restart neutron-server.

  [1] https://git.openstack.org/cgit/openstack/tripleo-
  incubator/tree/scripts/setup-neutron

  I can reproduce this at will using the TripleO devtest process on real
  hardware. I have not yet reproduced on VMs using the 'devtest'
  workflow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259144] Re: bigswitch: no need to duplicate check on network_delete

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1259144

Title:
  bigswitch: no need to duplicate check on network_delete

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Check if network is in use is not needed in bigswitch plugin: 
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/bigswitch/plugin.py#L587-L599

  as it is done by db_base_plugin, which is called right after:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/bigswitch/plugin.py#L601

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1259144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288923] Re: Failover of a network from one dhcp agent to another breaks DNS

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288923

Title:
  Failover of a network from one dhcp agent to another breaks DNS

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Failing over a network from one dhcp agent to another results in a new
  IP address for the dhcp port.  This breaks DNS for all vms on that
  network.  This can be reproduced by simply doing a neutron dhcp-
  agent-network-remove and then a neutron dhcp-agent-network-add and
  observing that the dhcp port ip address will change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1288923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365226] Re: Add security group to running instance with nexus monolithic plugin throws error

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365226

Title:
  Add security group to running instance with nexus monolithic plugin
  throws error

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Adding a new security group to an existing instance with the cisco
  nexus plugin (provider network) throws the following error.

  # nova add-secgroup   987efb45-1a6c-4a76-a26b-fe9cfdd8073e  test-security
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-8e0b85a2-7499-4420-8a68-3c68aa3ee1c9)

  Looking at the server.log points to an empty host list being passed.
  /var/log/neutron/server.log

  
  2014-09-02 20:10:22.116 52259 INFO neutron.wsgi 
[req-091df3c8-7bdb-42b5-801a-a26a650a451a None] (52259) accepted 
('172.21.9.134', 39434)

  2014-09-02 20:10:22.211 52259 ERROR 
neutron.plugins.cisco.models.virt_phy_sw_v2 [-] Unable to update port '' on 
Nexus switch
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 Traceback (most recent call last):
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py, line 405, in update_port
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 
self._invoke_nexus_for_net_create(context, *create_args)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py, line 263, in _invoke_nexus_for_net_create
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 [network, attachment])
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py, line 148, in _invoke_plugin_per_device
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 return func(*args, **kwargs)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
/usr/lib/python2.7/site-packages/neutron/plugins/cisco/nexus/cisco_nexus_plugin_v2.py, line 79, in create_network
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 raise 
cisco_exc.NexusComputeHostNotConfigured(host=host)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 NexusComputeHostNotConfigured: 
Connection to None is not configured.
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2
  2014-09-02 20:10:22.256 52259 INFO neutron.api.v2.resource 
[req-8ebea742-09b5-416b-820f-69461c496319 None] update failed (client error): 
Connection to None is not configured.
  2014-09-02 20:10:22.257 52259 INFO neutron.wsgi 
[req-8ebea742-09b5-416b-820f-69461c496319 None] 172.21.9.134 - - [02/Sep/2014 
20:10:22] PUT //v2.0/ports/c2e6b716-5c7d-4d23-ab78-ecd2a649469b.json HTTP/1.1 
404 322 0.140213

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396932] Re: The hostname regex pattern doesn't match valid hostnames

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1396932

Title:
  The hostname regex pattern doesn't match valid hostnames

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  The regex used to match hostnames is opinionated, and its opinions
  differ from RFC 1123 and RFC 952.

  The following hostnames are valid but fail to match:

  6952x 
  openstack-1
  a1a
  x.x1x
  example.org.
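
  A sketch of an RFC 1123 style pattern that accepts all of the names above
  (illustrative only, not the pattern that ends up in the tree):

    import re

    LABEL = r'[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?'
    HOSTNAME_RE = re.compile(r'^(?:%s\.)*%s\.?$' % (LABEL, LABEL))

    for name in ('6952x', 'openstack-1', 'a1a', 'x.x1x', 'example.org.'):
        # Each label starts and ends with an alphanumeric, hyphens allowed
        # inside; an optional trailing dot marks a fully qualified name.
        assert HOSTNAME_RE.match(name), name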

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1396932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367034] Re: NSX: prevents creating multiple networks same vlan but different physical network

2015-03-12 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367034

Title:
  NSX: prevents creating multiple networks same vlan but different
  physical network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  The NSX plugin prevents creating multiple networks with the same VLAN ID
  but different physical networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308544] Re: libvirt: Trying to delete a non-existing vif raises an exception

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308544

Title:
  libvirt: Trying to delete a non-existing vif raises an exception

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  If an instance fails during its network creation (for example, if the
  network-vif-plugged event doesn't arrive in time), a subsequent delete
  will also fail when it tries to delete the vif, leaving the instance
  in an Error(deleting) state.

  This can be avoided by including the --if-exists option in the
  ovs-vsctl command.

  Example of stack trace:

   2014-04-16 12:28:51.949 AUDIT nova.compute.manager 
[req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Terminating instance
  2014-04-16 12:28:52.309 ERROR nova.virt.libvirt.driver [-] [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] During wait destroy, instance disappeared.
  2014-04-16 12:28:52.407 ERROR nova.network.linux_net 
[req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] Unable to execute 
['ovs-vsctl', '--timeout=120', 'del-port', 'br-int', u'qvo67a96e96-10']. 
Exception: Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl --timeout=120 
del-port br-int qvo67a96e96-10
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-vsctl: no port named qvo67a96e96-10\n'
  2014-04-16 12:28:52.573 ERROR nova.compute.manager 
[req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Setting instance vm_state to ERROR
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Traceback (most recent call last):
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2261, in do_terminate_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] self._delete_instance(context, 
instance, bdms, quotas)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File /mnt/stack/nova/nova/hooks.py, 
line 103, in inner
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] rv = f(*args, **kwargs)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2231, in _delete_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] quotas.rollback()
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] six.reraise(self.type_, self.value, 
self.tb)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2203, in _delete_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] self._shutdown_instance(context, 
db_inst, bdms)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2145, in _shutdown_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] requested_networks)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] six.reraise(self.type_, self.value, 
self.tb)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/compute/manager.py, line 2135, in _shutdown_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] block_device_info)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/virt/libvirt/driver.py, line 955, in destroy
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] destroy_disks)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 
3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File 
/mnt/stack/nova/nova/virt/libvirt/driver.py, line 991, in cleanup
  

[Yahoo-eng-team] [Bug 1398566] Re: REST API relies on policies being initialized after RESOURCE_ATTRIBUTE_MAP is processed, does nothing to ensure it.

2015-03-12 Thread Alan Pevec
** Changed in: neutron/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398566

Title:
  REST API relies on policies being initialized after
  RESOURCE_ATTRIBUTE_MAP is processed, does nothing to ensure it.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  A race condition exists where policies may be loaded and processed
  before the neutron extensions are loaded and the
  RESOURCE_ATTRIBUTE_MAP is populated. This causes problems in system
  behaviour that depends on neutron-specific policy checks. Policies are
  loaded on demand, and if the call instigating the loading of
  policies happens prematurely, certain neutron-specific
  policy checks are not set up properly because the required mappings from
  policies to check implementations have not been established.

  Related bugs:

  https://bugs.launchpad.net/neutron/+bug/1254555
  https://bugs.launchpad.net/neutron/+bug/1251982
  https://bugs.launchpad.net/neutron/+bug/1280738

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300788] Re: VMware: exceptions when SOAP reply message has no body

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300788

Title:
  VMware: exceptions when SOAP reply message has no body

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  Minesweeper logs have the following:

  2014-03-26 11:37:09.753 CRITICAL nova.virt.vmwareapi.driver 
[req-3a274ea6-e731-4bbc-a7fc-e2877a57a7cb MultipleCreateTestJSON-692822675 
MultipleCreateTestJSON-47510170] In vmwareapi: _call_method 
(session=52eb4a1e-04de-de0d-5c6a-746a430570a2)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver Traceback 
(most recent call last):
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 856, in _call_method
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver return 
temp_module(*args, **kwargs)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver   File 
/opt/stack/nova/nova/virt/vmwareapi/vim.py, line 196, in vim_request_handler
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver raise 
error_util.VimFaultException(fault_list, excep)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver 
VimFaultException: Server raised fault: '
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver SOAP body not 
found
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver 
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver while parsing 
SOAP envelope
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver at line 1, 
column 38
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver 
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver while parsing 
HTTP request before method was determined
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver at line 1, 
column 0'
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver 
  2014-03-26 11:37:09.754 WARNING nova.virt.vmwareapi.vmops 
[req-3a274ea6-e731-4bbc-a7fc-e2877a57a7cb MultipleCreateTestJSON-692822675 
MultipleCreateTestJSON-47510170] In vmwareapi:vmops:_destroy_instance, got this 
exception while un-registering the VM: Server raised fault: '
  SOAP body not found

  while parsing SOAP envelope
  at line 1, column 38

  while parsing HTTP request before method was determined
  at line 1, column 0'

  There are cases where suds returns a message with no body.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400966] Re: [OSSA-2014-041] Glance allows users to download and delete any file in glance-api server (CVE-2014-9493)

2015-03-12 Thread Alan Pevec
** Changed in: glance/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1400966

Title:
  [OSSA-2014-041] Glance allows users to download and delete any file in
  glance-api server (CVE-2014-9493)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Released
Status in Glance juno series:
  Fix Released
Status in Ansible playbooks for deploying OpenStack:
  Fix Committed
Status in openstack-ansible icehouse series:
  Fix Released
Status in openstack-ansible juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  By updating an image's location through the image-update API, users can
download any file for which glance-api has read permission.
  A file for which glance-api has write permission will be deleted when
users delete the image.

  
  For example:
  When users specify '/etc/passwd' as locations value of an image user can get 
the file by image download.

  When locations of an image is set with 'file:///path/to/glance-
  api.conf' the conf will be deleted when users delete the image.

  How to recreate the bug:
  download files:
   - set show_multiple_locations True in glance-api.conf
   - create a new image
   - set locations of the image's property a path you want to get such as 
file:///etc/passwd.
   - download the image

  delete files:
   - set show_multiple_locations True in glance-api.conf
   - create a new image
   - set locations of the image's property a path you want to delete such as 
file:///path/to/glance-api.conf
   - delete the image

  I found this bug in 2014.2 (742c898956d655affa7351505c8a3a5c72881eae).

  What a big A RE RE!!

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1400966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255594] Re: neutron glue code creates tokens excessively, still

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255594

Title:
  neutron glue code creates tokens excessively, still

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Reusing keystone tokens improves OpenStack efficiency and performance.
  For operations that require a token, reusing a token avoids the
  overhead of a request to keystone. For operations that validate
  tokens, reused tokens improve the hit rate of authentication caches
  (e.g., in keystoneclient.middleware). In both cases, the load on the
  keystone server is reduced, thus improving the response time for
  requests that do require new tokens or token validation. Finally,
  since token validation is so CPU intensive, improved auth cache hit
  rate can significantly reduce CPU utilization by keystone.

  In spite of the progress made by
  
http://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5,
  which was committed to address bug #1250580, the neutronv2 network API
  code in nova-compute creates more tokens than necessary, to the point
  where performance degradation is measurable when creating a large
  number of instances.

  Prior to the aforementioned change, nova-compute created a new admin
  token for accessing neutron virtually every time a call was made into
  nova.network.neutronv2.  With aforementioned change, a token is
  created once per thread (i.e., green thread); thus multiple calls
  into neutronv2 can share a token. For example, during instance
  creation, a single token is created then reused 6 times; prior to the
  patch, 7 tokens would have been created by nova.network.neutronv2 per
  nova boot. However, this scheme is far from optimal. Given that
  tokens, by default, have a shelf life of 24H, a single token could be
  shared by _all_ nova.network.neutronv2 calls in a 24-hour period.
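
  A minimal sketch of the kind of sharing being argued for (a process-wide
  cached token instead of one per green thread; the fetch callable stands in
  for the actual keystone authentication):

    import time

    class CachedAdminToken(object):
        def __init__(self, fetch, lifetime=24 * 3600, slack=60):
            self._fetch = fetch        # callable returning a fresh token string
            self._lifetime = lifetime
            self._slack = slack
            self._token = None
            self._expires = 0.0

        def get(self):
            # Hand back the same token to every caller until it is about to
            # expire, so one keystone request serves many neutron calls.
            now = time.time()
            if self._token is None or now > self._expires - self._slack:
                self._token = self._fetch()
                self._expires = now + self._lifetime
            return self._token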

  The performance impact of sharing a single neutronv2 admin token is
  easy to observe when creating a large number of instances in parallel.
  In this example, I boot 40 instances in parallel, ping them, then
  delete them. I'm using a 24-core machine with enough RAM and disk
  throughput to never become bottlenecks. Note that I'm running with
  multiple keystone-all worker processes
  (https://review.openstack.org/#/c/42967/). Using the per-thread
  tokens, the last instance becomes active after 40s and the last
  instance is deleted after 65s. Using a single shared token, the last
  instance becomes active after 32s and the last instance is deleted
  after 60s. During the token-per-thread run, keystone-all processes had
  900% CPU utilization (i.e., 9 x 100% of a single core) for the first
  ~10s, then stayed in the 50-100% range for the rest of the run. In the
  single token run, the keystone-all processes never exceeded 150% CPU
  utilization.

  I focused on the nova.network.neutronv2 because it created the most
  tokens during my parallel boot experiment. However there are other
  excessive token offenders. After fixing nova.network.neutronv2, the
  leading auth requestors are glance-index and glance-registry due to a
  high auth cache miss rate. I'm not sure who's creating those new
  tokens however.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419043] Re: Conflict on isolated credential setup

2015-03-12 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1419043

Title:
  Conflict on isolated credential setup

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released
Status in Keystone juno series:
  Fix Committed
Status in Tempest:
  Invalid

Bug description:
  From the following run (in progress) -
  http://logs.openstack.org/26/153426/2/gate/gate-tempest-dsvm-
  nova-v21-full/c80cf2c//console.html

  2015-02-06 15:50:27.935 | ==
  2015-02-06 15:50:27.935 | Failed 1 tests - output below:
  2015-02-06 15:50:27.935 | ==
  2015-02-06 15:50:27.935 | 
  2015-02-06 15:50:27.935 | setUpClass 
(tempest.api.compute.admin.test_aggregates_negative.AggregatesAdminNegativeTestJSON)
  2015-02-06 15:50:27.936 | 
---
  2015-02-06 15:50:27.936 | 
  2015-02-06 15:50:27.936 | Captured traceback:
  2015-02-06 15:50:27.936 | ~~~
  2015-02-06 15:50:27.936 | Traceback (most recent call last):
  2015-02-06 15:50:27.936 |   File tempest/test.py, line 273, in 
setUpClass
  2015-02-06 15:50:27.936 | cls.resource_setup()
  2015-02-06 15:50:27.936 |   File 
tempest/api/compute/admin/test_aggregates_negative.py, line 31, in 
resource_setup
  2015-02-06 15:50:27.936 | super(AggregatesAdminNegativeTestJSON, 
cls).resource_setup()
  2015-02-06 15:50:27.936 |   File tempest/api/compute/base.py, line 341, 
in resource_setup
  2015-02-06 15:50:27.936 | super(BaseComputeAdminTest, 
cls).resource_setup()
  2015-02-06 15:50:27.936 |   File tempest/api/compute/base.py, line 44, 
in resource_setup
  2015-02-06 15:50:27.937 | cls.os = cls.get_client_manager()
  2015-02-06 15:50:27.937 |   File tempest/test.py, line 407, in 
get_client_manager
  2015-02-06 15:50:27.937 | creds = 
cls.isolated_creds.get_primary_creds()
  2015-02-06 15:50:27.937 |   File tempest/common/isolated_creds.py, line 
273, in get_primary_creds
  2015-02-06 15:50:27.937 | return self.get_credentials('primary')
  2015-02-06 15:50:27.937 |   File tempest/common/isolated_creds.py, line 
257, in get_credentials
  2015-02-06 15:50:27.937 | credentials = 
self._create_creds(admin=is_admin)
  2015-02-06 15:50:27.937 |   File tempest/common/isolated_creds.py, line 
119, in _create_creds
  2015-02-06 15:50:27.937 | tenant, email)
  2015-02-06 15:50:27.937 |   File tempest/common/isolated_creds.py, line 
65, in _create_user
  2015-02-06 15:50:27.937 | username, password, tenant['id'], email)
  2015-02-06 15:50:27.937 |   File 
tempest/services/identity/json/identity_client.py, line 168, in create_user
  2015-02-06 15:50:27.938 | resp, body = self.post('users', post_body)
  2015-02-06 15:50:27.938 |   File 
/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 169, in post
  2015-02-06 15:50:27.938 | return self.request('POST', url, 
extra_headers, headers, body)
  2015-02-06 15:50:27.938 |   File tempest/common/service_client.py, line 
69, in request
  2015-02-06 15:50:27.938 | raise exceptions.Conflict(ex)
  2015-02-06 15:50:27.938 | Conflict: An object with that identifier 
already exists
  2015-02-06 15:50:27.938 | Details: An object with that identifier already 
exists
  2015-02-06 15:50:27.938 | Details: {u'title': u'Conflict', u'message': 
u'Conflict occurred attempting to store role - Duplicate Entry', u'code': 409}
  2015-02-06 15:50:27.938 | 

  
  Somehow the isolated credential calls are failing.

  It appears that this might be a race when creating identical roles for
  multiple users -
  http://logs.openstack.org/26/153426/2/gate/gate-tempest-dsvm-nova-v21-full/c80cf2c//logs/apache/keystone.txt.gz#_2015-02-06_15_27_17_988

  That's about the time of the failure.
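
  A minimal sketch of how the credential setup could tolerate such a race;
  the client methods and import path are assumptions for illustration, not
  the actual tempest fix:

  # Sketch only: the identity client methods are assumed, not the real API.
  from tempest_lib import exceptions as lib_exc

  def get_or_create_role(identity_client, role_name):
      try:
          return identity_client.create_role(role_name)
      except lib_exc.Conflict:
          # Another worker created the same role first; reuse it instead
          # of failing the whole test class setup.
          return next(r for r in identity_client.list_roles()
                      if r['name'] == role_name)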

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1419043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407055] Re: All unit test jobs failing due to timezone change (test_timezone_offset_is_displayed)

2015-03-12 Thread Alan Pevec
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1407055

Title:
  All unit test jobs failing due to timezone change
  (test_timezone_offset_is_displayed)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Committed

Bug description:
  2015-01-02 06:09:33.597 | 
==
  2015-01-02 06:09:33.597 | FAIL: test_timezone_offset_is_displayed 
(openstack_dashboard.dashboards.settings.user.tests.UserSettingsTest)
  2015-01-02 06:09:33.597 | 
--
  2015-01-02 06:09:33.597 | Traceback (most recent call last):
  2015-01-02 06:09:33.598 |   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/settings/user/tests.py,
 line 30, in test_timezone_offset_is_displayed
  2015-01-02 06:09:33.598 | self.assertContains(res, UTC +04:00: Russia 
(Moscow) Time)
  2015-01-02 06:09:33.598 |   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/testcases.py,
 line 351, in assertContains
  2015-01-02 06:09:33.598 | msg_prefix + Couldn't find %s in response % 
text_repr)
  2015-01-02 06:09:33.598 | AssertionError: False is not true : Couldn't find 
'UTC +04:00: Russia (Moscow) Time' in response
  2015-01-02 06:09:33.598 | uFalse is not true : Couldn't find 'UTC 
+04:00: Russia (Moscow) Time' in response = self._formatMessage(uFalse is not 
true : Couldn't find 'UTC +04:00: Russia (Moscow) Time' in response, %s is 
not true % safe_repr(False))
  2015-01-02 06:09:33.598 |   raise self.failureException(uFalse is not true 
: Couldn't find 'UTC +04:00: Russia (Moscow) Time' in response)

  
  Noticed in master and Icehouse jobs; I assume Juno is affected too. The
  timezone appears to be listed as UTC +03:00 now.
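
  One way to make such an assertion robust against timezone-database changes
  (a sketch only, assuming pytz is available; this is not the actual Horizon
  fix) is to derive the expected offset dynamically instead of hard-coding it:

  import datetime
  import pytz

  def moscow_offset_label(now=None):
      # Compute the current UTC offset for Europe/Moscow from the tz database.
      now = now or datetime.datetime.utcnow()
      offset = pytz.timezone('Europe/Moscow').utcoffset(now)
      hours, remainder = divmod(int(offset.total_seconds()), 3600)
      return 'UTC %+03d:%02d' % (hours, remainder // 60)

  # e.g. self.assertContains(res, moscow_offset_label()) instead of a literal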

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1407055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249319] Re: evacuate on ceph backed volume fails

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249319

Title:
  evacuate on ceph backed volume fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When using nova evacuate to move an instance from one compute host to
  another, the command silently fails. The issue seems to be that the
  rebuild process builds an incorrect libvirt.xml file that no longer
  correctly references the ceph volume.

  Specifically under the disk section I see:

  <source protocol='rbd' name='volumes/instance-0004_disk'>

  where in the original libvirt.xml the file was:

  <source protocol='rbd' name='volumes/volume-9e1a7835-b780-495c-a88a-4558be784dde'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305186] Re: Fake libvirtError incompatibile with real libvirtError

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1305186

Title:
  Fake libvirtError incompatibile with real libvirtError

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  PROBLEM

  The existing `fakelibvirt.libvirtError` is actually not compatible
  with the real `libvirt.libvirtError` class in that it accepts
  different kwargs in the `__init__`.

  This is a problem because test code may use either class depending on
  whether `libvirt-python` happens to be installed on the box.

  For example, if `libvirt-python` is installed on the box and you try
  to use `libvirtError` class from a test with the `error_code` kwarg,
  you'll get this exception: http://paste.openstack.org/show/75432/

  This code would work on a machine that doesn't have `libvirt-python`
  installed b/c `fakelibvirt.libvirtError` was used.

  POSSIBLE SOLUTION

  Copy over the real `libvirt.libvirtError` class so that it matches
  exactly.

  Create a `make_libvirtError` convenience function so we can still
  create `libvirtErrors` using the nice `error_code` kwarg in the
  constructor (b/c 99% of the time that's what we want).
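
  A minimal sketch of such a helper (the real one in nova's test fakes may
  carry more fields in the error tuple):

  def make_libvirtError(error_class, msg, error_code=None,
                        error_domain=None, error_message=None):
      # Build the exception with the real libvirtError signature (just a
      # message), then attach the error tuple that get_error_code() and
      # friends read from.
      exc = error_class(msg)
      exc.err = (error_code, error_domain, error_message)
      return exc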

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1305186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279172] Re: Unicode encoding error exists in extended Nova API, when the data contain unicode

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279172

Title:
  Unicode encoding error exists in extended Nova API, when the data
  contain unicode

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  We have developed an extended Nova API; the API first queries disks and then
  adds a disk to an instance.
  After the query, if a disk has a non-English name, the unicode value is
  converted to str in nova/api/openstack/wsgi.py line 451
  (node = doc.createTextNode(str(data))), and a unicode encoding error occurs.
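
  A minimal illustration of the failure mode under Python 2 (not the fix that
  merged):

  import six

  disk_name = u'\u78c1\u76d8'        # a non-ASCII disk name
  # str(disk_name) raises UnicodeEncodeError because it implicitly encodes
  # to ASCII; using the text type avoids that:
  safe_text = six.text_type(disk_name)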

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179816] Re: ec2_eror_code mismatch

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1179816

Title:
  ec2_eror_code mismatch

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  It is reporting InstanceNotFound instead of InvalidAssociationID[.]NotFound
  in 
  tests/boto/test_ec2_network.py 

  self.assertBotoError(ec2_codes.client.InvalidAssociationID.NotFound,
   address.disassociate)

  
  AssertionError: Error code (InstanceNotFound) does not match the expected re
  pattern InvalidAssociationID[.]NotFound

  boto: ERROR: 400 Bad RequInvalidAssociationID[.]NotFoundest
  boto: ERROR: <?xml version="1.0"?>
  <Response><Errors><Error><Code>InstanceNotFound</Code><Message>Instance None
  could not be
  found.</Message></Error></Errors><RequestID>req-05235a67-0a70-46b1-a503-91444ab2b88d</RequestID></Response>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1179816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407105] Re: Password Change Doesn't Affirmatively Invalidate Sessions

2015-03-12 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1407105

Title:
  Password Change Doesn't Affirmatively Invalidate Sessions

Status in OpenStack Dashboard (Horizon):
  Triaged
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released
Status in Keystone juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to
  the bug as attachments.

  The password change dialog at /horizon/settings/password/ contains the
  following code:

  <code>
  if user_is_editable:
      try:
          api.keystone.user_update_own_password(request,
                                                data['current_password'],
                                                data['new_password'])
          response = http.HttpResponseRedirect(settings.LOGOUT_URL)
          msg = _("Password changed. Please log in again to continue.")
          utils.add_logout_reason(request, response, msg)
          return response
      except Exception:
          exceptions.handle(request,
                            _('Unable to change password.'))
          return False
  else:
      messages.error(request, _('Changing password is not supported.'))
      return False
  </code>

  There are at least two security concerns here:
  1) Logout is done by means of an HTTP redirect.  Let's say Eve, as MitM, gets 
ahold of Alice's token somehow.  Alice is worried this may have happened, so 
she changes her password.  If Eve suspects that the request is a 
password-change request (which is the most Eve can do, because we're running 
over HTTPS, right?  Right!?), then it's a simple matter to block the redirect 
from ever reaching the client, or the redirect request from hitting the server. 
 From Alice's PoV, something weird happened, but her new password works, so 
she's not bothered.  Meanwhile, Alice's old login ticket continues to work.
  2) Part of the purpose of changing a password is generally to block those who 
might already have the password from continuing to use it.  A password change 
should trigger (insofar as is possible) a purging of all active logins/tokens 
for that user.  That does not happen here.

  Frankly, I'm not the least bit sure if I've thought of the worst-case
  scenario(s) for point #1.  It just strikes me as very strange not to
  aggressively/proactively kill the ticket/token(s), instead relying on
  the client to do so.  Feel free to apply minds smarter and more
  devious than my own!

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1407105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420788] Re: Logging blocks on race condition under eventlet

2015-03-12 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1420788

Title:
  Logging blocks on race condition under eventlet

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released
Status in Keystone juno series:
  Fix Committed

Bug description:
  The wrong initialization order makes logging block on a race condition
  under eventlet.

  The bin/keystone-all launcher initializes logging first and only then does
  the eventlet monkey patching, leaving the logging system with a generic
  thread lock in its critical sections, which leads to indefinite blocking on
  those locks under high load.
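
  A minimal sketch of the ordering that avoids the problem (illustrative, not
  the actual keystone patch): monkey patch before the logging locks are
  created.

  import eventlet
  eventlet.monkey_patch()        # patch thread primitives first ...

  import logging                 # ... then let logging allocate its locks

  logging.basicConfig(level=logging.INFO)
  logging.getLogger(__name__).info('logging set up after monkey_patch()')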

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1420788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297962] Re: [sru] Nova-compute doesnt start

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297962

Title:
  [sru] Nova-compute doesnt start

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Released

Bug description:
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in tworker
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup rv = 
meth(*args,**kwargs)
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/libvirt.py, line 3127, in baselineCPU
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup if ret is 
None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self)
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup libvirtError: 
this function is not supported by the connection driver: virConnectBaselineCPU

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255347] Re: cinder cross_az_attach uses instance AZ value

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255347

Title:
  cinder cross_az_attach uses instance AZ value

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  When checking whether an instance is in the same AZ as a volume, nova uses
  the instance's availability_zone attribute. This isn't the correct way
  to get an instance's AZ; it should use the value obtained by querying
  the aggregate the instance is on.
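
  A sketch of the difference (the helper name comes from
  nova.availability_zones; its exact signature may vary by release, so treat
  this as illustrative rather than the merged fix):

  from nova import availability_zones

  def volume_az_matches(context, instance, volume):
      # instance['availability_zone'] only reflects what was requested at
      # boot; the authoritative value comes from the host's aggregates.
      instance_az = availability_zones.get_host_availability_zone(
          context, instance['host'])
      return volume['availability_zone'] == instance_az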

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314677] Re: nova-cells fails when using JSON file to store cell information

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314677

Title:
  nova-cells fails when using JSON file to store cell information

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Released

Bug description:
  As recommended in http://docs.openstack.org/havana/config-
  reference/content/section_compute-cells.html#cell-config-optional-json
  I'm creating the nova-cells config with the cell information stored in
  a json file. However, when I do this nova-cells fails to start with
  this error in the logs:

  2014-04-29 11:52:05.240 16759 CRITICAL nova [-] __init__() takes exactly 3 
arguments (1 given)
  2014-04-29 11:52:05.240 16759 TRACE nova Traceback (most recent call last):
  2014-04-29 11:52:05.240 16759 TRACE nova   File /usr/bin/nova-cells, line 
10, in module
  2014-04-29 11:52:05.240 16759 TRACE nova sys.exit(main())
  2014-04-29 11:52:05.240 16759 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/cmd/cells.py, line 40, in main
  2014-04-29 11:52:05.240 16759 TRACE nova manager=CONF.cells.manager)
  2014-04-29 11:52:05.240 16759 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 257, in create
  2014-04-29 11:52:05.240 16759 TRACE nova db_allowed=db_allowed)
  2014-04-29 11:52:05.240 16759 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 139, in __init__
  2014-04-29 11:52:05.240 16759 TRACE nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
  2014-04-29 11:52:05.240 16759 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/cells/manager.py, line 87, in __init__
  2014-04-29 11:52:05.240 16759 TRACE nova self.state_manager = 
cell_state_manager()
  2014-04-29 11:52:05.240 16759 TRACE nova TypeError: __init__() takes exactly 
3 arguments (1 given)

  
  I have had a dig into the code and it appears that CellsManager creates an 
instance of CellStateManager with no arguments. CellStateManager __new__ runs 
and creates an instance of CellStateManagerFile which runs __new__ and __init__ 
with cell_state_cls and cells_config_path set. At this point __new__ returns 
CellStateManagerFile and the new instance's __init__() method is invoked 
(CellStateManagerFile.__init__) with the original arguments (there weren't any) 
which then results in the stack trace.

  It seems reasonable for CellStateManagerFile to derive the
  cells_config_path info for itself so I've patched it locally with

  === modified file 'state.py'
  --- state.py  2014-04-30 15:10:16 +
  +++ state.py  2014-04-30 15:10:26 +
  @@ -155,7 +155,7 @@
   config_path = CONF.find_file(cells_config)
   if not config_path:
   raise 
cfg.ConfigFilesNotFoundError(config_files=[cells_config])
  -return CellStateManagerFile(cell_state_cls, config_path)
  +return CellStateManagerFile(cell_state_cls)
   
   return CellStateManagerDB(cell_state_cls)
   
  @@ -450,7 +450,9 @@
   
   
   class CellStateManagerFile(CellStateManager):
  -def __init__(self, cell_state_cls, cells_config_path):
  +def __init__(self, cell_state_cls=None):
  +cells_config = CONF.cells.cells_config
  +cells_config_path = CONF.find_file(cells_config)
   self.cells_config_path = cells_config_path
   super(CellStateManagerFile, self).__init__(cell_state_cls)
   

  
  Ubuntu: 14.04
  nova-cells: 1:2014.1-0ubuntu1

  nova.conf:

  [DEFAULT]
  dhcpbridge_flagfile=/etc/nova/nova.conf
  dhcpbridge=/usr/bin/nova-dhcpbridge
  logdir=/var/log/nova
  state_path=/var/lib/nova
  lock_path=/var/lock/nova
  force_dhcp_release=True
  iscsi_helper=tgtadm
  libvirt_use_virtio_for_bridges=True
  connection_type=libvirt
  root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
  verbose=True
  ec2_private_dns_show_ip=True
  api_paste_config=/etc/nova/api-paste.ini
  volumes_path=/var/lib/nova/volumes
  enabled_apis=ec2,osapi_compute,metadata
  auth_strategy=keystone
  compute_driver=libvirt.LibvirtDriver
  quota_driver=nova.quota.NoopQuotaDriver

  
  [cells]
  enable=True
  name=cell
  cell_type=compute
  cells_config=/etc/nova/cells.json

  
  cells.json: 
  {
      "parent": {
          "name": "parent",
          "api_url": "http://api.example.com:8774",
          "transport_url": "rabbit://rabbit.example.com",
          "weight_offset": 0.0,
          "weight_scale": 1.0,
          "is_parent": true
      }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1314677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1398830] Re: [OSSA 2015-003] Glance image leak when in saving state (CVE-2014-9623)

2015-03-12 Thread Alan Pevec
** Changed in: glance/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1398830

Title:
  [OSSA 2015-003] Glance image leak when in saving state (CVE-2014-9623)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Released
Status in Glance juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Tushar Patil reported that
  https://bugs.launchpad.net/glance/+bug/1383973 can be leveraged to
  conduct a denial of service attack on the Glance backend store.

  The image in saving state is not taken into account by global quota
  enforcement.

  Attached is a script to reproduce the behavior:

  
  Steps to reproduce (tested on file backend store)

    1.  Check how many images are present in the directory that the Filesystem
        backend store writes the image data to (filesystem_store_datadir parameter).
    2.  Run the program for 1 hour.
    3.  Count the images again (as in step 1); the count should be the same as
        recorded in step 1.

  We ran this program for 1 hour in our environment.
  Before running the program, count of images in the file store 
(/opt/stack/data/glance/images) was 6.

  After running the program for 1 hr,

*   Total count of images in the folder /opt/stack/data/glance/images = 806 
(it should have been 6)
*   Total count of images created = 1014
*   Total count of images deleted in saving state = 800
*   Total count of images deleted = 1014


  Considering the bug is already public, the fix should be proposed directly
  on gerrit; this new report will let us work on the impact statement
  and coordinate the security work in parallel with the public fix being
  merged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1398830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1013417] Re: Cinderclient Doesn't Return A Useful Error When Trying To Create A Volume Larger Than The Quota Allocation

2015-03-12 Thread Alan Pevec
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1013417

Title:
  Cinderclient Doesn't Return A Useful Error When Trying To Create A
  Volume Larger Than The Quota Allocation

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in Python client library for Cinder:
  Fix Committed

Bug description:
  Actually, it is nearly useless. It just returns an exception that it
  casts from an HTTP 500.

  My quota limit is 1000GB, here I try to make a volume that is 2000GB

  g = cinderclient(request).volumes.create(size, display_name=name,
  display_description=description)

  cinderclient connection created using token
  e3fbb3c2d94949b0975db11de85bebc5 and url
  http://10.145.1.51:8776/v1/9da18fcaedf74eb7b1cf73b67b5b870c

  REQ: curl -i
  http://10.145.1.51:8776/v1/9da18fcaedf74eb7b1cf73b67b5b870c/volumes -X
  POST -H "X-Auth-Project-Id: 9da18fcaedf74eb7b1cf73b67b5b870c" -H
  "User-Agent: python-novaclient" -H "Content-Type: application/json" -H
  "Accept: application/json" -H "X-Auth-Token:
  e3fbb3c2d94949b0975db11de85bebc5"

  REQ BODY: {"volume": {"snapshot_id": null, "display_name": "My Vol",
  "volume_type": null, "display_description": "", "size": 2000}}

  RESP: {'date': 'Thu, 14 Jun 2012 22:14:02 GMT', 'status': '500',
  'content-length': '128', 'content-type': 'application/json;
  charset=UTF-8', 'x-compute-request-id': 'req-316c81e2-3407-4df0-8b0e-
  190bf63f549b'} {"computeFault": {"message": "The server has either
  erred or is incapable of performing the requested operation.", "code":
  500}}

  *** ClientException: The server has either erred or is incapable of
  performing the requested operation. (HTTP 500) (Request-ID: req-
  316c81e2-3407-4df0-8b0e-190bf63f549b)

  This is basically useless from an end-user perspective and doesn't
  allow us to tell users of Horizon anything useful about why this
  errored. :( It should probably be a 406, not a 500, and the error
  message should be "Cannot create a volume of 2000GB because your quota
  is currently 1000GB.", or something along those lines...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1013417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313009] Re: Memory reported improperly in admin dashboard

2015-03-12 Thread Alan Pevec
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1313009

Title:
  Memory reported improperly in admin dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  The admin dashboard works with memory totals and usages as integers.
  This means that, for example, if you have a total of 1.95 TB of memory
  in your hypervisors you'll see it reported as 1 TB.
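
  A minimal illustration of the truncation (the numbers are illustrative):

  total_memory_mb = 2044723                       # ~1.95 TB across hypervisors
  as_integer_tb = total_memory_mb // (1024 * 1024)            # -> 1
  as_float_tb = round(total_memory_mb / (1024.0 * 1024), 2)   # -> 1.95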

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1313009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309187] Re: neutron should catch 404 in notify code from nova

2015-03-12 Thread Alan Pevec
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1309187

Title:
  neutron should catch 404 in notify code from nova

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  a-467f-a10c-96dba7772e46', 'name': 'network-vif-unplugged', 'server_uuid': 
u'dddcaacc-87f0-4a36-b4b1-29cb915f75ba'}]
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova Traceback (most recent 
call last):
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/neutron/neutron/notifiers/nova.py, line 188, in send_events
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova batched_events)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/v1_1/contrib/server_external_events.py,
 line 39, in create
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova return_raw=True)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/base.py, line 152, in _create
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova _resp, body = 
self.api.client.post(url, body=body)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/client.py, line 314, in post
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova return 
self._cs_request(url, 'POST', **kwargs)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/client.py, line 288, in _cs_request
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova **kwargs)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/client.py, line 270, in _time_request
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova resp, body = 
self.request(url, method, **kwargs)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova   File 
/opt/stack/python-novaclient/novaclient/client.py, line 264, in request
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova raise 
exceptions.from_response(resp, body, url, method)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova NotFound: No instances 
found for any event (HTTP 404) (Request-ID: 
req-142a25e1-1011-435e-837d-8ed9cc576cbc)
  2014-04-17 12:51:46.689 TRACE neutron.notifiers.nova
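
  A sketch of the kind of handling the title suggests (names follow the
  traceback above; this is illustrative, not the merged patch):

  from novaclient import exceptions as nova_exceptions

  def send_events_safely(nclient, batched_events, log):
      try:
          nclient.server_external_events.create(batched_events)
      except nova_exceptions.NotFound:
          # Nova no longer knows about these instances (e.g. they were
          # just deleted); not fatal for the notifier, so log and move on.
          log.debug('Nova returned NotFound for events: %s', batched_events)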

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1309187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372438] Re: Race condition in l2pop drops tunnels

2015-03-12 Thread Alan Pevec
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372438

Title:
  Race condition in l2pop drops tunnels

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  The issue was originally raised by a Red Hat performance engineer (Joe
  Talerico)  here: https://bugzilla.redhat.com/show_bug.cgi?id=1136969
  (see starting from comment 4).

  Joe created a Fedora instance in his OS cloud based on RHEL7-OSP5
  (Icehouse), where he installed Rally client to run benchmarks against
  that cloud itself. He assigned a floating IP to that instance to be
  able to access API endpoints from inside the Rally machine. Then he
  ran a scenario which basically started up 100+ new instances in
  parallel, tried to access each of them via ssh, and once it succeeded,
  clean up each created instance (with its ports). Once in a while, his
  Rally instance lost connection to outside world. This was because
  VXLAN tunnel to the compute node hosting the Rally machine was dropped
  on networker node where DHCP, L3, Metadata agents were running. Once
  we restarted OVS agent, the tunnel was recreated properly.

  The scenario failed only if L2POP mechanism was enabled.

  I've looked thru the OVS agent logs and found out that the tunnel was
  dropped due to a legitimate fdb entry removal request coming from
  neutron-server side. So the fault is probably on neutron-server side,
  in l2pop mechanism driver.

  I've then looked thru the patches in Juno to see whether there is
  something related to the issue already merged, and found the patch
  that gets rid of _precommit step when cleaning up fdb entries. Once
  we've applied the patch on the neutron-server node, we stopped to
  experience those connectivity failures.

  After discussion with Vivekanandan Narasimhan, we came up with the
  following race condition that may result in tunnels being dropped
  while legitimate resources are still using them:

  (quoting Vivek below)

  '''
  - port1 delete request comes in;
  - port1 delete request acquires the lock
  - port2 create/update request comes in;
  - port2 create/update waits due to unavailability of the lock
  - precommit phase for port1 determines that the port is the last one, so we
  should drop the FLOODING_ENTRY;
  - port1 delete applied to db;
  - port1 transaction releases the lock
  - port2 create/update acquires the lock
  - precommit phase for port2 determines that the port is the first one, so it
  requests FLOODING_ENTRY + MAC-specific flow creation;
  - port2 create/update request applied to db;
  - port2 transaction releases the lock

  Now at this point the postcommit of either of them could happen, because
  these code pieces operate outside the locked zone.

  If it happens this way, the tunnel would be retained:

  - postcommit phase for port1 requests FLOODING_ENTRY deletion due to port1
  deletion
  - postcommit phase for port2 requests FLOODING_ENTRY + MAC-specific flow
  creation;

  If it happens the way below, the tunnel would break:

  - postcommit phase for create port2 requests FLOODING_ENTRY + MAC-specific
  flow creation
  - postcommit phase for delete port1 requests FLOODING_ENTRY deletion
  '''

  We considered the patch to get rid of precommit for backport to
  Icehouse [1] that seems to eliminate the race, but we're concerned
  that we reverted that to previous behaviour in Juno as part of DVR
  work [2], though we haven't done any testing to check whether the
  issue is present in Juno (though brief analysis of the code shows that
  it should fail there too).

  Ideally, the fix for Juno should be easily backportable because the
  issue is currently present in Icehouse, and we would like to have the
  same fix for both branches (Icehouse and Juno) instead of backporting
  patch [1] to Icehouse and implementing another patch for Juno.

  [1]: https://review.openstack.org/#/c/95165/
  [2]: https://review.openstack.org/#/c/102398/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278550] Re: test_provider_token_expiration_validation tests fails on slow system

2015-03-12 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1278550

Title:
  test_provider_token_expiration_validation tests fails on slow system

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released

Bug description:
  
  The 
keystone.tests.test_token_provider.TestTokenProvider.test_provider_token_expiration_validation
 test fails on a system where it takes a long time to run the tests. The 
problem is that the test is expecting the token to be valid, but with the long 
run time and the shorter default token expiration time, the token turns out to 
not be valid by the time the system gets around to running the test.

  The errors are like:

  in output log:

  {{{
  ...
  Traceback (most recent call last):
File keystone/token/provider.py, line 188, in _is_valid_token
  expires_at = token_data['token']['expires']
  KeyError: 'token'
  ...}}}

  Traceback (most recent call last):
File keystone/tests/test_token_provider.py, line 818, in 
test_provider_token_expiration_validation
  self.token_provider_api._is_valid_token(SAMPLE_V2_TOKEN_VALID))
File keystone/token/provider.py, line 202, in _is_valid_token
  raise exception.TokenNotFound(_('Failed to validate token'))
  TokenNotFound: Failed to validate token

  There's probably another issue indicated here since we shouldn't be
  getting a KeyError in the case of an expired token.
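
  One way to make the fixture immune to slow runs (a sketch only; the real
  sample-token structure in the test is richer than this) is to compute the
  expiry relative to the current time instead of using a fixed timestamp:

  import datetime

  def sample_token_valid_for(minutes=60):
      # Expiry is always in the future relative to when the test runs.
      expires = datetime.datetime.utcnow() + datetime.timedelta(minutes=minutes)
      return {'token': {'expires': expires.isoformat()}}

  SAMPLE_V2_TOKEN_VALID = sample_token_valid_for()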

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1278550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315556] Re: Disabling a domain does not disable the projects in that domain

2015-03-12 Thread Alan Pevec
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1315556

Title:
  Disabling a domain does not disable the projects in that domain

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released

Bug description:
  A user from an enabled domain can still get a token scoped to a project
  in a disabled domain.

  Steps to reproduce.

  1. create domains domainA and domainB
  2. create user userA and project projectA in domainA
  3. create user userB and project projectB in domainB
  4. assign userA some role for projectB
  5. disable domainB
  6. authenticate to get a  token for userA scoped to projectB. This should 
fail as projectB's domain (domainB) is disabled.

  Looks like the fix would be to also check the project's domain to make
  sure it is enabled. See

  
https://github.com/openstack/keystone/blob/master/keystone/auth/controllers.py#L112
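
  A sketch of the missing check (the manager and exception names are
  illustrative of keystone's style, not the exact fix):

  class Unauthorized(Exception):
      """Stand-in for keystone.exception.Unauthorized."""

  def assert_project_scope_allowed(assignment_api, project_id):
      project = assignment_api.get_project(project_id)
      if not project.get('enabled', True):
          raise Unauthorized('Project is disabled: %s' % project_id)
      domain = assignment_api.get_domain(project['domain_id'])
      if not domain.get('enabled', True):
          # The missing piece: a disabled domain must also block scoping
          # to its projects.
          raise Unauthorized('Domain is disabled: %s' % domain['id'])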

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1315556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383614] Re: not using L3 network prevents a user from launching instances

2015-03-12 Thread Alan Pevec
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1383614

Title:
  not using L3 network prevents a user from launching instances

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  In Icehouse, using a setup where neutron-l3-agent is not used at all,
  starting new instances works via the CLI, but not via Horizon. The error
  message is:

  2014-09-24 05:46:43,533 13795 ERROR horizon.tables.base Error while checking 
action permissions.
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1136, 
in _filter_action
  return action._allowed(request, datum) and row_matched
File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 
132, in _allowed
  self.allowed(request, datum))
File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/volumes/volumes/tables.py,
 line 75, in allowed
  usages = quotas.tenant_quota_usages(request)
File /usr/lib/python2.7/site-packages/horizon/utils/memoized.py, line 90, 
in wrapped
  value = cache[key] = func(*args, **kwargs)
File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
 line 203, in tenant_quota_usages
  floating_ips = network.tenant_floating_ip_list(request)
File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/network.py,
 line 50, in tenant_floating_ip_list
  return NetworkClient(request).floating_ips.list()
File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/neutron.py,
 line 332, in list
  fips = self.client.list_floatingips(tenant_id=tenant_id, **search_opts)
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
111, in with_params
  ret = self.function(instance, *args, **kwargs)
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
437, in list_floatingips
  **_params)
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
1250, in list
  for r in self._pagination(collection, path, **params):
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
1263, in _pagination
  res = self.get(path, params=params)
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
1236, in get
  headers=headers, params=params)
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
1221, in retry_request
  headers=headers, params=params)
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
1164, in do_request
  self._handle_fault_response(status_code, replybody)
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
1134, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
91, in exception_handler_v20
  message=message)
  NeutronClientException: 404 Not Found

  The resource could not be found.

  
  How to reproduce:
  Edit neutron.conf and comment out the service plugins:
  #service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

  restart neutron services.

  This is fixed in Juno, and the issue exists in Icehouse only

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1383614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1345947] Re: DHCPNAK after neutron-dhcp-agent restart

2015-03-12 Thread Alan Pevec
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1345947

Title:
  DHCPNAK after neutron-dhcp-agent restart

Status in Grenade - OpenStack upgrade testing:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed

Bug description:
  After rolling out a configuration change, we restarted the neutron-dhcp-agent
  service, and then the dnsmasq logs started flooding with "DHCPNAK ... lease
  not found".
  dnsmasq replies DHCPNAK to all DHCPREQUEST renews from all VMs, even though
  the MAC and IP pairs exist in the host files.
  The log flooding increases as more and more VMs start renewing and keep
  retrying until their IPs expire, at which point they send DHCPDISCOVER and
  reinitialize the IP.
  The log flooding gradually disappears as the VMs' IPs expire and they send
  DHCPDISCOVER, to which dnsmasq responds DHCPOFFER properly.

  Analysis:  
  I noticed that the --leasefile-ro option is used in the dnsmasq command when
  it is started by the neutron dhcp-agent. According to the dnsmasq manual,
  this option should be used together with --dhcp-script to customize the
  lease database. However, the --dhcp-script option was removed when fixing
  bug 1202392.
  Because of this, dnsmasq does not save lease information in persistent
  storage, and when it is restarted, the lease information is lost.

  Solution:
  Simply replacing --leasefile-ro with --dhcp-leasefile=<path to dhcp runtime
  files>/lease would solve the problem. (patch attached)

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1345947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393925] Re: Race condition adding a security group rule when another is in-progress

2015-03-12 Thread Alan Pevec
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393925

Title:
  Race condition adding a security group rule when another is in-
  progress

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  I've come across a race condition where I sometimes see a security
  group rule is never added to iptables, if the OVS agent is in the
  middle of applying another security group rule when the RPC arrives.

  Here's an example scenario:

  nova boot --flavor 1 --image $nova_image  dev_server1
  sleep 4
  neutron security-group-rule-create --direction ingress --protocol tcp 
--port_range_min  --port_range_max  default
  neutron security-group-rule-create --direction ingress --protocol tcp 
--port_range_min 1112 --port_range_max 1112 default

  Wait for VM to complete booting, then check iptables:

  $ sudo iptables-save | grep 111
  -A neutron-openvswi-i741ff910-1 -p tcp -m tcp --dport  -j RETURN

  The second rule is missing, and will only get added if you either add
  another rule, or restart the agent.

  My config is just devstack, running with the latest openstack bits as
  of today.  OVS agent w/vxlan and DVR enabled, nothing fancy.

  I've been able to track this down to the following code (i'll attach
  the complete log as a file due to line wraps):

  OVS agent receives RPC to setup port
  Port info is gathered for devices and filters for security groups are 
created
  Iptables apply is called
  New security group rule is added, triggering RPC message
  RPC received, and agent seems to add device to list that needs refresh

  Security group rule updated on remote: 
[u'5f0f5036-d14c-4b57-a855-ed39deaea256'] security_groups_rule_updated
  Security group rule updated 
[u'5f0f5036-d14c-4b57-a855-ed39deaea256']
  Adding [u'741ff910-12ba-4c1e-9dc9-38f7cbde0dc4'] devices to the 
list of devices for which firewall needs to be refreshed _security_group_updated

  Iptables apply is finished

  rpc_loop() in OVS agent does not notice there is more work to do on
  next loop, so rule never gets added

  At this point I'm thinking it could be that self.devices_to_refilter
  is modified in both _security_group_updated() and setup_port_filters()
  without any lock/semaphore, but the log doesn't explicitly implicate it
  (perhaps we trust the timestamps too much?).
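
  If that hypothesis holds, a sketch of serializing access to the shared set
  could look like this (the attribute name mirrors the agent's, the locking
  itself is illustrative):

  from eventlet import semaphore

  class RefilterTracker(object):
      def __init__(self):
          self.devices_to_refilter = set()
          self._lock = semaphore.Semaphore()

      def add(self, devices):              # called from the RPC handler
          with self._lock:
              self.devices_to_refilter |= set(devices)

      def pop_all(self):                   # called from rpc_loop()
          with self._lock:
              devices = self.devices_to_refilter
              self.devices_to_refilter = set()
              return devices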

  I will continue to investigate, but if someone has an aha! moment
  after reading this far please add a note.

  A colleague here has also been able to duplicate this on his own
  devstack install, so it wasn't my fat-fingering that caused it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378215] Re: If db deadlock occurs for some reason while deleting an image, no one can delete the image any more

2015-03-12 Thread Alan Pevec
** Changed in: glance/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378215

Title:
  If db deadlock occurs for some reason while deleting an image, no one
  can delete the image any more

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Released
Status in Glance juno series:
  Fix Released

Bug description:
  The Glance API returns a 500 Internal Server Error if a db deadlock occurs in
  glance-registry for some reason while deleting an image.
  The image 'status' is set to 'deleted' and 'deleted' is set to False. As
  'deleted' is still False, the image is visible in the image list but it can
  not be deleted any more.

  If you try to delete this image it will raise a 404 (Not Found) error
  for the V1 API and a 500 (HTTPInternalServerError) for the V2 API.

  Note:
  To reproduce this issue I've explicitly raised the db_exception.DBDeadlock
  exception from the _image_child_entry_delete_all method in
  glance/db/sqlalchemy/api.py.

  glance-api.log
  --
  2014-10-06 00:53:10.037 6827 INFO glance.registry.client.v1.client 
[2b47d213-6f80-410f-9766-dc80607f0224 7e7c3a413f184dbcb9a65404dbfcc0f0 
309c5ff4082c423
  1bcc17d8c55c83997 - - -] Registry client request DELETE 
/images/f9f8a40d-530b-498c-9fbc-86f29da555f4 raised ServerError
  2014-10-06 00:53:10.045 6827 INFO glance.wsgi.server 
[2b47d213-6f80-410f-9766-dc80607f0224 7e7c3a413f184dbcb9a65404dbfcc0f0 
309c5ff4082c4231bcc17d8c55c83997 - - -] Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py, line 433, 
in handle_one_response
  result = self.application(self.environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File /opt/stack/glance/glance/common/wsgi.py, line 394, in __call__
  response = req.get_response(self.application)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1320, 
in send
  application, catch_exc_info=False)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File /usr/local/lib/python2.7/dist-packages/osprofiler/web.py, line 106, 
in __call__
  return request.get_response(self.application)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1320, 
in send
  application, catch_exc_info=False)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py, line 
748, in __call__
  return self._call_app(env, start_response)
File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py, line 
684, in _call_app
  return self._app(env, _fake_start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File /opt/stack/glance/glance/common/wsgi.py, line 394, in __call__
  response = req.get_response(self.application)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1320, 
in send
  application, catch_exc_info=False)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File /opt/stack/glance/glance/common/wsgi.py, line 394, in __call__
  response = req.get_response(self.application)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1320, 
in send
  application, catch_exc_info=False)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
__call__
  resp = self.call_func(req, 

[Yahoo-eng-team] [Bug 1309208] Re: NSX: sync thread catches wrong exceptions on not found in db elements

2015-03-12 Thread Alan Pevec
** Changed in: neutron/icehouse
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1309208

Title:
  NSX: sync thread catches wrong exceptions on not found in db elements

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  2014-04-17 14:13:07.361 ERROR neutron.openstack.common.loopingcall 
[req-47b42334-8484-48c9-9f3e-e6ced3b9e3ec None None] in dynamic looping call
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall Traceback 
(most recent call last):
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/openstack/common/loopingcall.py, line 123, in 
_inner
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall idle = 
self.f(*self.args, **self.kw)
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/plugins/vmware/common/sync.py, line 649, in 
_synchronize_state
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall 
scan_missing=scan_missing)
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/plugins/vmware/common/sync.py, line 508, in 
_synchronize_lswitchports
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall 
ext_networks=ext_nets)
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/plugins/vmware/common/sync.py, line 464, in 
synchronize_port
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall 
neutron_port_data['id'])
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 271, in _get_port
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall raise 
n_exc.PortNotFound(port_id=id)
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall 
PortNotFound: Port 0b9ce706-1197-460d-986e-98870bc81ce7 could not be found
  2014-04-17 14:13:07.361 TRACE neutron.openstack.common.loopingcall
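
  A sketch of the narrower handling the title implies (the structure is
  illustrative; the exception path follows the traceback above):

  from neutron.common import exceptions as n_exc

  def synchronize_port_safely(plugin, context, neutron_port_data, log):
      try:
          return plugin._get_port(context, neutron_port_data['id'])
      except n_exc.PortNotFound:
          # The port was deleted while the sync thread was running; skip
          # it instead of letting the exception kill the looping call.
          log.debug('Port %s disappeared during sync',
                    neutron_port_data['id'])
          return None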

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1309208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

