[Yahoo-eng-team] [Bug 1460350] [NEW] Cells: Race deleting instance can lead to instances undeleted at the top

2015-05-30 Thread melanie witt
Public bug reported:

Seen in check-tempest-dsvm-cells job failure, example trace from [1]:

Traceback (most recent call last):
  File "tempest/api/compute/servers/test_list_servers_negative.py", line 153, in test_list_servers_detail_server_is_deleted
    self.assertEqual([], actual)
  File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/testtools/testcase.py", line 350, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = []
actual    = [{u'OS-DCF:diskConfig': u'MANUAL',
  u'OS-EXT-AZ:availability_zone': u'nova',
  u'OS-EXT-STS:power_state': 0,
  u'OS-EXT-STS:task_state': None,
  u'OS-EXT-STS:vm_state': u'deleted',
  u'OS-SRV-USG:launched_at': None,
  u'OS-SRV-USG:terminated_at': u'2015-05-17T15:46:15.00',
  u'accessIPv4': u'',
  u'accessIPv6': u'',
  u'addresses': {},
  u'config_drive': u'',
  u'created': u'2015-05-17T15:46:15Z',
  u'flavor': {u'id': u'42',
  u'links': [{u'href': 
u'http://127.0.0.1:8774/82eeb74985844a9daa71b162f663e981/flavors/42',
  u'rel': u'bookmark'}]},
  u'hostId': u'',
  u'id': u'45b1decf-8f52-4075-8869-acb9f48de159',
  u'image': {u'id': u'990c6a37-da73-4a74-be3d-eff98dcf7727',
 u'links': [{u'href': 
u'http://127.0.0.1:8774/82eeb74985844a9daa71b162f663e981/images/990c6a37-da73-4a74-be3d-eff98dcf7727',
 u'rel': u'bookmark'}]},
  u'key_name': None,
  u'links': [{u'href': 
u'http://127.0.0.1:8774/v2/82eeb74985844a9daa71b162f663e981/servers/45b1decf-8f52-4075-8869-acb9f48de159',
  u'rel': u'self'},
 {u'href': 
u'http://127.0.0.1:8774/82eeb74985844a9daa71b162f663e981/servers/45b1decf-8f52-4075-8869-acb9f48de159',
  u'rel': u'bookmark'}],
  u'metadata': {},
  u'name': u'ListServersNegativeTestJSON-instance-1205034409',
  u'os-extended-volumes:volumes_attached': [],
  u'status': u'DELETED',
  u'tenant_id': u'82eeb74985844a9daa71b162f663e981',
  u'updated': u'2015-05-17T15:46:15Z',
  u'user_id': u'aac5bd38fc264166b9b365426557b4d2'}]

The test creates an instance and immediately deletes it before it's
scheduled. After the delete has happened at the top via local delete,
updates from the child cells can arrive and undelete the instance.
This is possible because the code in nova/cells/messaging.py uses
read_deleted='yes' and db.instance_update() updates all fields
provided (unlike objects). I also tried removing the local delete logic
in nova/compute/cells_api.py and it didn't help -- the destroy in the
child will trigger an instance_destroy_at_top(), but it's still possible
for the instance.save() update to occur after the destroy, resulting
again in an undeleted instance.

This issue should go away when instance_update_at_top() is converted to
use objects, as the 'deleted' etc. fields won't ever be in what_changed().
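
For illustration, a minimal standalone Python sketch of the race
(hypothetical helper names, not the actual nova code): a blind full-row
update applied after a destroy resurrects the row, while a changes-only
object save leaves the deleted flag alone.

    def blind_update(row, payload):
        """Mimics db.instance_update(): writes every field in the payload."""
        row.update(payload)

    def changes_only_save(row, payload, changed_fields):
        """Mimics Instance.save(): writes only fields in what_changed()."""
        row.update({k: payload[k] for k in changed_fields})

    row = {'uuid': 'abc', 'deleted': False, 'vm_state': 'building'}
    stale_payload = dict(row)       # child cell snapshot taken before delete

    # The top cell deletes the instance (local delete / destroy_at_top).
    row['deleted'] = True
    row['vm_state'] = 'deleted'

    # A late instance_update_at_top arrives carrying the stale snapshot;
    # the blind update re-applies deleted=False -- the bug.
    blind_update(row, stale_payload)
    assert row['deleted'] is False

    # With objects, 'deleted' was never changed in the child, so it is not
    # in what_changed() and the delete sticks.
    row = {'uuid': 'abc', 'deleted': True, 'vm_state': 'deleted'}
    changes_only_save(row, stale_payload, changed_fields=['vm_state'])
    assert row['deleted'] is True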

[1] http://logs.openstack.org/09/183909/2/check/check-tempest-dsvm-
cells/9799bb0

** Affects: nova
 Importance: Low
 Assignee: melanie witt (melwitt)
 Status: Triaged


** Tags: cells

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460350

Title:
  Cells: Race deleting instance can lead to instances undeleted at the
  top

Status in OpenStack Compute (Nova):
  Triaged

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460350/+subscriptions

[Yahoo-eng-team] [Bug 1449850] Re: Join multiple criteria together

2015-05-30 Thread Dave Chen
As the comments on this patch
(https://review.openstack.org/#/c/133135/) indicate, we won't fix this for now.

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1449850

Title:
  Join multiple criteria together

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  SQLAlchemy supports to join multiple criteria together, this is
  provided to build the query statements when there is multiple
  filtering criterion instead of constructing query statement one by
  one,  just *assume* SQLAlchemy prefer to use it in this way, and the
  code looks more clean after refactoring.
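
  As a hedged illustration (generic SQLAlchemy 1.4+ against a throwaway
  model, not the keystone code under review), chaining .filter() one
  criterion at a time and joining the criteria with and_() render the same
  AND'd WHERE clause, so the refactoring is purely about readability:

      from sqlalchemy import Column, Integer, String, and_, create_engine
      from sqlalchemy.orm import Session, declarative_base

      Base = declarative_base()

      class User(Base):
          __tablename__ = 'users'
          id = Column(Integer, primary_key=True)
          name = Column(String)
          domain_id = Column(String)

      engine = create_engine('sqlite://')
      Base.metadata.create_all(engine)

      with Session(engine) as session:
          # One criterion at a time:
          q1 = (session.query(User)
                .filter(User.name == 'alice')
                .filter(User.domain_id == 'default'))
          # Multiple criteria joined together in a single filter():
          q2 = session.query(User).filter(and_(User.name == 'alice',
                                               User.domain_id == 'default'))
          # Both print ... WHERE users.name = ... AND users.domain_id = ...
          print(q1)
          print(q2)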

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1449850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1174499] Re: Keystone token hashing is MD5

2015-05-30 Thread Diane Fleming
It's not clear what needs to change in the API docs.

** Changed in: openstack-api-site
   Status: Confirmed => Won't Fix

** Changed in: openstack-api-site
   Status: Won't Fix => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1174499

Title:
  Keystone token hashing is MD5

Status in Django OpenStack Auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack API documentation site:
  Incomplete
Status in Python client library for Keystone:
  Fix Released

Bug description:
  https://github.com/openstack/python-
  keystoneclient/blob/master/keystoneclient/common/cms.py

  def cms_hash_token(token_id):
      """
      return: for ans1_token, returns the hash of the passed in token
      otherwise, returns what it was passed in.
      """
      if token_id is None:
          return None
      if is_ans1_token(token_id):
          hasher = hashlib.md5()
          hasher.update(token_id)
          return hasher.hexdigest()
      else:
          return token_id

  
  MD5 is a deprecated mechanism; it should be replaced with at least SHA1,
  if not SHA256. Keystone should be able to support multiple hash types,
  and the auth_token middleware should query Keystone to find out which
  type is in use.
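
  A hedged sketch of the direction proposed above (a hypothetical helper,
  not the actual keystoneclient API): pick the hash algorithm by name via
  hashlib so SHA256 can replace MD5 without changing callers.

      import hashlib

      def hash_token(token_id, algorithm='sha256'):
          """Hash a token with a configurable algorithm ('md5' kept only
          for backwards compatibility with old deployments)."""
          if token_id is None:
              return None
          hasher = hashlib.new(algorithm)
          hasher.update(token_id.encode('utf-8'))
          return hasher.hexdigest()

      print(hash_token('PKI-token-blob'))           # sha256 by default
      print(hash_token('PKI-token-blob', 'md5'))    # legacy behaviour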

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1174499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435665] Re: logged in user is able to perform operation, even after it has been disabled

2015-05-30 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435665

Title:
  logged in user is able to perform operation, even after it has been
  disabled

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  A user is able to perform some actions even after that user has been
  disabled in keystone. Steps to reproduce:

  1) Log into the dashboard
  2) Open the Instances tab
  3) Click on Launch Instance
  4) From keystone, disable the user
  5) The user can still launch an instance

  I cannot see any tabs other than Instances, but I can most probably
  perform any operation available in the Instances tab. In the same way, I
  created a volume even after the user was disabled, but in that case the
  Volumes tab had been opened before disabling the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372985] Re: Create Network dialog box checkboxes mean different things and could provide additional guidance.

2015-05-30 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1372985

Title:
  Create Network dialog box checkboxes mean different things and could
  provide additional guidance.

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  The checkboxes in the Create Network dialog under the Admin tab are
  rendered differently in the Networks table: Shared is Yes/No, while
  Admin State is UP/DOWN. I'm not sure what reflects whether External
  Network is checked or not, and I'm not sure what values Status can take
  other than ACTIVE, or how one might change the Status. Better guidance
  in the dialog would be helpful; otherwise, your first indication of what
  you have chosen doesn't come until after you commit and look at the
  resulting table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1372985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421132] Re: test_rebuild_availability_range is failing from time to time

2015-05-30 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421132

Title:
  test_rebuild_availability_range is failing from time to time

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Functional test test_rebuild_availability_range is failing quite often
  with the following stacktrace:

  Traceback (most recent call last):
    File "neutron/tests/functional/db/test_ipam.py", line 198, in test_rebuild_availability_range
      self._create_port(self.port_id)
    File "neutron/tests/functional/db/test_ipam.py", line 128, in _create_port
      self.plugin.create_port(self.cxt, {'port': port})
    File "neutron/db/db_base_plugin_v2.py", line 1356, in create_port
      context, ip_address, network_id, subnet_id, port_id)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 470, in __exit__
      self.rollback()
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
      compat.reraise(exc_type, exc_value, exc_tb)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 467, in __exit__
      self.commit()
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 377, in commit
      self._prepare_impl()
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 357, in _prepare_impl
      self.session.flush()
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1919, in flush
      self._flush(objects)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2037, in _flush
      transaction.rollback(_capture_exception=True)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
      compat.reraise(exc_type, exc_value, exc_tb)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2001, in _flush
      flush_context.execute()
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute
      rec.execute(self)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 526, in execute
      uow
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 46, in save_obj
      uowtransaction)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 171, in _organize_states_for_save
      state_str(existing)))
  FlushError: New instance <IPAllocation at 0x7fdadf237d10> with identity key (<class 'neutron.db.models_v2.IPAllocation'>, (u'10.10.10.2', u'test_sub_id', 'test_net_id')) conflicts with persistent instance <IPAllocation at 0x7fdad2488cd0>

  
  See for example 
http://logs.openstack.org/35/149735/4/gate/gate-neutron-dsvm-functional/fc960fe/console.html
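
  For reference, a minimal standalone reproduction of this class of
  FlushError (generic SQLAlchemy 1.4+ with a simplified two-column model,
  not the neutron code): flushing a new object whose identity key matches
  one that is already persistent in the same session.

      from sqlalchemy import Column, String, create_engine
      from sqlalchemy.orm import Session, declarative_base
      from sqlalchemy.orm.exc import FlushError

      Base = declarative_base()

      class IPAllocation(Base):
          __tablename__ = 'ipallocations'
          ip_address = Column(String, primary_key=True)
          subnet_id = Column(String, primary_key=True)

      engine = create_engine('sqlite://')
      Base.metadata.create_all(engine)

      session = Session(engine)
      session.add(IPAllocation(ip_address='10.10.10.2',
                               subnet_id='test_sub_id'))
      session.flush()          # the first allocation is now persistent

      # A racing code path allocates the same address in the same session:
      session.add(IPAllocation(ip_address='10.10.10.2',
                               subnet_id='test_sub_id'))
      try:
          session.flush()      # raises: New instance <IPAllocation ...>
      except FlushError as e:  # conflicts with persistent instance ...
          print(e)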

  Logstash query:

[Yahoo-eng-team] [Bug 1403291] Re: VM's fail to receive DHCPOFFER messages

2015-05-30 Thread Kevin Benton
The problem disappeared before it could be narrowed down.

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403291

Title:
  VM's fail to receive DHCPOFFER messages

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  http://logs.openstack.org/46/142246/1/check//check-tempest-dsvm-
  neutron-full/ff04c3e/console.html#_2014-12-16_23_45_56_966

  message:"check_public_network_connectivity" AND
  message:"AssertionError: False is not true : Timed out waiting for"
  AND message:"to become reachable" AND tags:"tempest.txt"

  420 hits in 7 days, check and gate, all failures.  This is probably a
  known issue already, so it could be a duplicate of another bug, but
  given that elastic-recheck didn't comment on my patch when this failed,
  I'm reporting a new bug and a new e-r query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiY2hlY2tfcHVibGljX25ldHdvcmtfY29ubmVjdGl2aXR5XCIgQU5EIG1lc3NhZ2U6XCJBc3NlcnRpb25FcnJvcjogRmFsc2UgaXMgbm90IHRydWUgOiBUaW1lZCBvdXQgd2FpdGluZyBmb3JcIiBBTkQgbWVzc2FnZTpcInRvIGJlY29tZSByZWFjaGFibGVcIiBBTkQgdGFnczpcInRlbXBlc3QudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTg3ODcwOTM1OTIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459726] Re: api servers hang with 100% CPU if syslog restarted

2015-05-30 Thread Doug Hellmann
** Changed in: oslo.log
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459726

Title:
  api servers hang with 100% CPU if syslog restarted

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Logging configuration library for OpenStack:
  Invalid
Status in python-eventlet package in Ubuntu:
  Confirmed

Bug description:
  Affected:

  glance-api
  glance-registry
  neutron-server
  nova-api

  If a service is configured to use rsyslog and rsyslog is restarted after
  the API server has started, the server hangs on the next log line with
  100% CPU. If the server has several workers, each worker eats its own
  100% CPU share.

  Steps to reproduce:
  1. Configure syslog:
  use_syslog=true
  syslog_log_facility=LOG_LOCAL4
  2. restart api service
  3. restart rsyslog

  Execute some command to force logging, e.g. neutron net-create foo,
  nova boot, etc.

  Expected result: normal operation

  Actual result:
  with some chance (about 30-50%), the API server hangs with 100% CPU
  usage and stops replying to requests.

  Strace on hung service:

  gettimeofday({1432827199, 745141}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, revents=POLLOUT}])
  sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating user token __call__ /usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
  gettimeofday({1432827199, 745226}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, revents=POLLOUT}])
  sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating user token __call__ /usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
  gettimeofday({1432827199, 745325}, NULL) = 0
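
  For illustration, a bounded standalone sketch of the failure mode
  (hypothetical code, not the eventlet/oslo.log internals; the socket type
  and errno handling are simplified): once rsyslog restarts and recreates
  /dev/log, the old connected socket fails on every send, and a writer
  that retries without reconnecting spins exactly like the strace above.

      import errno
      import socket

      sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
      sock.connect('/dev/log')   # connect to the running syslog daemon

      # ... rsyslog restarts here; /dev/log is recreated, the fd goes stale ...

      msg = b'<151>demo[1]: DEBUG hello\x00'
      for _ in range(5):         # bounded here; the buggy loop never stopped
          try:
              sock.send(msg)
              break
          except socket.error as e:
              if e.errno not in (errno.ENOTCONN, errno.ECONNREFUSED):
                  raise
              # A correct handler would reconnect to /dev/log here; retrying
              # blindly on the stale fd never succeeds and burns 100% CPU.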

  Tested on:
  nova, glance, neutron:  1:2014.2.3, Ubuntu version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1459726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp