[Yahoo-eng-team] [Bug 1784155] Re: nova_placement service start not coordinated with api db sync on multiple controllers

2018-07-29 Thread Mike Bayer
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Package changed: nova (Ubuntu) => ubuntu

** Package changed: ubuntu => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784155

Title:
  nova_placement service start not coordinated with api db sync on
  multiple controllers

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  New

Bug description:
  On a loaded HA / Galera environment using VMs I can fairly
  consistently reproduce a race condition where the nova_placement
  service is started on controllers before the database is available.
  The nova_placement service does not seem to tolerate this condition
  at startup and then fails to recover.  Mitigation can either involve
  synchronizing these steps or making nova-placement more resilient.

  The symptoms of overcloud deploy failure look like two out of three
  controllers having the nova_placement container in an unhealthy state:

  TASK [Debug output for task which failed: Check for unhealthy containers 
after step 3] ***
  Saturday 28 July 2018  10:19:29 + (0:00:00.663)   0:30:26.152 
* 
  fatal: [stack2-overcloud-controller-2]: FAILED! => {
  "failed_when_result": true, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [
  "3597b92e9714
192.168.25.1:8787/tripleomaster/centos-binary-nova-placement-api:959e1d7f755ee681b6f23b498d262a9e4dd6326f_4cbb1814
   \"kolla_start\"   2 minutes ago   Up 2 minutes (unhealthy)   
nova_placement"
  ]
  }
  fatal: [stack2-overcloud-controller-1]: FAILED! => {
  "failed_when_result": true, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [
  "322c5ea53895
192.168.25.1:8787/tripleomaster/centos-binary-nova-placement-api:959e1d7f755ee681b6f23b498d262a9e4dd6326f_4cbb1814
   \"kolla_start\"   2 minutes ago   Up 2 minutes (unhealthy)   
nova_placement"
  ]
  }
  ok: [stack2-overcloud-controller-0] => {
  "failed_when_result": false, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": []
  }
  ok: [stack2-overcloud-compute-0] => {
  "failed_when_result": false, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": []
  }

  NO MORE HOSTS LEFT
  *

  
  inspecting placement_wsgi_error.log shows the first stack trace that the 
nova_placement database is missing the "traits" table:

  [Sat Jul 28 10:17:06.525018 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
mod_wsgi (pid=14): Target WSGI script 
'/var/www/cgi-bin/nova/nova-placement-api' cannot be loaded as Python module.
  [Sat Jul 28 10:17:06.525067 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
mod_wsgi (pid=14): Exception occurred processing WSGI script 
'/var/www/cgi-bin/nova/nova-placement-api'.
  [Sat Jul 28 10:17:06.525101 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
Traceback (most recent call last):
  [Sat Jul 28 10:17:06.525124 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/var/www/cgi-bin/nova/nova-placement-api", line 54, in 
  [Sat Jul 28 10:17:06.525165 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
application = init_application()
  [Sat Jul 28 10:17:06.525174 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/wsgi.py", 
line 88, in init_application
  [Sat Jul 28 10:17:06.525198 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
return deploy.loadapp(conf.CONF)
  [Sat Jul 28 10:17:06.525205 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/deploy.py", 
line 111, in loadapp
  [Sat Jul 28 10:17:06.525300 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
update_database()
  [Sat Jul 28 10:17:06.525310 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/deploy.py", 
line 92, in update_database
  [Sat Jul 28 10:17:06.525329 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
resource_provider.ensure_trait_sync(ctx)
  [Sat Jul 28 10:17:06.525337 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/placement/objects/resource_provider.py",
 line 146, in ensure_trait_sync
  [Sat Jul 28 10:17:06.526277 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
_trait_sync(ctx)

  ...

  [Sat Jul 28 10:17:06.531950 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
raise errorclass(errno, errval)
  [Sat Jul 28 10:17:06.532049 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 
'nova_placement.traits' doesn't exist") 
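
  A minimal sketch of the "more resilient" option, following the startup path
  shown in the traceback above (deploy.update_database() calling
  ensure_trait_sync()): retry the trait sync until the bootstrap controller has
  finished the api db sync, instead of failing the WSGI application load
  outright.  The wrapper, the retry counts and the exception type caught are
  illustrative assumptions, not nova's actual code:

      import time

      from oslo_db import exception as db_exc

      from nova.api.openstack.placement.objects import resource_provider

      def update_database_with_retry(ctx, attempts=30, interval=2):
          # hypothetical wrapper: poll until "nova-manage api_db sync" has
          # created the placement schema (e.g. the traits table)
          for _ in range(attempts):
              try:
                  resource_provider.ensure_trait_sync(ctx)
                  return
              except db_exc.DBError:
                  # schema not present yet on this controller; wait and retry
                  time.sleep(interval)
          raise RuntimeError("placement database schema never became available")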

[Yahoo-eng-team] [Bug 1644263] [NEW] passlib 1.7.0 deprecates sha512_crypt.encrypt()

2016-11-23 Thread Mike Bayer
Public bug reported:

tests are failing due to a new deprecation warning:

Captured traceback:
~~~
Traceback (most recent call last):
  File "keystone/tests/unit/test_backend_sql.py", line 59, in setUp
self.load_fixtures(default_fixtures)
  File "keystone/tests/unit/core.py", line 754, in load_fixtures
user_copy = self.identity_api.create_user(user_copy)
  File "keystone/common/manager.py", line 123, in wrapped
__ret_val = __f(*args, **kwargs)
  File "keystone/identity/core.py", line 410, in wrapper
return f(self, *args, **kwargs)
  File "keystone/identity/core.py", line 420, in wrapper
return f(self, *args, **kwargs)
  File "keystone/identity/core.py", line 925, in create_user
ref = driver.create_user(user['id'], user)
  File "keystone/common/sql/core.py", line 429, in wrapper
return method(*args, **kwargs)
  File "keystone/identity/backends/sql.py", line 121, in create_user
user = utils.hash_user_password(user)
  File "keystone/common/utils.py", line 129, in hash_user_password
return dict(user, password=hash_password(password))
  File "keystone/common/utils.py", line 136, in hash_password
password_utf8, rounds=CONF.crypt_strength)
  File 
"/var/lib/jenkins/workspace/openstack_gerrit/keystone/.tox/sqla_py27/lib/python2.7/site-packages/passlib/utils/decor.py",
 line 190, in wrapper
warn(msg % tmp, DeprecationWarning, stacklevel=2)
DeprecationWarning: the method 
passlib.handlers.sha2_crypt.sha512_crypt.encrypt() is deprecated as of Passlib 
1.7, and will be removed in Passlib 2.0, use .hash() instead.
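
The usual fix is a small compatibility shim.  A minimal sketch, assuming the
hashing happens directly against passlib's sha512_crypt (the helper name is
illustrative, not keystone's actual code): prefer the 1.7 .hash() API when it
exists and fall back to .encrypt() on older passlib releases.

    from passlib.hash import sha512_crypt

    def hash_password_compat(password_utf8, rounds):
        if hasattr(sha512_crypt, 'hash'):
            # passlib >= 1.7: configure rounds via .using(), then .hash()
            return sha512_crypt.using(rounds=rounds).hash(password_utf8)
        # passlib < 1.7: the old, now-deprecated spelling
        return sha512_crypt.encrypt(password_utf8, rounds=rounds)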

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1644263

Title:
  passlib 1.7.0 deprecates sha512_crypt.encrypt()

Status in OpenStack Identity (keystone):
  New

Bug description:
  tests are failing due to a new deprecation warning:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_backend_sql.py", line 59, in setUp
  self.load_fixtures(default_fixtures)
File "keystone/tests/unit/core.py", line 754, in load_fixtures
  user_copy = self.identity_api.create_user(user_copy)
File "keystone/common/manager.py", line 123, in wrapped
  __ret_val = __f(*args, **kwargs)
File "keystone/identity/core.py", line 410, in wrapper
  return f(self, *args, **kwargs)
File "keystone/identity/core.py", line 420, in wrapper
  return f(self, *args, **kwargs)
File "keystone/identity/core.py", line 925, in create_user
  ref = driver.create_user(user['id'], user)
File "keystone/common/sql/core.py", line 429, in wrapper
  return method(*args, **kwargs)
File "keystone/identity/backends/sql.py", line 121, in create_user
  user = utils.hash_user_password(user)
File "keystone/common/utils.py", line 129, in hash_user_password
  return dict(user, password=hash_password(password))
File "keystone/common/utils.py", line 136, in hash_password
  password_utf8, rounds=CONF.crypt_strength)
File 
"/var/lib/jenkins/workspace/openstack_gerrit/keystone/.tox/sqla_py27/lib/python2.7/site-packages/passlib/utils/decor.py",
 line 190, in wrapper
  warn(msg % tmp, DeprecationWarning, stacklevel=2)
  DeprecationWarning: the method 
passlib.handlers.sha2_crypt.sha512_crypt.encrypt() is deprecated as of Passlib 
1.7, and will be removed in Passlib 2.0, use .hash() instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1644263/+subscriptions



[Yahoo-eng-team] [Bug 1594898] [NEW] functional DB tests based on SqlFixture don't actually use non-sqlite DB

2016-06-21 Thread Mike Bayer
(OperationalError) no such function: CURDATE [SQL: u'SELECT CURDATE()']


At the end there, that's a SQLite error.  You're not supposed to get
those in the MySQL test suite :).

The problem is that the SqlFixture calls upon
neutron.db.api.get_engine(), but that engine is in no way associated
with the one oslo.db creates within the MySQLOpportunisticFixture
approach.  As neutron is using enginefacade now, we need to swap in the
facade that's specific to oslo_db.sqlalchemy.test_base.DbFixture and
make sure everything is linked up, as sketched below.
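
As a very rough sketch of that direction (the fixture name and mechanics here
are assumptions for illustration, not the eventual patch), the idea is to hand
neutron.db.api the engine that oslo_db.sqlalchemy.test_base.DbFixture has
already provisioned, rather than letting it lazily build its own sqlite
engine:

    import fixtures
    import mock

    class ProvisionedEngineFixture(fixtures.Fixture):
        # hypothetical fixture: make neutron.db.api.get_engine() return the
        # MySQL/PostgreSQL engine created by the oslo.db opportunistic fixture

        def __init__(self, engine):
            super(ProvisionedEngineFixture, self).__init__()
            self.engine = engine

        def _setUp(self):
            patcher = mock.patch('neutron.db.api.get_engine',
                                 return_value=self.engine)
            patcher.start()
            self.addCleanup(patcher.stop)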

Note that this problem does not impact the alembic migration tests, as
that test suite does its own set up of alembic fixtures.

I'm working on a reorg of the test fixtures here so this can work, as we
will need these fixtures to be effective for the upcoming CIDR stored
functions / triggers to be tested.

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Mike Bayer (zzzeek)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594898

Title:
  functional DB tests based on SqlFixture don't actually use non-sqlite
  DB

Status in neutron:
  New

Bug description:
  Currently only neutron/tests/functional/db/test_ipam.py seems to use
  this fixture, however it is not interacting correctly with oslo.db
  such that it actually uses the engine set up by oslo.

  Add a test like this:

  diff --git a/neutron/tests/functional/db/test_ipam.py 
b/neutron/tests/functional/db/test_ipam.py
  index 0f28f74..d14bf6e 100644
  --- a/neutron/tests/functional/db/test_ipam.py
  +++ b/neutron/tests/functional/db/test_ipam.py
  @@ -156,8 +156,8 @@ class IpamTestCase(base.BaseTestCase):
   
   
   class TestIpamMySql(common_base.MySQLTestCase, IpamTestCase):
  -    pass
  -
  +    def test_we_are_on_mysql(self):
  +        self.cxt.session.execute("SELECT CURDATE()")
   
   class TestIpamPsql(common_base.PostgreSQLTestCase, IpamTestCase):
   pass

  
  then run:

  [classic@photon2 neutron]$  tox -e functional 
neutron.tests.functional.db.test_ipam
  functional develop-inst-nodeps: /home/classic/dev/redhat/openstack/neutron
  functional installed:  ( ... output skipped ... )
  functional runtests: PYTHONHASHSEED='545881821'
  functional runtests: commands[0] | 
/home/classic/dev/redhat/openstack/neutron/tools/ostestr_compat_shim.sh 
neutron.tests.functional.db.test_ipam

  ( ... output skipped ... )

  {3} neutron.tests.functional.db.test_ipam.IpamTestCase.test_allocate_fixed_ip 
[1.510751s] ... ok
  {1} 
neutron.tests.functional.db.test_ipam.TestIpamMySql.test_allocate_fixed_ip 
[1.822431s] ... ok
  {2} 
neutron.tests.functional.db.test_ipam.IpamTestCase.test_allocate_ip_exausted_pool
 [2.468420s] ... ok
  {1} 
neutron.tests.functional.db.test_ipam.TestIpamPsql.test_allocate_ip_exausted_pool
 ... SKIPPED: backend 'postgresql' unavailable
  {0} 
neutron.tests.functional.db.test_ipam.TestIpamMySql.test_allocate_ip_exausted_pool
 [2.873318s] ... ok
  {2} neutron.tests.functional.db.test_ipam.TestIpamMySql.test_we_are_on_mysql 
[0.993651s] ... FAILED
  {0} neutron.tests.functional.db.test_ipam.TestIpamPsql.test_allocate_fixed_ip 
... SKIPPED: backend 'postgresql' unavailable
  {1} 
neutron.tests.functional.db.test_ipam.TestPluggableIpamMySql.test_allocate_fixed_ip
 [1.133034s] ... ok
  {0} 
neutron.tests.functional.db.test_ipam.TestPluggableIpamPsql.test_allocate_ip_exausted_pool
 ... SKIPPED: backend 'postgresql' unavailable
  {2} 
neutron.tests.functional.db.test_ipam.TestPluggableIpamPsql.test_allocate_fixed_ip
 ... SKIPPED: backend 'postgresql' unavailable
  {3} 
neutron.tests.functional.db.test_ipam.TestPluggableIpamMySql.test_allocate_ip_exausted_pool
 [2.740086s] ... ok

  ==
  Failed 1 tests - output below:
  ==

  neutron.tests.functional.db.test_ipam.TestIpamMySql.test_we_are_on_mysql
  

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/functional/db/test_ipam.py", line 160, in 
test_we_are_on_mysql
  self.cxt.session.execute("SELECT CURDATE()")
File 
"/home/classic/dev/redhat/openstack/neutron/.tox/functional/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1034, in execute
  bind, close_with_result=True).execute(clause, params or {})
File 
"/home/classic/dev/redhat/openstack/neutron/.tox/functional/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
  return meth(self, multiparams, params)
File 
"/home/classic/dev/redhat/openstack/neutron/.tox/functional/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 323, in _execute_on_connection
   

[Yahoo-eng-team] [Bug 1474069] [NEW] DeprecatedDecorators test does not setup fixtures correctly

2015-07-13 Thread Mike Bayer
Public bug reported:

this test appears to rely upon test suite setup in a different test,
outside of the test_backend_sql.py suite entirely.Below is a run of
this specific test, but you get the same error if you run all of
test_backend_sql at once as well.

[mbayer@thinkpad keystone]$ tox   -v  -e py27 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
using tox.ini: /home/mbayer/dev/jenkins_scripts/tmp/keystone/tox.ini
using tox-1.8.1 from /usr/lib/python2.7/site-packages/tox/__init__.pyc
py27 create: /home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27
  /home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox$ /usr/bin/python 
-mvirtualenv --setuptools --python /usr/bin/python2.7 py27 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-0.log
py27 installdeps: 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/requirements.txt, 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/test-requirements.txt
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/bin/pip install -U 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/requirements.txt 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/test-requirements.txt 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-1.log
py27 develop-inst: /home/mbayer/dev/jenkins_scripts/tmp/keystone
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/bin/pip install -U -e 
/home/mbayer/dev/jenkins_scripts/tmp/keystone 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-2.log
py27 runtests: PYTHONHASHSEED='3819984772'
py27 runtests: commands[0] | bash tools/pretty_tox.sh 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ /usr/bin/bash 
tools/pretty_tox.sh 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
 
running testr
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit} --list 
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit}  --load-list /tmp/tmpclgNWA
{0} 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
 [0.245028s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File keystone/tests/unit/test_backend_sql.py, line 995, in 
test_assignment_to_resource_api
self.config_fixture.config(fatal_deprecations=True)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/fixture.py,
 line 65, in config
self.conf.set_override(k, v, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 1823, in __inner
result = f(self, *args, **kwargs)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2100, in set_override
opt_info = self._get_opt_info(name, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2418, in _get_opt_info
raise NoSuchOptError(opt_name, group)
oslo_config.cfg.NoSuchOptError: no such option: fatal_deprecations


Captured pythonlogging:
~~~
Adding cache-proxy 'keystone.tests.unit.test_cache.CacheIsolatingProxy' to 
backend.
registered 'sha512_crypt' handler: <class 'passlib.handlers.sha2_crypt.sha512_crypt'>


==
Failed 1 tests - output below:
==

keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
-

Captured traceback:
~~~
Traceback (most recent call last):
  File keystone/tests/unit/test_backend_sql.py, line 995, in 
test_assignment_to_resource_api
self.config_fixture.config(fatal_deprecations=True)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/fixture.py,
 line 65, in config
self.conf.set_override(k, v, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 1823, in __inner
result = f(self, *args, **kwargs)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2100, in set_override
opt_info = self._get_opt_info(name, group)
  File 

[Yahoo-eng-team] [Bug 1445675] [NEW] missing index on virtual_interfaces can cause long queries that can cause timeouts in launching instances

2015-04-17 Thread Mike Bayer
;
+------+-------------+--------------------+------+---------------+-------+---------+-------+------+------------------------------------+
| id   | select_type | table              | type | possible_keys | key   | key_len | ref   | rows | Extra                              |
+------+-------------+--------------------+------+---------------+-------+---------+-------+------+------------------------------------+
|    1 | SIMPLE      | virtual_interfaces | ref  | vuidx         | vuidx | 111     | const |    1 | Using index condition; Using where |
+------+-------------+--------------------+------+---------------+-------+---------+-------+------+------------------------------------+
1 row in set (0.00 sec)


and we get 0.00 response time for both queries:

MariaDB [nova] SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'0a269012-cbc7-4093-9602-35f003a766c5'  LIMIT 1;
Empty set (0.00 sec)

MariaDB [nova] SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'0a269012-cbc7-4093-9602-35f003a766c4'  LIMIT 1;
+---------------------+------------+------------+---------+-------+---------------+------------+--------------------------------------+--------------------------------------+
| created_at          | updated_at | deleted_at | deleted | id    | address       | network_id | instance_uuid                        | uuid                                 |
+---------------------+------------+------------+---------+-------+---------------+------------+--------------------------------------+--------------------------------------+
| 2014-08-12 22:22:14 | NULL       | NULL       |       0 | 58393 | address_58393 | 22         | 41f1b859-8c5d-4c27-a52e-3e97652dfe7a | 0a269012-cbc7-4093-9602-35f003a766c4 |
+---------------------+------------+------------+---------+-------+---------------+------------+--------------------------------------+--------------------------------------+
1 row in set (0.00 sec)


Whether or not the index includes "deleted" doesn't really matter.  If we're
searching for UUIDs, the index finds that UUID row first and the deleted=0
check is then applied to just that row, which is cheap.

For an immediate fix,  I propose to add the aforementioned index to the
virtual_interfaces.uuid column.
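
A minimal sketch of that fix, written as an Alembic-style migration purely for
illustration (nova's actual migration tooling, revision plumbing and index name
may differ):

    from alembic import op

    def upgrade():
        # index the column the allocate_for_instance lookup filters on, so
        # the query no longer has to scan the whole table
        op.create_index('virtual_interfaces_uuid_idx',
                        'virtual_interfaces', ['uuid'])

    def downgrade():
        op.drop_index('virtual_interfaces_uuid_idx', 'virtual_interfaces')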

** Affects: nova
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445675

Title:
  missing index on virtual_interfaces can cause long queries that can
  cause timeouts in launching instances

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  In a load test where a nova environment w/ networking enabled was set
  up to have ~250K instances,  attempting to launch 50 instances would
  cause many to time out, with the error "Timeout while waiting on RPC
  response - topic: network, RPC method: allocate_for_instance".
  The tester isolated the latency here to queries against the
  virtual_interfaces table, which in this test is executed some 500
  times, spending ~.5 seconds per query for a total of 200 seconds.  An
  example query looks like:

  SELECT virtual_interfaces.created_at , virtual_interfaces.updated_at , 
virtual_interfaces.deleted_at , virtual_interfaces.deleted , 
virtual_interfaces.id , virtual_interfaces.address , 
virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'9774e729-7695-4e2b-a9b2-a104a4b020d0'
  LIMIT 1;

  Query profiling against this table /query directly proceeded as
  follows:

  I scripted up direct DB access to get 250K rows in a blank database:

  MariaDB [nova]> select count(*) from virtual_interfaces;
  +----------+
  | count(*) |
  +----------+
  |   250000 |
  +----------+
  1 row in set (0.09 sec)

  emitting the query when the row is found, on this particular system is
  returning in .03 sec:

  MariaDB [nova] SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid

[Yahoo-eng-team] [Bug 1431571] [NEW] ArchiveTestCase erroneously assumes the tables that are populated

2015-03-12 Thread Mike Bayer
Public bug reported:

Running subsets of Nova tests or individual tests within test_db_api
reveals a simple error in several of the tests within ArchiveTestCase.

A test such as test_archive_deleted_rows_2_tables attempts the
following:

1. places six rows into instance_id_mappings
2. places six rows into instances
3. runs the archive_deleted_rows() routine with a max of 7 rows to archive
4. runs a SELECT of instances and instance_id_mappings, and confirms that only 
5 remain.

Running this test directly with PYTHONHASHSEED=random will very easily
encounter failures such as:

Traceback (most recent call last):
  File 
/Users/classic/dev/redhat/openstack/nova/nova/tests/unit/db/test_db_api.py, 
line 7869, in test_archive_deleted_rows_2_tables
self.assertEqual(len(iim_rows) + len(i_rows), 5)
  File 
/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 8 != 5


or 

Traceback (most recent call last):
  File 
/Users/classic/dev/redhat/openstack/nova/nova/tests/unit/db/test_db_api.py, 
line 7872, in test_archive_deleted_rows_2_tables
self.assertEqual(len(iim_rows) + len(i_rows), 5)
  File 
/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 10 != 5


The reason is that the archive_deleted_rows() routine looks for rows in *all* 
tables, in *non-deterministic order*, e.g. by searching through 
models.__dict__.itervalues().   In the 8 != 5 case, there are rows present 
also in the instance_types table.  By PDBing into archive_deleted_rows during 
the test, we can see here:

ARCHIVED 4 ROWS FROM TABLE instances
ARCHIVED 3 ROWS FROM TABLE instance_types
Traceback (most recent call last):
...
testtools.matchers._impl.MismatchError: 8 != 5

that is, the archiver locates seven rows just between instances and
instance_types, then stops.  It never even gets to the
instance_id_mappings table.

The serious problem with the way this test is designed is that even if
we made it ignore certain tables, or made the ordering fixed, or anything
else, that would never keep the test from breaking again any time a new
table is added which contains rows when the test fixtures start.

The only solution to making these tests runnable in their current form
is to limit the listing of tables that are searched in
archive_deleted_rows; that is, the test needs to inject a fixture into
it.  The most straightforward way to achieve this would look like this:

 @require_admin_context
-def archive_deleted_rows(context, max_rows=None):
+def archive_deleted_rows(context, max_rows=None,
+                         _limit_tablenames_fixture=None):
     """Move up to max_rows rows from production tables to the corresponding
     shadow tables.
     """
@@ -5870,6 +5870,9 @@ def archive_deleted_rows(context, max_rows=None):
         if hasattr(model_class, "__tablename__"):
             tablenames.append(model_class.__tablename__)
     rows_archived = 0
+    if _limit_tablenames_fixture:
+        tablenames = set(tablenames).intersection(_limit_tablenames_fixture)
+
     for tablename in tablenames:
         rows_archived += archive_deleted_rows_for_table(context, tablename,
                                         max_rows=max_rows - rows_archived)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431571

Title:
  ArchiveTestCase erroneously assumes the tables that are populated

Status in OpenStack Compute (Nova):
  New

Bug description:
  Running subsets of Nova tests or individual tests within test_db_api
  reveals a simple error in several of the tests within ArchiveTestCase.

  A test such as test_archive_deleted_rows_2_tables attempts the
  following:

  1. places six rows into instance_id_mappings
  2. places six rows into instances
  3. runs the archive_deleted_rows() routine with a max of 7 rows to archive
  4. runs a SELECT of instances and instance_id_mappings, and confirms that 
only 5 remain.

  Running this test directly with PYTHONHASHSEED=random will very easily
  encounter failures such as:

  Traceback (most recent call last):
File 
/Users/classic/dev/redhat/openstack/nova/nova/tests/unit/db/test_db_api.py, 
line 7869, in test_archive_deleted_rows_2_tables
  self.assertEqual(len(iim_rows) + len(i_rows), 5)
File 

[Yahoo-eng-team] [Bug 1397796] [NEW] alembic v. 0.7.1 will support remove_fk and others not expected by heal_script

2014-11-30 Thread Mike Bayer
Public bug reported:

neutron/db/migration/alembic_migrations/heal_script.py seems to have a
hardcoded notion of what commands Alembic is prepared to pass within the
execute_alembic_command() call.   When Alembic 0.7.1 is released, the
tests in neutron.tests.unit.db.test_migration will fail as follows:

Traceback (most recent call last):
  File neutron/tests/unit/db/test_migration.py, line 194, in 
test_models_sync
self.db_sync(self.get_engine())
  File neutron/tests/unit/db/test_migration.py, line 136, in db_sync
migration.do_alembic_command(self.alembic_config, 'upgrade', 'head')
  File neutron/db/migration/cli.py, line 61, in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/command.py,
 line 165, in upgrade
script.run_env()
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/script.py,
 line 382, in run_env
util.load_python_file(self.dir, 'env.py')
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/util.py,
 line 241, in load_python_file
module = load_module_py(module_id, path)
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/compat.py,
 line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File neutron/db/migration/alembic_migrations/env.py, line 109, in 
module
run_migrations_online()
  File neutron/db/migration/alembic_migrations/env.py, line 100, in 
run_migrations_online
context.run_migrations()
  File string, line 7, in run_migrations
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/environment.py,
 line 742, in run_migrations
self.get_context().run_migrations(**kw)
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/migration.py,
 line 305, in run_migrations
step.migration_fn(**kw)
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py,
 line 32, in upgrade
heal_script.heal()
  File neutron/db/migration/alembic_migrations/heal_script.py, line 81, 
in heal
execute_alembic_command(el)
  File neutron/db/migration/alembic_migrations/heal_script.py, line 92, 
in execute_alembic_command
METHODS[command[0]](*command[1:])
KeyError: 'remove_fk'


I'll send a review for the obvious fix though I have a suspicion there's
something more deliberate going on here, so consider this just a heads
up!
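
The "obvious fix" would presumably look something like this sketch
(illustrative only, not the submitted review): skip commands that heal_script
has no handler registered for, rather than raising KeyError.

    def execute_alembic_command(command):
        handler = METHODS.get(command[0])
        if handler is None:
            # e.g. 'remove_fk' and friends introduced by Alembic >= 0.7.1;
            # nothing for heal_script to do with them
            return
        handler(*command[1:])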

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397796

Title:
  alembic v. 0.7.1 will support remove_fk and others not expected by
  heal_script

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  neutron/db/migration/alembic_migrations/heal_script.py seems to have a
  hardcoded notion of what commands Alembic is prepared to pass within
  the execute_alembic_command() call.   When Alembic 0.7.1 is released,
  the tests in neutron.tests.unit.db.test_migration will fail as
  follows:

  Traceback (most recent call last):
File neutron/tests/unit/db/test_migration.py, line 194, in 
test_models_sync
  self.db_sync(self.get_engine())
File neutron/tests/unit/db/test_migration.py, line 136, in db_sync
  migration.do_alembic_command(self.alembic_config, 'upgrade', 'head')
File neutron/db/migration/cli.py, line 61, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/command.py,
 line 165, in upgrade
  script.run_env()
File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/script.py,
 line 382, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/util.py,
 line 241, in load_python_file
  module = load_module_py(module_id, path)
File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/compat.py,
 line 79, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File neutron/db/migration/alembic_migrations/env.py, line 109, in 
module
  run_migrations_online()
File neutron

[Yahoo-eng-team] [Bug 1380823] [NEW] outerjoins used as a result of plugin architecture are inefficient

2014-10-13 Thread Mike Bayer
Public bug reported:

Hi there -

I'm posting this as a bug sort of as a means to locate who best to talk
about a. how critical these queries are and b. what other approaches
would be feasible (I'm zzzeek on IRC).

We're talking here about the plugin architecture in
neutron/db/common_db_mixin.py, where the register_model_query_hook()
method presents a way of applying modifiers to queries.This system
appears to be used by:  db/external_net_db.py, plugins/ml2/plugin.py,
db/portbindings_db.py, plugins/metaplugin/meta_neutron_plugin.py.

What the use of the hook has in common in these cases is that a LEFT
OUTER JOIN is applied to the Query early on, in anticipation of either
the filter_hook or result_filters being applied to the query, but only
*possibly*, and then even within those hooks as supplied, again only
*possibly*.   It's these two *possiblies* that leads to the use of
LEFT OUTER JOIN - this extra table is present in the query's FROM
clause, but if we decide we don't need to filter on it, it's OK!  it's
just a left outer join.  And even, in the case of external_net_db.py,
maybe we even add a criteria WHERE extra model id IS NULL, that is
doing a not contains off of this left outer join.

The result is that we can get a query like this:

SELECT a.* FROM a LEFT OUTER JOIN b ON a.id=b.aid WHERE b.id IS NOT
NULL

this can happen for example if using External_net_db_mixin, the
outerjoin to ExternalNetwork is created, _network_filter_hook applies
expr.or_(ExternalNetwork.network_id != expr.null()), and that's it.

The database will usually have a much easier time if this query is
expressed correctly:

   SELECT a.* FROM a INNER JOIN b ON a.id=b.aid


The reason this bugs me is that the SQL output is being compromised as a
result of how the plugin system is organized here.  Preferable would be a
system where the plugins are organized into fewer functions that perform
all the checking at once, or where the plugin system has enough granularity
to know whether it needs to apply an optional JOIN or not.
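
To make the shape of the problem concrete, here is an illustrative SQLAlchemy
sketch (the session and the Network / ExternalNetwork models are placeholders,
not neutron's actual hook code):

    from sqlalchemy.sql import expression as expr

    # hook-style construction: the outer join is added "just in case", and a
    # later filter hook may then tack on the not-null criterion, giving the
    # LEFT OUTER JOIN ... WHERE b.id IS NOT NULL form shown above
    q = session.query(Network).outerjoin(
        ExternalNetwork, Network.id == ExternalNetwork.network_id)
    q = q.filter(ExternalNetwork.network_id != expr.null())

    # the same result set, expressed as the INNER JOIN the database prefers
    q = session.query(Network).join(
        ExternalNetwork, Network.id == ExternalNetwork.network_id)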

There's a lot of ways I could propose reorganizing this but I wanted to
talk to someone on IRC to make sure that no external projects are using
these hooks, and to get some other background.

Overall long term I seek to consolidate the use of model_query into
oslo.db, so I'm looking to take in all of its variants into a common
form.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1380823

Title:
  outerjoins used as a result of plugin architecture are inefficient

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi there -

  I'm posting this as a bug sort of as a means to locate who best to
  talk about a. how critical these queries are and b. what other
  approaches would be feasible (I'm zzzeek on IRC).

  We're talking here about the plugin architecture in
  neutron/db/common_db_mixin.py, where the register_model_query_hook()
  method presents a way of applying modifiers to queries.This system
  appears to be used by:  db/external_net_db.py, plugins/ml2/plugin.py,
  db/portbindings_db.py, plugins/metaplugin/meta_neutron_plugin.py.

  What the use of the hook has in common in these cases is that a LEFT
  OUTER JOIN is applied to the Query early on, in anticipation of either
  the filter_hook or result_filters being applied to the query, but only
  *possibly*, and then even within those hooks as supplied, again only
  *possibly*.   It's these two *possiblies* that leads to the use of
  LEFT OUTER JOIN - this extra table is present in the query's FROM
  clause, but if we decide we don't need to filter on it, it's OK!  it's
  just a left outer join.  And even, in the case of external_net_db.py,
  maybe we even add a criteria WHERE extra model id IS NULL, that is
  doing a not contains off of this left outer join.

  The result is that we can get a query like this:

  SELECT a.* FROM a LEFT OUTER JOIN b ON a.id=b.aid WHERE b.id IS
  NOT NULL

  this can happen for example if using External_net_db_mixin, the
  outerjoin to ExternalNetwork is created, _network_filter_hook applies
  expr.or_(ExternalNetwork.network_id != expr.null()), and that's it.

  The database will usually have a much easier time if this query is
  expressed correctly:

 SELECT a.* FROM a INNER JOIN b ON a.id=b.aid

  
  The reason this bugs me is that the SQL output is being compromised as a
result of how the plugin system is organized here.  Preferable would be a
system where the plugins are organized into fewer functions that perform
all the checking at once, or where the plugin system has enough granularity
to know whether it needs to apply an optional JOIN or not.

  There's a lot of ways I could propose reorganizing this but I wanted
  to talk to someone on IRC to make sure that no external 

[Yahoo-eng-team] [Bug 1375467] [NEW] db deadlock on _instance_update()

2014-09-29 Thread Mike Bayer
Public bug reported:

continuing from the same pattern as that of
https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
unhandled deadlocks on derivatives of _instance_update(), such as the
stacktrace below.  As _instance_update() is a point of transaction
demarcation based on its use of get_session(), the @_retry_on_deadlock
should be added to this method.

Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
133, in _dispatch_and_reply\
incoming.message))\
File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
176, in _dispatch\
return self._do_dispatch(endpoint, method, ctxt, args)\
File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
122, in _do_dispatch\
result = getattr(endpoint, method)(ctxt, **new_args)\
File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 887, in 
instance_update\
service)\
File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py, line 139, 
in inner\
return func(*args, **kwargs)\
File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 130, in 
instance_update\
context, instance_uuid, updates)\
File /usr/lib/python2.7/site-packages/nova/db/api.py, line 742, in 
instance_update_and_get_original\
 columns_to_join=columns_to_join)\
File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 164, in 
wrapper\
return f(*args, **kwargs)\
File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2208, 
in instance_update_and_get_original\
 columns_to_join=columns_to_join)\
File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2299, 
in _instance_update\
session.add(instance_ref)\
File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 447, 
in __exit__\
self.rollback()\
File /usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, line 
58, in __exit__\
compat.reraise(exc_type, exc_value, exc_tb)\
File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 444, 
in __exit__\
self.commit()\
File 
/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py, line 443, in _wrap\
_raise_if_deadlock_error(e, self.bind.dialect.name)\
File 
/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py, line 427, in _raise_if_deadlock_error\
raise exception.DBDeadlock(operational_error)\
DBDeadlock: (OperationalError) (1213, \'Deadlock found when trying to get lock; 
try restarting transaction\') None None\

** Affects: nova
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375467

Title:
  db deadlock on _instance_update()

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  continuing from the same pattern as that of
  https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
  unhandled deadlocks on derivatives of _instance_update(), such as the
  stacktrace below.  As _instance_update() is a point of transaction
  demarcation based on its use of get_session(), the @_retry_on_deadlock
  should be added to this method.

  Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 133, in _dispatch_and_reply\
  incoming.message))\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 176, in _dispatch\
  return self._do_dispatch(endpoint, method, ctxt, args)\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 122, in _do_dispatch\
  result = getattr(endpoint, method)(ctxt, **new_args)\
  File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 887, 
in instance_update\
  service)\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py, line 
139, in inner\
  return func(*args, **kwargs)\
  File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 130, 
in instance_update\
  context, instance_uuid, updates)\
  File /usr/lib/python2.7/site-packages/nova/db/api.py, line 742, in 
instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 164, 
in wrapper\
  return f(*args, **kwargs)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2208, 
in instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2299, 
in _instance_update\
  session.add(instance_ref)\
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 
447, in __exit__\
  self.rollback()\
  File /usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, 
line 58, in __exit__

[Yahoo-eng-team] [Bug 1347891] [NEW] mis-use of XML canonicalization in keystone tests

2014-07-23 Thread Mike Bayer
Public bug reported:

Running the keystone suite on a new Fedora VM, I get many failures of the
same variety: an XML comparison failing, in a non-deterministic way:

[classic@localhost keystone]$ tox -e py27 --  
keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
py27 develop-inst-noop: /home/classic/dev/redhat/keystone
py27 runtests: PYTHONHASHSEED='2335155056'
py27 runtests: commands[0] | python setup.py testr --slowest 
--testr-args=keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
running testr
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ ./keystone/tests --list 
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ ./keystone/tests  --load-list 
/tmp/tmpCKSHDr
==
FAIL: keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
tags: worker-0
--
Empty attachments:
  pythonlogging:''-1
  stderr
  stdout

pythonlogging:'': {{{
Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
Deprecated: keystone.common.kvs.Base is deprecated as of Icehouse in favor of 
keystone.common.kvs.KeyValueStore and may be removed in Juno.
Registering Dogpile Backend 
keystone.tests.test_kvs.KVSBackendForcedKeyMangleFixture as 
openstack.kvs.KVSBackendForcedKeyMangleFixture
Registering Dogpile Backend keystone.tests.test_kvs.KVSBackendFixture as 
openstack.kvs.KVSBackendFixture
KVS region configuration for token-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 'keystone.kvs.backend': 
'openstack.kvs.Memory', 'keystone.kvs.arguments.lock_timeout': 6}
Using default dogpile sha1_mangle_key as KVS region token-driver key_mangler
It is recommended to only use the base key-value-store implementation for the 
token driver for testing purposes.  Please use 
keystone.token.backends.memcache.Token or keystone.token.backends.sql.Token 
instead.
KVS region configuration for os-revoke-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 'keystone.kvs.backend': 
'openstack.kvs.Memory', 'keystone.kvs.arguments.lock_timeout': 6}
Using default dogpile sha1_mangle_key as KVS region os-revoke-driver key_mangler
Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed to 
event `identity.OS-TRUST:trust.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` subscribed 
to event `identity.OS-OAUTH1:consumer.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_token.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._role_callback` subscribed to 
event `identity.role.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.disabled`.
Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.disabled`.
Callback: `keystone.contrib.revoke.core.Manager._domain_callback` subscribed to 
event `identity.domain.disabled`.
Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of 
Icehouse in favor of support for application/json only and may be removed in 
K.
Auth token not in the request header. Will not build auth context.
arg_dict: {}
}}}

Traceback (most recent call last):
  File 
/home/classic/dev/redhat/keystone/.tox/py27/lib/python2.7/site-packages/mock.py,
 line 1201, in patched
return func(*args, **keywargs)
  File keystone/tests/test_versions.py, line 460, in test_v3_disabled
self.assertThat(data, matchers.XMLEquals(expected))
  File 
/home/classic/dev/redhat/keystone/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 406, in assertThat
raise mismatch_error
MismatchError: expected = <version xmlns="http://docs.openstack.org/identity/api/v2.0" id="v2.0" status="stable" updated="2014-04-17T00:00:00Z">
  <media-types>
    <media-type base="application/json" type="application/vnd.openstack.identity-v2.0+json"/>
    <media-type base="application/xml" type="application/vnd.openstack.identity-v2.0+xml"/>
  </media-types>
  <links>
    <link href="http://localhost:26739/v2.0/" rel="self"/>
    <link href="http://docs.openstack.org/" rel="describedby" type="text/html"/>
  </links>
  <link href="http://localhost:26739/v2.0/" rel="self"/>
  <link href="http://docs.openstack.org/" rel="describedby" type="text/html"/>
</version>

actual = <version xmlns="http://docs.openstack.org/identity/api/v2.0" id="v2.0" status="stable" updated="2014-04-17T00:00:00Z">

[Yahoo-eng-team] [Bug 1346673] [NEW] fixtures in neutron.tests.base blow away default database config

2014-07-21 Thread Mike Bayer
Public bug reported:

Really trying to narrow this one down fully, and just putting this up
because this is as far as I've gotten.

Basically, the lines in neutron/tests/base.py:

  line 159:self.addCleanup(CONF.reset)
  line 182:self.useFixture(self.messaging_conf)

cause cfg.CONF to get totally wiped out in the database config.  I
don't yet understand why this is the case.

if you then run any test that extends BaseTestCase, and then run
neutron/tests/unit/test_db_plugin.py - NeutronDbPluginV2AsMixinTestCase
in the same process, these two tests fail:

Traceback (most recent call last):
  File 
/Users/classic/dev/redhat/openstack/neutron/neutron/tests/unit/test_db_plugin.py,
 line 3943, in setUp
self.plugin = importutils.import_object(DB_PLUGIN_KLASS)
  File 
/Users/classic/dev/redhat/openstack/neutron/neutron/openstack/common/importutils.py,
 line 38, in import_object
return import_class(import_str)(*args, **kwargs)
  File 
/Users/classic/dev/redhat/openstack/neutron/neutron/db/db_base_plugin_v2.py, 
line 72, in __init__
db.configure_db()
  File /Users/classic/dev/redhat/openstack/neutron/neutron/db/api.py, line 
45, in configure_db
register_models()
  File /Users/classic/dev/redhat/openstack/neutron/neutron/db/api.py, line 
68, in register_models
facade = _create_facade_lazily()
  File /Users/classic/dev/redhat/openstack/neutron/neutron/db/api.py, line 
34, in _create_facade_lazily
_FACADE = session.EngineFacade.from_config(cfg.CONF, sqlite_fk=True)
  File 
/Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py,
 line 977, in from_config
retry_interval=conf.database.retry_interval)
  File 
/Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py,
 line 893, in __init__
**engine_kwargs)
  File 
/Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py,
 line 650, in create_engine
if "sqlite" in connection_dict.drivername:
AttributeError: 'NoneType' object has no attribute 'drivername'

I'm getting this error running tox on a subset of tests, however it's
difficult to reproduce as the subprocesses have to work out just right.

To reproduce, just install nose and do:

.tox/py27/bin/nosetests -v
neutron.tests.unit.test_db_plugin:DbModelTestCase
neutron.tests.unit.test_db_plugin:NeutronDbPluginV2AsMixinTestCase

That is, DbModelTestCase is a harmless test but because it runs
base.BaseTestCase first, cfg.CONF gets blown away.

I don't know what the solution should be here, cfg.CONF shouldn't be
reset but I don't know what messaging_conffixture.ConfFixture is or
how CONF.reset was supposed to work as it blows away DB config.  The
cfg.CONF in the first place seems to get set up via this path:

  <string>(7)exec2()
  /Users/classic/dev/redhat/openstack/neutron/neutron/tests/unit/test_db_plugin.py(26)<module>()
-> from neutron.api import extensions
  /Users/classic/dev/redhat/openstack/neutron/neutron/api/extensions.py(31)<module>()
-> from neutron import manager
  /Users/classic/dev/redhat/openstack/neutron/neutron/manager.py(20)<module>()
-> from neutron.common import rpc as n_rpc
  /Users/classic/dev/redhat/openstack/neutron/neutron/common/rpc.py(22)<module>()
-> from neutron import context
  /Users/classic/dev/redhat/openstack/neutron/neutron/context.py(26)<module>()
-> from neutron import policy
  /Users/classic/dev/redhat/openstack/neutron/neutron/policy.py(55)<module>()
-> cfg.CONF.import_opt('policy_file', 'neutron.common.config')
  /Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/config/cfg.py(1764)import_opt()
-> __import__(module_str)
  /Users/classic/dev/redhat/openstack/neutron/neutron/common/config.py(135)<module>()
-> max_overflow=20, pool_timeout=10)
  /Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/db/options.py(145)set_defaults()
-> conf.register_opts(database_opts, group='database')

e.g. oslo.db set_defaults() sets it up.
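
If that registration path is the only one, then one illustrative band-aid
(shown only to make the mechanics concrete; the connection values are
placeholders, not a proposed fix) is to re-apply oslo.db's defaults after the
conf has been reset:

    from oslo.db import options as db_options

    def restore_database_defaults(conf):
        # re-register the [database] group options that CONF.reset and the
        # messaging ConfFixture appear to wipe out
        db_options.set_defaults(conf, connection='sqlite://',
                                sqlite_db='neutron.sqlite')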

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346673

Title:
  fixtures in neutron.tests.base blow away default database config

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Really trying to narrow this one down fully, and just putting this up
  because this is as far as I've gotten.

  Basically, the lines in neutron/tests/base.py:

line 159:self.addCleanup(CONF.reset)
line 182:self.useFixture(self.messaging_conf)

  cause cfg.CONF to get totally wiped out in the database config.  I
  don't yet understand why this is the case.

  if you then run any test that extends BaseTestCase, and then run
  neutron/tests/unit/test_db_plugin.py -
  NeutronDbPluginV2AsMixinTestCase in the 

[Yahoo-eng-team] [Bug 1329482] [NEW] test_quota.py QuotaReserveSqlAlchemyTestCase fails to clean up changes to QUOTA_SYNC_FUNCTIONS

2014-06-12 Thread Mike Bayer
/dev/redhat/nova/nova/db/sqlalchemy/api.py, line 202, in 
wrapped
return f(*args, **kwargs)
  File /Users/classic/dev/redhat/nova/nova/db/sqlalchemy/api.py, line 3130, 
in quota_reserve
updates = sync(elevated, project_id, user_id, session)
  File /Users/classic/dev/redhat/nova/nova/tests/test_quota.py, line 2101, in 
sync
self.sync_called.add(res_name)
AttributeError: 'QuotaReserveSqlAlchemyTestCase' object has no attribute 
'sync_called'

The symptom is cryptic here, but essentially the callables swapped in by
this test refer to self as a QuotaReserveSqlAlchemyTestCase object,
which has long since been torn down and no longer has the sync_called
set associated with it.

I'd love to use mock.patch() for this kind of thing, but for the moment
I'm going to submit a straightforward patch that restores
QUOTA_SYNC_FUNCTIONS.
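
A minimal sketch of that kind of restore, assuming it lives in the test's
setUp() (the module alias and placement are illustrative):

    from nova.db.sqlalchemy import api as sqa_api

    def setUp(self):
        super(QuotaReserveSqlAlchemyTestCase, self).setUp()
        original = dict(sqa_api.QUOTA_SYNC_FUNCTIONS)

        def _restore():
            # put back whatever sync callables the test swapped in
            sqa_api.QUOTA_SYNC_FUNCTIONS.clear()
            sqa_api.QUOTA_SYNC_FUNCTIONS.update(original)

        self.addCleanup(_restore)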

** Affects: nova
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: In Progress


** Tags: low-hanging-fruit

** Changed in: nova
 Assignee: (unassigned) => Mike Bayer (zzzeek)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329482

Title:
  test_quota.py QuotaReserveSqlAlchemyTestCase fails to clean up changes
  to QUOTA_SYNC_FUNCTIONS

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  In nova/tests/test_quota.py - QuotaReserveSqlAlchemyTestCase.setUp(),
  alternate function definitions are swapped into
  nova.db.sqlalchemy.api.QUOTA_SYNC_FUNCTIONS, however they are not
  reverted in any corresponding tearDown() method.   I'm guessing this
  isn't typically noticed as when using either nose or the testr-style
  tools, test_quota.py is run well after other tests which rely on these
  functions, such as those in nova/tests/api/ec2/test_cinder_cloud.py.
  However, I've been using py.test which has a different natural test
  ordering.  The issue can be seen using Nose by running
  test_cinder_cloud after test_quota:

  $ ../.venv/bin/nosetests -v nova/tests/test_quota.py 
nova/tests/api/ec2/test_cinder_cloud.py -x
  nova.tests.test_quota.BaseResourceTestCase.test_no_flag ... ok
  nova.tests.test_quota.BaseResourceTestCase.test_quota_no_project ... ok
  nova.tests.test_quota.BaseResourceTestCase.test_quota_with_project ... ok
  nova.tests.test_quota.BaseResourceTestCase.test_with_flag ... ok

  [ ... tests continue to run ... ]

  
nova.tests.test_quota.QuotaReserveSqlAlchemyTestCase.test_quota_reserve_until_refresh
 ... ok
  nova.tests.api.ec2.test_cinder_cloud.CinderCloudTestCase.test_create_image 
... 
/Users/classic/dev/redhat/.venv/lib/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:33:
 SAWarning: The IN-predicate on instances.uuid was invoked with an empty 
sequence. This results in a contradiction, which nonetheless can be expensive 
to evaluate.  Consider alternative strategies for improved performance.
return o[0](self, self.expr, op, *(other + o[1:]), **kwargs)
  ERROR

  ==
  ERROR: 
nova.tests.api.ec2.test_cinder_cloud.CinderCloudTestCase.test_create_image
  --
  _StringException: pythonlogging:'': {{{
  AUDIT [nova.service] Starting conductor node (version 2014.2)
  INFO [nova.virt.driver] Loading compute driver 'nova.virt.fake.FakeDriver'
  AUDIT [nova.service] Starting compute node (version 2014.2)
  AUDIT [nova.compute.resource_tracker] Auditing locally available compute 
resources
  AUDIT [nova.compute.resource_tracker] Free ram (MB): 7680
  AUDIT [nova.compute.resource_tracker] Free disk (GB): 1028
  AUDIT [nova.compute.resource_tracker] Free VCPUS: 1
  AUDIT [nova.compute.resource_tracker] PCI stats: []
  INFO [nova.compute.resource_tracker] Compute_service record created for 
93851743013149aabf5b0a5492cef513:fake-mini
  AUDIT [nova.service] Starting scheduler node (version 2014.2)
  INFO [nova.network.driver] Loading network driver 'nova.network.linux_net'
  AUDIT [nova.service] Starting network node (version 2014.2)
  AUDIT [nova.service] Starting consoleauth node (version 2014.2)
  INFO [nova.virt.driver] Loading compute driver 'nova.virt.fake.FakeDriver'
  AUDIT [nova.service] Starting compute node (version 2014.2)
  AUDIT [nova.compute.resource_tracker] Auditing locally available compute 
resources
  AUDIT [nova.compute.resource_tracker] Free ram (MB): 7680
  AUDIT [nova.compute.resource_tracker] Free disk (GB): 1028
  AUDIT [nova.compute.resource_tracker] Free VCPUS: 1
  AUDIT [nova.compute.resource_tracker] PCI stats: []
  INFO [nova.compute.resource_tracker] Compute_service record created for 
d207a83c4c3f4b0ab622668b19210a10:fake-mini
  WARNING [nova.service] Service killed that has no database entry
  }}}

  Traceback (most recent call last):
File 
/Users/classic/dev