Mike,

For the "typical" case, your proposal sounds reasonable to me. That should protect against cross-session locking while still getting the benefits of testing DML without committing to disk.
The issue I was originally raising is, of course, the "special" case -- testing of migrations -- which, I think, could be solved in much the same way. Given N test runners, create N empty schemata, and hand each migration-test-runner a schema from that pool. When that test runner is done, drop and recreate that schema. AIUI, Nodepool is already doing something similar here:

https://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/tests/__init__.py#n71

Regards,
Devananda

On Mon, Jun 9, 2014 at 12:58 PM, Mike Bayer <mba...@redhat.com> wrote:
>
> On Jun 9, 2014, at 1:08 PM, Mike Bayer <mba...@redhat.com> wrote:
>
>>
>> On Jun 9, 2014, at 12:50 PM, Devananda van der Veen
>> <devananda....@gmail.com> wrote:
>>
>>> There may be some problems with MySQL when testing parallel writes in
>>> different non-committing transactions, even in READ COMMITTED mode,
>>> due to InnoDB locking, if the queries use non-unique secondary indexes
>>> for UPDATE or SELECT..FOR UPDATE queries. This is done by the
>>> "with_lockmode('update')" SQLAlchemy phrase, and is used in ~10 places
>>> in Nova. So I would not recommend this approach, even though, in
>>> principle, I agree it would be a much more efficient way of testing
>>> database reads/writes.
>>>
>>> More details here:
>>> http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
>>> http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
>>
>> OK, but just to clarify my understanding, what is the approach to testing
>> writes in parallel right now? Are we doing CREATE DATABASE for two entirely
>> distinct databases with some kind of generated name for each one?
>> Otherwise, if the parallel tests are against the same database, this issue
>> exists regardless (unless autocommit mode is used; is FOR UPDATE accepted
>> under those conditions?)
>
> Took a look and this seems to be the case, from oslo.db:
>
>     def create_database(engine):
>         """Provide temporary user and database for each particular
>         test."""
>         driver = engine.name
>
>         auth = {
>             'database': ''.join(random.choice(string.ascii_lowercase)
>                                 for i in moves.range(10)),
>             # ...
>
>         sqls = [
>             "drop database if exists %(database)s;",
>             "create database %(database)s;"
>         ]
>
> Just thinking out loud here; I'll move these ideas to a new wiki page after
> this post. My idea now is that OK, we provide ad-hoc databases for tests,
> but look into the idea that we create N ad-hoc databases, corresponding to
> parallel test runs - e.g. if we are running five tests concurrently, we make
> five databases. Tests that use a database will be dished out among this
> pool of available schemas. In the *typical* case (which means not the case
> that we're testing actual migrations; that's a special case) we build up the
> schema on each database via migrations or even create_all() just once, run
> tests within rolled-back transactions one-per-database, then the DBs are
> torn down when the suite is finished.
>
> Sorry for the thread hijack.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
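[Editor's sketch] The N-schemata pool for migration-test runners discussed above could look roughly like this. This is a minimal sketch, not oslo.db or Nodepool code: `SchemaPool`, `acquire`/`release`, and the `recreate` callback are illustrative names, and a real implementation would have `recreate` issue DROP DATABASE / CREATE DATABASE against MySQL.

```python
import queue


class SchemaPool:
    """Hand out one pre-created schema per test runner; recycle on return.

    ``recreate`` is a callable taking a schema name. In a real setup it
    would drop and re-create that schema so the next runner starts clean.
    """

    def __init__(self, names, recreate):
        self._recreate = recreate
        self._free = queue.Queue()
        for name in names:
            self._free.put(name)

    def acquire(self):
        # Blocks until a schema is free, so at most N runners hold one.
        return self._free.get()

    def release(self, name):
        # Reset the schema before any other runner can pick it up.
        self._recreate(name)
        self._free.put(name)


# Usage: N runners share N schemata.
recreated = []
pool = SchemaPool(["test_0", "test_1"], recreated.append)
s = pool.acquire()
# ... run one migration test against schema `s` ...
pool.release(s)  # schema is dropped/recreated, then returned to the pool
```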
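[Editor's sketch] The rolled-back-transaction idea for the *typical* case -- build the schema once, run each test's DML inside a transaction, roll back instead of committing -- could be illustrated like this. To keep the sketch self-contained it uses stdlib sqlite3 rather than MySQL/SQLAlchemy; the function and table names are illustrative only.

```python
import sqlite3

# Build the schema once (stands in for running migrations or create_all()).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()


def run_test_in_rolled_back_txn(conn, dml):
    """Run one test's DML, then roll back so the next test sees a clean DB."""
    try:
        dml(conn)
    finally:
        conn.rollback()


run_test_in_rolled_back_txn(
    conn, lambda c: c.execute("INSERT INTO instances (name) VALUES ('vm1')"))

# After rollback, the table is empty again for the next test.
count = conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0]
```

As the thread notes, this pattern is efficient but interacts badly with InnoDB's FOR UPDATE locking when tests in separate uncommitted transactions touch the same rows, which is what motivates the one-database-per-runner pool.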