On 07/08/2013 04:35 PM, Clint Byrum wrote:
Excerpts from Adam Young's message of 2013-07-08 13:18:55 -0700:
On 07/01/2013 01:35 PM, Clint Byrum wrote:
The way the new keystone-manage command "token_flush" works right now
is quite broken by MySQL and InnoDB's gap locking behavior:
https://bugs.launchpad.net/1188378
Presumably other SQL databases like PostgreSQL will have similar problems
with doing massive deletes, but I am less familiar with them.
I am trying to solve this in keystone, and my first attempt is here:
https://review.openstack.org/#/c/32044/
However, MySQL does not support using "LIMIT" in a sub-query that
feeds an IN() clause, so that approach will not work. Likewise,
sqlalchemy does not support the MySQL-specific extension to DELETE
that allows it to have a LIMIT clause.
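For what it's worth, the usual workaround for MySQL's restriction is to wrap the limited SELECT in a derived table, which MySQL does accept inside IN(). A minimal sketch of the pattern, using sqlite3 only so it is runnable here (sqlite happens to allow both forms; the table and column names merely mirror keystone's token table and are assumptions):

```python
import sqlite3

# Hypothetical schema loosely mirroring keystone's token table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token (id INTEGER PRIMARY KEY, expires REAL)")
conn.executemany("INSERT INTO token (id, expires) VALUES (?, ?)",
                 [(i, float(i)) for i in range(10)])

# MySQL rejects a LIMIT directly inside an IN() subquery:
#   DELETE FROM token WHERE id IN
#       (SELECT id FROM token WHERE expires < 100 LIMIT 5);
#   -> ERROR 1235 ("doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'")
# Wrapping the limited SELECT in a derived table sidesteps that restriction:
conn.execute("""
    DELETE FROM token WHERE id IN (
        SELECT id FROM (
            SELECT id FROM token WHERE expires < 100 LIMIT 5
        ) AS oldest
    )
""")
remaining = conn.execute("SELECT COUNT(*) FROM token").fetchone()[0]
print(remaining)  # 5 of the original 10 rows are left
```

Whether the derived-table form avoids the gap-locking problem in practice would still need to be verified against InnoDB, since the optimizer may materialize the inner query differently across MySQL versions.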
Now, I can do some hacky things, like just deleting all of the expired
tokens from the oldest single second, but that could also potentially
be millions of tokens, and thus, millions of gaps to lock.
So there is just not one approach that works for all databases, and we
have to have a special mode for MySQL.
I was wondering if anybody has suggestions and/or examples of how to do
that with sqlalchemy.
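One portable shape for this, which avoids both the LIMIT-in-subquery restriction and huge single transactions, is to select a bounded batch of expired ids first and then delete exactly those rows, committing after each batch so every transaction touches (and, in InnoDB, locks) only a small known set of index entries. A sketch of that loop, using sqlite3 so it is runnable here rather than sqlalchemy against MySQL; the table name, column name, and batch size are illustrative assumptions:

```python
import sqlite3

BATCH = 3  # tiny for illustration; likely thousands in a real deployment

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token (id INTEGER PRIMARY KEY, expires REAL)")
now = 100.0
# 8 expired tokens, 2 still valid.
conn.executemany("INSERT INTO token (id, expires) VALUES (?, ?)",
                 [(i, now - 1 if i < 8 else now + 1) for i in range(10)])

deleted_total = 0
while True:
    # Pick a bounded batch of expired rows first...
    ids = [r[0] for r in conn.execute(
        "SELECT id FROM token WHERE expires < ? LIMIT ?", (now, BATCH))]
    if not ids:
        break
    # ...then delete exactly those rows by primary key, so each
    # transaction only has to lock the specific index entries it removes.
    placeholders = ",".join("?" * len(ids))
    cur = conn.execute(
        "DELETE FROM token WHERE id IN (%s)" % placeholders, ids)
    conn.commit()  # keep each transaction short
    deleted_total += cur.rowcount

print(deleted_total)  # 8 expired tokens removed across three batches
remaining = conn.execute("SELECT COUNT(*) FROM token").fetchone()[0]
```

The two-step select-then-delete costs an extra round trip per batch, but the delete itself becomes a primary-key IN() list, which any of the backends under discussion should handle without special-casing.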
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
In general, if you have millions of roles, you need a real database. I
would not try to work through SQL Alchemy for this. Instead, you
probably just want to make sure that the token_flush is run fairly
regularly on your database.
I'm not sure I understand you.
* I am asking about millions of tokens, not roles.
Heh, I meant rows, and somehow typed roles.
* I am asking about MySQL.. presumably a "real" database.
I have to admit I am a bit of a PostgreSQL bigot. I don't really
consider MySQL a real database, although it has improved a lot over the
years. I am not up to speed on "InnoDB's gap locking behavior", but it is
not something I would expect to be a problem in PostgreSQL.
* In the bug, I am suggesting that running token_flush once every
second will be _a disaster_ on a busy site with MySQL because of
the gap locking behavior in InnoDB. We need to delete a small number
per transaction.
Once every second would be strange indeed. I would think maybe once
every five minutes or so. Schedule your cleanup in accordance with your
deployment and usage.
Deleting a chunk of tokens in bulk would be preferable to doing
client-side iteration; I can't see how that would not be the case.