On 05/26/2017 10:56 AM, Dan Smith wrote:
As most of the upgrade issues center around database migrations, we
discussed some of the potential pitfalls at length. One approach was to
roll up all DB migrations into a single repository and run all upgrades
for a given project in one step. Another was to simply have multiple
Python virtual environments and just run in-line migrations from a
version-specific venv (this is what the OSA tooling does). Does one way
work better than the other? Any thoughts on how this could be better?
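
For the venv approach, something like this is roughly what I'd picture
(a minimal sketch only; the venv path, the "nova==15.*" pin, and using
nova-manage as the migration entry point are illustrative choices, not
a recommendation):

    import subprocess

    # Build a throwaway venv pinned to one release's code, then run
    # that release's schema migrations from inside it. The pin and
    # paths are made up for illustration.
    subprocess.run(["python3", "-m", "venv", "/opt/venvs/ocata"],
                   check=True)
    pip = "/opt/venvs/ocata/bin/pip"
    subprocess.run([pip, "install", "nova==15.*"], check=True)
    subprocess.run(["/opt/venvs/ocata/bin/nova-manage", "db", "sync"],
                   check=True)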

IMHO, and speaking from a Nova perspective, I think that maintaining a
separate repo of migrations is a bad idea. We occasionally have to fix a
migration to handle a case where someone is stuck and can't move past a
certain revision due to some situation that was not originally
understood. If you have a separate copy of our migrations, you wouldn't
get those fixes. Nova hasn't compacted migrations in a while anyway, so
there's not a whole lot of value there I think.


+1 I think it's very important that migration logic not be duplicated. Nova's (and everyone else's) migration files carry the information on how to move between specific schema versions. Any concatenation of these into an effective "N+X" migration should be done on the fly as much as possible.



The other thing to consider is that our _schema_ migrations often
require _data_ migrations to complete before moving on. That means you
really have to move to some milestone version of the schema, then
move/transform data, and then move to the next milestone. Since we
manage those according to releases, those are the milestones that are
most likely to be successful if you're stepping through things.
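
Concretely, the ordering looks something like this (a sketch only; the
release list is hypothetical, and you'd want to verify the exact data
migration command for each release):

    import subprocess

    for release in ["newton", "ocata", "pike"]:  # hypothetical path
        manage = f"/opt/venvs/{release}/bin/nova-manage"
        # 1. move the schema to this release's milestone
        subprocess.run([manage, "db", "sync"], check=True)
        # 2. move/transform the data before starting the next milestone
        subprocess.run([manage, "db", "online_data_migrations"],
                       check=True)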

I do think that the idea of being able to generate a small utility
container (using the broad sense of the word) from each release, and
using those to step through N, N+1, N+2 to arrive at N+3 makes the most
sense.
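
As a sketch of that (image names are made up; assumes each container
ships that release's code and can reach the database):

    import subprocess

    for release in ["newton", "ocata", "pike"]:  # hypothetical path
        subprocess.run(
            ["docker", "run", "--rm", "--network", "host",
             f"example/nova-upgrade:{release}",  # made-up image name
             "nova-manage", "db", "sync"],
            check=True,
        )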

+1



Nova has offline tooling to push our data migrations (even though the
command is intended to be runnable online). The concern I would have
would be over how to push Keystone's migrations mechanically, since I
believe they moved forward with their proposal to do data migrations in
stored procedures with triggers. Presumably there is a need for
something similar to nova's online-data-migrations command which will
trip all the triggers and provide a green light for moving on?
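
For nova, driving that to completion mechanically would look roughly
like this (a sketch; the exit-code convention of 1 meaning "more rows
remain" matches recent releases, but verify for the one you're on):

    import subprocess

    while True:
        result = subprocess.run(
            ["nova-manage", "db", "online_data_migrations",
             "--max-count", "500"],  # batch size is arbitrary here
        )
        if result.returncode == 0:   # all data migrations complete
            break
        if result.returncode != 1:   # 1 == ran a batch, more remain
            raise RuntimeError("online_data_migrations failed")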

I haven't looked at what Keystone is doing, but to the degree they are using triggers, those triggers would only affect new data operations as they run against a schema that is straddling two versions (e.g. the old column/table still exists and data should be synced to the new column/table). If they are actually running a stored procedure to migrate existing data (which would surprise me...), then I'd assume it is invoked just like any other "ALTER TABLE" instruction in their migrations. If those operations themselves rely on the triggers, that's fine.

But a keystone person to chime in would be much better than me just making stuff up.
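
Just to illustrate the kind of trigger-based sync I mean (this is NOT
keystone's actual DDL, purely a made-up shape of the technique):

    import sqlalchemy as sa

    # Keep a new column populated while old code still writes the old
    # one; table and column names are invented for illustration.
    engine = sa.create_engine("mysql+pymysql://user:pw@localhost/keystone")
    with engine.begin() as conn:
        conn.execute(sa.text("""
            CREATE TRIGGER sync_new_column
            BEFORE INSERT ON some_table
            FOR EACH ROW
            SET NEW.new_column = NEW.old_column
        """))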

In the end, projects support N->N+1 today, so if you're just stepping
through actual 1-version gaps, you should be able to do as many of those
as you want and still be running "supported" transitions. There's a lot
of value in that, IMHO.

--Dan
