Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-27 Thread Joshua Hesketh


On 4/23/15 11:41 PM, Dan Smith wrote:

That change works on the dataset. However, I was referring to the
db/object API (which I have no real knowledge of): it should be able
to get_by_uuid unmigrated instances, and in my case I got the traceback
given in that paste. It's possible I'm just using the API incorrectly.

You should be able to pull migrated or unmigrated instances, yes. The
tests for the flavor stuff do this (successfully) at length. The
traceback you have seems like a half-migrated instance, where there is a
non-null flavor column on the instance_extra record, which shouldn't be
possible. The line numbers also don't match master, so I'm not sure
exactly what you're running.

If you can track down where in your dataset that is hitting and maybe
figure out what is going on, we can surely address it, but the current
tests cover all the cases I can think of at the moment.

Any chance you're tripping over something that was a casualty of
previous failures here?
I suspect I did manage to create an instance in between the two states 
when trying to set up my tests. Your fix works fine now, thanks.





Oh yes, we want that restriction, but a way around it for instances that
may be stuck, or just for testing purposes, could be useful.

Yeah, as long as we're clear about the potential for problems if they
run --force on a moving target...


Yep. If this could get reviewed+merged that'd be great. I need this for 
turbo-hipster to sanely step through the migrate part.


Cheers,
Josh



--Dan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-23 Thread Dan Smith
 That change works on the dataset. However, I was referring to the
 db/object API (which I have no real knowledge of): it should be able
 to get_by_uuid unmigrated instances, and in my case I got the traceback
 given in that paste. It's possible I'm just using the API incorrectly.

You should be able to pull migrated or unmigrated instances, yes. The
tests for the flavor stuff do this (successfully) at length. The
traceback you have seems like a half-migrated instance, where there is a
non-null flavor column on the instance_extra record, which shouldn't be
possible. The line numbers also don't match master, so I'm not sure
exactly what you're running.

If you can track down where in your dataset that is hitting and maybe
figure out what is going on, we can surely address it, but the current
tests cover all the cases I can think of at the moment.

Any chance you're tripping over something that was a casualty of
previous failures here?

 Oh yes, we want that restriction, but a way around it for instances that
 may be stuck, or just for testing purposes, could be useful.

Yeah, as long as we're clear about the potential for problems if they
run --force on a moving target...

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-22 Thread Joshua Hesketh

On 4/22/15 1:20 AM, Dan Smith wrote:

The migrate_flavor_data command didn't actually work on the CLI (unless
I'm missing something or did something odd). See
https://review.openstack.org/#/c/175890/ where I fix the requirement of
max_number. This likely means that operators have not bothered to run or
test migrate_flavor_data yet, which could be a concern.

Yep, I saw and +2d, thanks. I think it's pretty early in kilo testing
and since you don't *have* to do this for kilo, it's not really been an
issue yet.


Sure, but for people doing continuous deployment, they clearly haven't 
run migrate_flavor_data (or if they have, they haven't filed any 
bugs about it not working[0]).


I also found what I believe to be a bug with the flavor migration code. 
I started working on a fix, but my limited knowledge of nova's objects has 
hindered me. Any thoughts on the legitimacy of the bug would be helpful 
though: https://bugs.launchpad.net/nova/+bug/1447132 . Basically, for 
some of the datasets that turbo-hipster uses there are no entries in the 
new instance_extra table, stopping any flavor migration from actually 
running. Then in your change (174480) you check the metadata table 
instead of the extras table, causing the migration to fail even though 
migrate_flavor_data thinks there is nothing to do.
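
To make the mismatch concrete, here is a toy sqlite3 sketch of what I 
*think* is going on (the real nova schema and queries are richer; the 
simplified tables and the exact checks below are assumptions on my part, 
apart from instance_extra.flavor and the instance_type_* metadata keys):

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE instance_extra (instance_uuid TEXT, flavor TEXT);
    CREATE TABLE instance_system_metadata (
        instance_uuid TEXT, key TEXT, value TEXT, deleted INTEGER DEFAULT 0);
""")

# An old instance: flavor data still in system_metadata and, crucially,
# no instance_extra row at all -- the state of our old datasets.
cur.execute("INSERT INTO instance_system_metadata VALUES "
            "('inst-1', 'instance_type_id', '42', 0)")

# Roughly what migrate_flavor_data does: walk instance_extra looking for
# rows whose flavor column is still NULL. No rows, so nothing to do.
todo = cur.execute(
    "SELECT COUNT(*) FROM instance_extra WHERE flavor IS NULL").fetchone()[0]
print("migrate_flavor_data sees %d instances to migrate" % todo)

# Roughly what the blocking check in 174480 does: look for instance_type_*
# keys left in system_metadata. It finds one, so the migration refuses.
left = cur.execute(
    "SELECT COUNT(*) FROM instance_system_metadata "
    "WHERE key LIKE 'instance_type_%' AND deleted = 0").fetchone()[0]
print("the blocking check sees %d unmigrated instances" % left)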


I think it's worth noting that your change (174480) will require 
operators (particularly continuous deployers) to run migrate_flavor_data, 
and given the difficulties I've found I'm not sure it's ready to be run. 
If we encounter bugs against real datasets with migrate_flavor_data then 
deployers will be unable to upgrade until migrate_flavor_data is fixed. 
This may make things awkward if a deployer updates their codebase and 
can't run the migrations. Clearly they'll have to roll back the changes. 
This is the scenario turbo-hipster is meant to catch - migrations that 
don't work on real datasets.


If migrate_flavor_data is broken, a backport into Kilo will be needed so 
that if Liberty requires all the flavor migrations to be finished they 
can indeed be run before upgrading to Liberty. This may give reason not 
to block on having the flavors migrated until the M-release, but I 
realise that has other undesirable consequences (i.e. high code maintenance).


Cheers,
Josh

[0] I even found another one: https://review.openstack.org/#/c/176172/




That said, I'm surprised migrate_flavor_data isn't just done as a
migration. I'm guessing there is a reason it's a separate tool and that
it has been discussed before, but if we are going to have a migration
enforcing the data to be migrated, then wouldn't it be sensible enough
to just do it at that point?

The point of this is that it's *not* done as a migration. Doing this
transition as a DB migration would require hours of downtime for large
deployments while rolling over this migration. Instead, the code in kilo
can handle the data being in either place, and converts it on update.
The nova-manage command allows background conversion (hence the
max-number limit for throttling) to take place while the system is running.
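
To sketch the pattern (this is only a toy illustration of the approach, not 
nova's actual code; the row layout and helper names here are made up):

import json

def load_flavor(row):
    # Read path: prefer the new location, fall back to the old one.
    if row.get('extra_flavor') is not None:
        return json.loads(row['extra_flavor'])
    return {'id': row['system_metadata']['instance_type_id']}

def save(row):
    # Write path: any update rewrites the flavor into the new location,
    # so instances migrate themselves as they are touched.
    row['extra_flavor'] = json.dumps(load_flavor(row))

def migrate_flavor_data(rows, max_number):
    # Background pass, throttled to max_number rows per invocation so it
    # can be run repeatedly against a live system.
    done = 0
    for row in rows:
        if row.get('extra_flavor') is None:
            save(row)
            done += 1
            if done >= max_number:
                break
    return done

rows = [{'system_metadata': {'instance_type_id': 42}, 'extra_flavor': None}
        for _ in range(5)]
print(migrate_flavor_data(rows, max_number=2))  # migrates 2, leaves 3 for later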

Thanks for fixing up nova-manage and for making T-H aware!

--Dan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-22 Thread Dan Smith
 Sure, but for people doing continuous deployment, they clearly haven't
 run migrate_flavor_data (or if they have, they haven't filed any
 bugs about it not working[0]).

Hence the usefulness of T-H here, right? The point of the migration
check is to make sure that people _do_ run it before they leave kilo.
Right now, they have nothing other than the big note in the release
notes about doing it. Evidence seems to show that they're not seeing it,
which is exactly why we need the check :)

 I also found what I believe to be a bug with the flavor migration code.
 I started working on a fix, but my limited knowledge of nova's objects has
 hindered me. Any thoughts on the legitimacy of the bug would be helpful
 though: https://bugs.launchpad.net/nova/+bug/1447132 . Basically, for
 some of the datasets that turbo-hipster uses there are no entries in the
 new instance_extra table, stopping any flavor migration from actually
 running. Then in your change (174480) you check the metadata table
 instead of the extras table, causing the migration to fail even though
 migrate_flavor_data thinks there is nothing to do.

I don't think this has anything to do with the objects, it's probably
just a result of my lack of sqlalchemy-fu. Sounds like you weren't close
to a fix, so I'll try to cook something up.

So, a question here: These data sets were captured at some point in
time, right? Does that mean that they were from say Icehouse era and
have had nothing done to them since? Meaning, are there data sets that
actually had juno or kilo running on them? This instance_extra thing
would be the case for any instance that hasn't been touched in a long
time, so it's legit. However, as we move to more online migration of
data, I do wonder if taking an ancient dataset, doing schema migrations
forward to current and then expecting it to work is really reflective of
reality.

Can you shed some light on what is really going on?

 I think it's worth noting that your change (174480) will require
 operators (particularly continuous deployers) to run migrate_flavor_data,
 and given the difficulties I've found I'm not sure it's ready to be run.

Right, that's the point.

 If we encounter bugs against real datasets with migrate_flavor_data then
 deployers will be unable to upgrade until migrate_flavor_data is fixed.
 This may make things awkward if a deployer updates their codebase and
 can't run the migrations. Clearly they'll have to roll back the changes.
 This is the scenario turbo-hipster is meant to catch - migrations that
 don't work on real datasets.

Right, that's why we're holding until we make sure that it works.

 If migrate_flavor_data is broken, a backport into Kilo will be needed so
 that if Liberty requires all the flavor migrations to be finished they
 can indeed be run before upgrading to Liberty. This may give reason not
 to block on having the flavors migrated until the M-release, but I
 realise that has other undesirable consequences (i.e. high code maintenance).

Backports to fix this are fine IMHO, and just like any other bug found
in actual running of things. It's too bad that none of our continuous
deployment people seem to have found this yet, but that's a not uncommon
occurrence. Obviously if we hit something that makes it too painful to
get right in kilo, then we can punt for another cycle.

Thanks!

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-22 Thread Dan Smith
 That is correct -- these are icehouse datasets which have been
 upgraded, but never had a juno run against them. It would be hard for
 turbo hipster to do anything else, as it doesn't actually run a cloud.
 We can explore ideas around how to run live upgrade code, but it's
 probably a project to pursue separately.

Sure, no complaints. I just came to the realization when thinking about
this that the datasets probably need to be refreshed over time to keep
us testing real things, which I think was the point of T-H in the first
place.

 One quick option here is I can reach out and ask our dataset donors if
 they have more recent dumps they'd be willing to share.

Yeah, that'd be sweet.

Thanks!

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-22 Thread Michael Still
On Thu, Apr 23, 2015 at 1:31 AM, Dan Smith d...@danplanet.com wrote:
 Sure, but for people doing continuous deployment, they clearly haven't
 run migrate_flavor_data (or if they have, they haven't filed any
 bugs about it not working[0]).

 Hence the usefulness of T-H here, right? The point of the migration
 check is to make sure that people _do_ run it before they leave kilo.
 Right now, they have nothing other than the big note in the release
 notes about doing it. Evidence seems to show that they're not seeing it,
 which is exactly why we need the check :)

 I also found what I believe to be a bug with the flavor migration code.
 I started working on a fix, but my limited knowledge of nova's objects has
 hindered me. Any thoughts on the legitimacy of the bug would be helpful
 though: https://bugs.launchpad.net/nova/+bug/1447132 . Basically, for
 some of the datasets that turbo-hipster uses there are no entries in the
 new instance_extra table, stopping any flavor migration from actually
 running. Then in your change (174480) you check the metadata table
 instead of the extras table, causing the migration to fail even though
 migrate_flavor_data thinks there is nothing to do.

 I don't think this has anything to do with the objects, it's probably
 just a result of my lack of sqlalchemy-fu. Sounds like you weren't close
 to a fix, so I'll try to cook something up.

 So, a question here: These data sets were captured at some point in
 time, right? Does that mean that they were from say Icehouse era and
 have had nothing done to them since? Meaning, are there data sets that
 actually had juno or kilo running on them? This instance_extra thing
 would be the case for any instance that hasn't been touched in a long
 time, so it's legit. However, as we move to more online migration of
 data, I do wonder if taking an ancient dataset, doing schema migrations
 forward to current and then expecting it to work is really reflective of
 reality.

That is correct -- these are icehouse datasets which have been
upgraded, but never had a juno run against them. It would be hard for
turbo hipster to do anything else, as it doesn't actually run a cloud.
We can explore ideas around how to run live upgrade code, but it's
probably a project to pursue separately.

One quick option here is I can reach out and ask our dataset donors if
they have more recent dumps they'd be willing to share.

 Can you shed some light on what is really going on?

 I think it's worth noting that your change (174480) will require
 operators (particularly continuous deployers) to run migrate_flavor_data,
 and given the difficulties I've found I'm not sure it's ready to be run.

 Right, that's the point.

 If we encounter bugs against real datasets with migrate_flavor_data then
 deployers will be unable to upgrade until migrate_flavor_data is fixed.
 This may make things awkward if a deployer updates their codebase and
 can't run the migrations. Clearly they'll have to roll back the changes.
 This is the scenario turbo-hipster is meant to catch - migrations that
 don't work on real datasets.

 Right, that's why we're holding until we make sure that it works.

 If migrate_flavor_data is broken, a backport into Kilo will be needed so
 that if Liberty requires all the flavor migrations to be finished they
 can indeed be run before upgrading to Liberty. This may give reason not
 to block on having the flavors migrated until the M-release, but I
 realise that has other undesirable consequences (i.e. high code maintenance).

 Backports to fix this are fine IMHO, and just like any other bug found
 in actual running of things. It's too bad that none of our continuous
 deployment people seem to have found this yet, but that's a not uncommon
 occurrence. Obviously if we hit something that makes it too painful to
 get right in kilo, then we can punt for another cycle.

 Thanks!

Michael


-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-22 Thread Joshua Hesketh


On 4/23/15 1:31 AM, Dan Smith wrote:

Sure, but for people doing continuous deployment, they clearly haven't
run migrate_flavor_data (or if they have, they haven't filed any
bugs about it not working[0]).

Hence the usefulness of T-H here, right? The point of the migration
check is to make sure that people _do_ run it before they leave kilo.
Right now, they have nothing other than the big note in the release
notes about doing it. Evidence seems to show that they're not seeing it,
which is exactly why we need the check :)


I also found what I believe to be a bug with the flavor migration code.
I started working on a fix, but my limited knowledge of nova's objects has
hindered me. Any thoughts on the legitimacy of the bug would be helpful
though: https://bugs.launchpad.net/nova/+bug/1447132 . Basically, for
some of the datasets that turbo-hipster uses there are no entries in the
new instance_extra table, stopping any flavor migration from actually
running. Then in your change (174480) you check the metadata table
instead of the extras table, causing the migration to fail even though
migrate_flavor_data thinks there is nothing to do.

I don't think this has anything to do with the objects, it's probably
just a result of my lack of sqlalchemy-fu. Sounds like you weren't close
to a fix, so I'll try to cook something up.
Yeah, it was my sqlalchemy-fu letting me down too. However, I mentioned 
the objects because I was down a rabbit hole trying to figure out all of 
the code surrounding loading flavours from either system-metadata or extras.


If I selected all the instance_type_ids from the system-metadata table 
and used those uuids to load the object with something like:

instance = objects.Instance.get_by_uuid(
    context, instance_uuid,
    expected_attrs=['system_metadata', 'flavor'])

The tests would fail at that point when trying to read in the flavor as 
json. http://paste.openstack.org/show/205158/


Basically without digging further it seems like I should be able to load 
an instance by uuid regardless of the state of my flavor(s). Since this 
fails it seems like there is something wrong with the flavor handling on 
the objects.




So, a question here: These data sets were captured at some point in
time, right? Does that mean that they were from say Icehouse era and
have had nothing done to them since? Meaning, are there data sets that
actually had juno or kilo running on them? This instance_extra thing
would be the case for any instance that hasn't been touched in a long
time, so it's legit. However, as we move to more online migration of
data, I do wonder if taking an ancient dataset, doing schema migrations
forward to current and then expecting it to work is really reflective of
reality.

Can you shed some light on what is really going on?


As mikal has followed up, that's correct. We have been intending to refresh 
our datasets and will try to get some more recent ones, but testing the 
old ones has also proven useful.


Another interesting thing is that migrate_flavor_data avoids migrating 
instances that are in the middle of an operation. The snapshot of 
databases we have includes instances in this state. Since turbo-hipster 
is just testing against a snapshot in time, there is no way for those 
instances to leave their working state, and hence migrate_flavor_data can 
never finish (every run will leave instances undone). This therefore 
blocks the migrations from ever finishing.


I don't think this is a real-world problem though, so once we get this 
migration closer to merging we might have to force the instances to be 
migrated in our datasets. In fact, that could be a useful feature, so I 
wrote it here: https://review.openstack.org/#/c/176574/
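
Roughly the idea, as an illustration only (this is not the real nova-manage 
code nor the actual change in 176574; the helper and field names are made up):

def instances_to_migrate(instances, force=False):
    # Instances with a task_state set are mid-operation and are normally
    # skipped; force=True migrates them anyway, which only makes sense for
    # stuck instances or for static test datasets like ours.
    for inst in instances:
        if inst.get('task_state') is None or force:
            yield inst

instances = [
    {'uuid': 'a', 'task_state': None},
    {'uuid': 'b', 'task_state': 'resizing'},  # frozen mid-operation in the snapshot
]
print([i['uuid'] for i in instances_to_migrate(instances)])              # ['a']
print([i['uuid'] for i in instances_to_migrate(instances, force=True)])  # ['a', 'b']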


Cheers,
Josh




I think it's worth noting that your change (174480) will require
operators (particularly continuous deployers) to run migrate_flavor_data,
and given the difficulties I've found I'm not sure it's ready to be run.

Right, that's the point.


If we encounter bugs against real datasets with migrate_flavor_data then
deployers will be unable to upgrade until migrate_flavor_data is fixed.
This may make things awkward if a deployer updates their codebase and
can't run the migrations. Clearly they'll have to roll back the changes.
This is the scenario turbo-hipster is meant to catch - migrations that
don't work on real datasets.

Right, that's why we're holding until we make sure that it works.


If migrate_flavor_data is broken, a backport into Kilo will be needed so
that if Liberty requires all the flavor migrations to be finished they
can indeed be run before upgrading to Liberty. This may give reason not
to block on having the flavors migrated until the M-release, but I
realise that has other undesirable consequences (i.e. high code maintenance).

Backports to fix this are fine IMHO, and just like any other bug found
in actual running of things. It's too bad that none of our continuous
deployment people seem to have found 

Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-22 Thread Joshua Hesketh

On 4/23/15 1:16 PM, Dan Smith wrote:

If I selected all the instance_type_ids from the system-metadata table
and used those uuids to load the object with something like:
 instance = objects.Instance.get_by_uuid(
     context, instance_uuid,
     expected_attrs=['system_metadata', 'flavor'])

The tests would fail at that point when trying to read in the flavor as
json. http://paste.openstack.org/show/205158/

Basically without digging further it seems like I should be able to load
an instance by uuid regardless of the state of my flavor(s). Since this
fails it seems like there is something wrong with the flavor handling on
the objects.

You should. The above is a reasonable result to get without the fix to
ensure that we create instance_extra records for instances missing it.

Do you still see the above with

   https://review.openstack.org/#/c/176387/

applied?


That change works on the dataset. However, I was referring to the 
db/object API (which I have no real knowledge of): it should be able 
to get_by_uuid unmigrated instances, and in my case I got the traceback 
given in that paste. It's possible I'm just using the API incorrectly.





Another interesting thing is that migrate_flavor_data avoids migrating
instances that are in the middle of an operation. The snapshot of
databases we have includes instances in this state. Since turbo-hipster
is just testing against a snapshot in time, there is no way for those
instances to leave their working state, and hence migrate_flavor_data can
never finish (every run will leave instances undone). This therefore
blocks the migrations from ever finishing.

Ah, yeah, that's interesting, but I think it's a restriction we have to
make for sanity.


Oh yes, we want that restriction, but a way around it for instances that 
may be stuck, or just for testing purposes, could be useful.


Cheers,
Josh



--Dan





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-21 Thread Michael Still
Hey, turbo hipster already knows how to upgrade major releases, so
adding this should be possible.

That said, I've been travelling all day so haven't had a chance to
look at this. If Josh doesn't beat me to it, I will take a look when I
get to my hotel tonight.

(We should also note that we can just merge a thing without turbo
hipster passing if we understand the reason for the test failure.
Sure, that breaks the rest of the turbo hipster runs, but we're not
100% blocked here.)

Michael

On Tue, Apr 21, 2015 at 12:09 AM, Dan Smith d...@danplanet.com wrote:
 This proposed patch requiring a data migration in Nova master is making
 Turbo Hipster face plant - https://review.openstack.org/#/c/174480/

 This is because we will require Kilo deployers to fully migrate their
 flavor data from system_metadata to instance_extra before they upgrade
 to the next release. They (and turbo hipster) will need to do this first:

 https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L963

 I'm not sure how you want to handle this, either by converting your data
 sets, or only converting the ones that master runs.

 It would be nice to merge this patch as soon as grenade is ready to do
 so, as it's blocking all other db migrations in master.

 Thanks!

 --Dan




-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-21 Thread Joshua Hesketh

Hey,

So I can add support to turbo-hipster to apply migrate_flavor_data when 
it hits your migration (in fact I started to do this). This means the 
tests will pass on your change and continue to do so once it merges. I'll 
update the datasets after the next release, but it'll probably be to a 
version before migrate_flavor_data is required, so we'll actually still 
be able to test that functionality against real datasets.
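
To make that concrete, the hook in the turbo-hipster walker could look 
something like this (a sketch only: the migration number is hypothetical, 
and I'm assuming the nova-manage flag ends up as --max-number once 175890 
lands):

import subprocess

FLAVOR_BLOCKING_MIGRATION = 291  # hypothetical number of the blocking migration

def before_applying(next_migration):
    # When we reach the migration that requires flavor data to be moved,
    # run the online migration tool first so the dataset is fully migrated.
    if next_migration == FLAVOR_BLOCKING_MIGRATION:
        subprocess.check_call(
            ['nova-manage', 'db', 'migrate_flavor_data', '--max-number', '500'])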


The migrate_flavor_data command didn't actually work on the CLI (unless 
I'm missing something or did something odd). See 
https://review.openstack.org/#/c/175890/ where I fix the requirement of 
max_number. This likely means that operators have not bothered to run or 
test migrate_flavor_data yet, which could be a concern.


That said, I'm surprised migrate_flavor_data isn't just done as a 
migration. I'm guessing there is a reason it's a separate tool and that 
it has been discussed before, but if we are going to have a migration 
enforcing the data to be migrated, then wouldn't it be sensible enough 
to just do it at that point?


If I were to have a guess at that reason, I would say it is that you can 
run this command live without taking the nova API offline, whereas migrations 
are generally done as downtime and this could be a long migration.


Cheers,
Josh

Rackspace Australia

On 4/21/15 4:52 PM, Michael Still wrote:

Hey, turbo hipster already knows how to upgrade major releases, so
adding this should be possible.

That said, I've been travelling all day so haven't had a chance to
look at this. If Josh doesn't beat me to it, I will take a look when I
get to my hotel tonight.

(We should also note that we can just merge a thing without turbo
hipster passing if we understand the reason for the test failure.
Sure, that breaks the rest of the turbo hipster runs, but we're not
100% blocked here.)

Michael

On Tue, Apr 21, 2015 at 12:09 AM, Dan Smith d...@danplanet.com wrote:

This proposed patch requiring a data migration in Nova master is making
Turbo Hipster face plant - https://review.openstack.org/#/c/174480/

This is because we will require Kilo deployers to fully migrate their
flavor data from system_metadata to instance_extra before they upgrade
to the next release. They (and turbo hipster) will need to do this first:

https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L963

I'm not sure how you want to handle this, either by converting your data
sets, or only converting the ones that master runs.

It would be nice to merge this patch as soon as grenade is ready to do
so, as it's blocking all other db migrations in master.

Thanks!

--Dan







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-21 Thread Dan Smith
 (We should also note that we can just merge a thing without turbo
 hipster passing if we understand the reason for the test failure.
 Sure, that breaks the rest of the turbo hipster runs, but we're not
 100% blocked here.)

Indeed, the fact that it's failing actually proves the patch works,
which is very helpful. But, if I break it, everyone else loses T-H
goodness until it's fixed, so I just wanted us to be as on top of it as
possible.

Thanks!

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-21 Thread Dan Smith
 The migrate_flavor_data command didn't actually work on the CLI (unless
 I'm missing something or did something odd). See
 https://review.openstack.org/#/c/175890/ where I fix the requirement of
 max_number. This likely means that operators have not bothered to run or
 test migrate_flavor_data yet, which could be a concern.

Yep, I saw and +2d, thanks. I think it's pretty early in kilo testing
and since you don't *have* to do this for kilo, it's not really been an
issue yet.

 That said, I'm surprised migrate_flavor_data isn't just done as a
 migration. I'm guessing there is a reason it's a separate tool and that
 it has been discussed before, but if we are going to have a migration
 enforcing the data to be migrated, then wouldn't it be sensible enough
 to just do it at that point?

The point of this is that it's *not* done as a migration. Doing this
transition as a DB migration would require hours of downtime for large
deployments while rolling over this migration. Instead, the code in kilo
can handle the data being in either place, and converts it on update.
The nova-manage command allows background conversion (hence the
max-number limit for throttling) to take place while the system is running.

Thanks for fixing up nova-manage and for making T-H aware!

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

2015-04-20 Thread Dan Smith
This proposed patch requiring a data migration in Nova master is making
Turbo Hipster face plant - https://review.openstack.org/#/c/174480/

This is because we will require Kilo deployers to fully migrate their
flavor data from system_metadata to instance_extra before they upgrade
to the next release. They (and turbo hipster) will need to do this first:

https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L963

I'm not sure how you want to handle this, either by converting your data
sets, or only converting the ones that master runs.

It would be nice to merge this patch as soon as grenade is ready to do
so, as it's blocking all other db migrations in master.

Thanks!

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev