Gyorgy Szombathelyi <gyorgy.szombathe...@doclerholding.com> wrote:



-----Original Message-----
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]

Gyorgy Szombathelyi <gyorgy.szombathe...@doclerholding.com> wrote:

Hi!

Maybe I'm overlooking something, but I'm wondering if table renames
break rolling upgrades.

Yes, they would. Do you suggest that we allowed such a migration anywhere
in Newton?
Well, if you don't count the contract phase, then of course not.

E.g. if I'm upgrading neutron server instances one-by-one, and doing a
db expand during the process, the old neutron-server instances will
continue to work. However, without the new renamed tables, the new
instances won't work.  Just would be good to know:

With the current approach to neutron-server upgrades that neutron team
supports, we don’t allow new neutron-servers running while contract phase
is not applied yet. The process is documented at:

http://docs.openstack.org/developer/neutron/devref/upgrade.html#server-upgrade
Ah, OK, then I misunderstood it.

- In this form, why the expand phase is useful? The expanded database
won't work with a new server instance, and it is not required for the
older ones.

It contains operations that are safe to execute while old neutron-server
instances are running. After expansion happens, you must shutdown all
neutron-servers, then apply contract phase, then start your new neutron-
servers. This is the only scenario currently supported.
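Concretely, the supported sequence described above looks something like the following. The neutron-db-manage phases are the real CLI; the service start/stop commands are illustrative placeholders for however you manage neutron-server on your nodes:

```shell
# 1. While the old neutron-servers are still running, apply the
#    additive (expand) migrations only.
neutron-db-manage upgrade --expand

# 2. Stop every neutron-server instance (API downtime starts here).
systemctl stop neutron-server        # on each API node

# 3. With no servers running, apply the destructive (contract) migrations.
neutron-db-manage upgrade --contract

# 4. Start the upgraded neutron-servers against the contracted schema.
systemctl start neutron-server       # on each API node
```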
OK, understood. However, I don't think there's a big difference in execution time between a contract and a plain `upgrade heads`, so for not-too-big installations the two-phase upgrade can probably be skipped safely. There won't be much additional
downtime.

The split is foundational work for the next step: forbidding any data migrations in the contract phase, and postponing schema contraction until it is safe to execute (+2 releases from the time of the corresponding expansion schema change). I agree that the way it's implemented right now does not save much time, and does not give you a no-downtime upgrade. We are looking in this direction though, so stay tuned.


- Actually what would be the workflow for a downtime-less upgrade?
From Liberty to Mitaka it worked fairly good, even without using
expand-contract.

It depends on what you mean by downtime-less. If it's data plane
wise, current agents should be able to work and upgrade without disrupting
any data flows. If you mean that the neutron API should be available at all
times, then that scenario is just not supported at this moment.

While your mileage may vary, I really doubt you could safely upgrade L->M
without API downtime. You are either lucky, or you just haven't noticed
some failures. The way we currently handle database access does not
accommodate a no-downtime upgrade process for API endpoints. We are
looking into implementing some form of it in Ocata, but only time will tell
whether we succeed in delivering it. Until then, you can't avoid API downtime.

Since the old instances could more or less work with the new SQL schema (at least I didn't notice any immediate failures, and looking at the Mitaka contract code, it
is not very intrusive), this process worked fairly well:
- remove one neutron-server from the LB
- upgrade it
- upgrade the schema
- put it back to the LB
- remove the other neutron-servers from the LB
- upgrade them
- put them back in the LB

Then you were lucky. :)


From Mitaka to Newton, a new instance immediately fails because of the renamed ml2_dvr_port_bindings and ml2_network_segments tables, and the old instances
also fail if they are started after the renames (contract).
But if it is designed this way, I should accept it then.
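The failure mode above can be reproduced in miniature with stdlib sqlite3 (a standalone sketch; the new table name used here is only illustrative, and sqlite stands in for the real database):

```python
import sqlite3

# Stand-in for the pre-rename schema that an "old" neutron-server expects.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ml2_network_segments (id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO ml2_network_segments VALUES ('seg-1')")

# The contract migration renames the table out from under the old server.
conn.execute("ALTER TABLE ml2_network_segments RENAME TO networksegments")

# An old server still querying the original name now fails immediately...
try:
    conn.execute("SELECT id FROM ml2_network_segments")
except sqlite3.OperationalError as exc:
    print("old query fails:", exc)

# ...while a server built for the new schema works fine.
print(conn.execute("SELECT id FROM networksegments").fetchall())
```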

Yeah, it was never meant to work. If it worked for your isolated case, it’s pure luck. You should have bought a lottery ticket!

Ihar

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev