Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-29 Thread Ole Holm Nielsen

On 29-09-2023 17:33, Ryan Novosielski wrote:
I’ll just say, we haven’t done an online/jobs running upgrade recently 
(in part because we know our database upgrade will take a long time, and 
we have some processes that rely on -M), but we have done it and it does 
work fine. So the paranoia isn’t necessary unless you know that, like 
us, the DB upgrade time is not tenable (Ole’s wiki has some great 
suggestions for how to test that, but they aren’t especially Slurm 
specific, it’s just a dry-run).


Slurm upgrades are clearly documented by SchedMD, and there's no reason 
to worry if you follow the official procedures.  At least, it has always 
worked for us :-)


Just my 2 cents: The detailed list of upgrade steps/commands (first dbd, 
then ctld, then slurmds, finally login nodes) is documented in my Wiki 
page 
https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_installation/#upgrading-slurm


The Slurm dbd upgrade instructions in 
https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_installation/#make-a-dry-run-database-upgrade 
are totally Slurm specific, since that's the only database upgrade I've 
ever made :-)  I highly recommend doing the database dry-run upgrade on 
a test node before doing the real dbd upgrade!
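
For reference, here is a minimal sketch of such a dry run on a scratch test 
node (the database name and dump file are assumptions; the full recipe is on 
the Wiki page above):

    # Load a dump of the production accounting database into a test
    # MariaDB instance, then let the NEW slurmdbd version convert it in
    # the foreground so the conversion can be timed.
    mysql -e 'CREATE DATABASE slurm_acct_db'
    mysql slurm_acct_db < slurm_acct_db.sql
    time slurmdbd -D -vvv   # stop it (Ctrl-C) once the conversion finishes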


/Ole



Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-29 Thread Groner, Rob
My team lead brought that up also: we could go ahead and change the symlink 
that EVERYTHING uses, and nothing would happen...until the service is 
restarted.  It's good that it's not a timing-related change.  Of course, we 
do run the risk that a node will reboot on its own and thus pick up the 
change before we're expecting it to.  For patch-level changes, that really 
wouldn't be a problem, but if we consider doing this for a major version 
change, then it probably matters more.
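
For example, one way we could verify which version each node actually picked 
up after a symlink flip (assuming sinfo's %v format field, which reports the 
running slurmd version):

    # Show each node next to the slurmd version it is running, so any
    # node that rebooted early (or missed the restart) stands out.
    sinfo -h -N -o '%N %v' | sort -u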


From: slurm-users  on behalf of Ryan 
Novosielski 
Sent: Friday, September 29, 2023 11:33 AM
To: Slurm User Community List 
Subject: Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?


Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-29 Thread Ryan Novosielski
I’ll just say, we haven’t done an online/jobs running upgrade recently (in part 
because we know our database upgrade will take a long time, and we have some 
processes that rely on -M), but we have done it and it does work fine. So the 
paranoia isn’t necessary unless you know that, like us, the DB upgrade time is 
not tenable (Ole’s wiki has some great suggestions for how to test that, but 
they aren’t especially Slurm specific, it’s just a dry-run).

As far as the shared symlink thing goes, I think you’d be fine (depending on 
whether you have anything else stored in the shared software tree) changing 
the symlink and just not restarting the compute nodes’ slurmd until you’re 
ready. Though again, you can do this while jobs are running, so there’s not 
really a reason to wait, except in cases like ours where it’s just easier to 
reboot the node than to maintain one procedure for running nodes and another 
for rebooting, and to be sure that a rebooted compute node and a running 
upgraded node will operate exactly the same.
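
With that layout, once the symlink points at the new version, “upgrading” a 
running compute node is just a slurmd restart, which you can roll through at 
your own pace (a sketch; pdsh and the node list are stand-ins for whatever 
tooling you use):

    # Restart slurmd to pick up the new binaries; running jobs are not
    # affected by a slurmd restart.
    pdsh -w 'node[001-099]' 'systemctl restart slurmd'
    sinfo -h -N -o '%N %v' | sort -u   # confirm the new version took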


Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-29 Thread Paul Edmon
This is one of the reasons we stick with using RPMs rather than the 
symlink process. It's just cleaner and avoids the issue of having the 
install on shared storage that may get overwhelmed with traffic or 
suffer outages. Also, the package manager automatically removes 
previous versions and installs everything locally. I've never been a fan 
of the symlink method, as it runs counter to the entire point and design 
of Linux and package managers, which are supposed to do this heavy 
lifting for you.



Rant aside :). Generally for minor upgrades the process is less touchy. 
For our setup we follow this process, which works well for us but does 
create an outage for the duration of the upgrade (sketched as commands 
just after the list):



1. Set all partitions to down: this makes sure no new jobs are scheduled.

2. Suspend all jobs: this makes sure jobs aren't running while we upgrade.

3. Stop slurmctld and slurmdbd.

4. Upgrade slurmdbd, then restart it.

5. Upgrade slurmd and slurmctld across the cluster.

6. Restart slurmd and slurmctld simultaneously using choria.

7. Unsuspend all jobs.

8. Reopen all partitions.
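
A rough translation of those steps into commands (the partition name, package 
names, and clush standing in for choria are all assumptions; adapt to your 
site):

    scontrol update PartitionName=compute State=DOWN             # 1. no new jobs
    squeue -h -t RUNNING -o %A | xargs -r -n1 scontrol suspend   # 2.
    systemctl stop slurmctld slurmdbd                            # 3.
    dnf -y upgrade slurm-slurmdbd && systemctl start slurmdbd    # 4.
    clush -a 'dnf -y upgrade slurm slurm-slurmd'                 # 5.
    clush -a 'systemctl restart slurmd'                          # 6. slurmd...
    dnf -y upgrade slurm-slurmctld && systemctl restart slurmctld  # 6. ...and ctld
    squeue -h -t SUSPENDED -o %A | xargs -r -n1 scontrol resume  # 7.
    scontrol update PartitionName=compute State=UP               # 8.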


For major upgrades we always take a mysqldump and back up the spool for 
the slurmctld before upgrading, just in case something goes wrong. We've 
had this happen before, when the slurmdbd upgrade cut out early (note: 
always run the slurmdbd and slurmctld upgrades in -D mode and not via 
systemctl, as systemctl can time out and kill the upgrade midway for 
large upgrades).
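
Hedged examples of those precautions (the database name and paths are 
assumptions; check StorageLoc in slurmdbd.conf and StateSaveLocation in 
slurm.conf for your site):

    # Dump the accounting database and archive slurmctld's state save
    # directory before touching anything.
    mysqldump --single-transaction slurm_acct_db > slurm_acct_db.sql
    tar -C /var/spool -czf slurmctld-state.tar.gz slurmctld
    # Run the schema conversion in the foreground so systemd cannot
    # time it out and kill it partway; start it normally afterwards.
    slurmdbd -D -vvv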



That said, I've also skipped steps 1, 2, 7, and 8 before for minor 
upgrades and it works fine. The slurmd, slurmctld, and slurmdbd can all 
run on different versions so long as slurmdbd >= slurmctld >= slurmd (by 
version). So if you want to do a live upgrade, you can do it. However, 
out of paranoia, we generally stop everything. The entire process takes 
about an hour start to finish, with the longest part being the pausing 
of all the jobs.



-Paul Edmon-



Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-29 Thread Groner, Rob
I did already see the upgrade section of Jason's talk, but it wasn't much about 
the mechanics of the actual upgrade process; it seemed more big-picture.  It 
dealt a lot with running different parts of Slurm at different versions, which 
is something we don't have.

One little wrinkle here is that while, yes, we're using a symlink to point to 
whichever version of Slurm is the current one...it's all on a shared filesystem.  
So ALL nodes, slurmdbd, and slurmctld are using that same symlink.  There is no 
means to upgrade one component at a time.  That means that to upgrade, EVERYTHING 
has to come down before anything can come back up.  Jason's slides seemed to 
indicate that, if there were separate symlinks, then I could focus on just the 
slurmdbd first and upgrade it...then focus on slurmctld and upgrade it, and then 
finally the nodes (take down their slurmd, upgrade the link, bring up slurmd).  
So maybe that's what I'm missing.

Otherwise, I think what I'm saying is that I see references to a "rolling 
upgrade", but I don't see any guide to a rolling upgrade.  I just see the 14 
steps in https://slurm.schedmd.com/quickstart_admin.html#upgrade, and I guess 
I'd always thought of that as the full-octane, high-fat upgrade.  I've only 
ever done upgrades during one of our many scheduled downtimes, because the 
upgrades were always to a new major version, and because I'm a scared little 
chicken, so I figured there was maybe some smaller subset of steps if only 
upgrading a patchlevel change.  Smaller change, less risk, fewer precautionary 
steps...?  I'm seeing now that's not the case.

Thank you all for the suggestions!

Rob



From: slurm-users  on behalf of Ryan 
Novosielski 
Sent: Friday, September 29, 2023 2:48 AM
To: Slurm User Community List 
Subject: Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?



Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-29 Thread Ryan Novosielski
I started off writing there’s really no particular process for these: just do 
your changes and start the new software (be mindful of any PATH that might 
contain data that’s under your software tree, if you have that setup), and 
you might need to watch the timeouts. But I figured I’d have a look at the 
upgrade guide to be sure.

There’s really nothing onerous in there. I’d personally back up my database and 
state save directories just because I’d rather be safe than sorry, or in case I 
have to go backwards and want to be sure. You can run slurmctld for a good 
while with no database (note that -M on the command line will be broken during 
that time), just being mindful of the RAM on the slurmctld machine (don’t 
restart it before the DB is back up), and backing up our fairly large database 
doesn’t take all that long. Whether or not step 5 is required mostly depends on 
how long you think it will take you to do steps 6-11 (which could really take 
you seconds if your process is really as simple as stop, change symlink, 
start); step 12 you’re going to do no matter what, step 13 you don’t need if 
you skipped 5, and step 14 is up to you. So practically, that’s what you’re 
going to do anyway.

We just did an upgrade last week, and the only difference is that our compute 
nodes are stateless, so the compute node upgrades were a reboot (we could 
upgrade them running, but we did it during a maintenance period anyway, so 
why?).

If you want to do this with running jobs, I’d definitely back up the state save 
directory, but as long as you watch the timeouts, it’s pretty uneventful. You 
won’t have that long database upgrade period, since no database modifications 
will be required, so it’s pretty much like upgrading anything else.
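
Concretely, the timeouts in question are in slurm.conf (the values here are 
illustrative assumptions, not recommendations):

    # Raise these ahead of an online upgrade so nodes aren't marked
    # DOWN while their slurmd is stopped (SlurmdTimeout) and the backup
    # controller doesn't take over mid-upgrade (SlurmctldTimeout).
    SlurmdTimeout=3600
    SlurmctldTimeout=3600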

--
#BlackLivesMatter

|| \\UTGERS, |---*O*---
||_// the State  | Ryan Novosielski - novos...@rutgers.edu
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\of NJ  | Office of Advanced Research Computing - MSB A555B, Newark
 `'




Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-29 Thread Ole Holm Nielsen

On 9/28/23 17:58, Groner, Rob wrote:
Is there an expedited, simple, slimmed-down upgrade path to follow if 
we're looking at just a . level upgrade?


Upgrading minor releases usually works without any problems.  We use RPMs 
instead of the symlink method, so I can't speak for symlinks.  I 
recommend reading the latest SLUG'23 presentations, and Jason Booth's 
slides https://slurm.schedmd.com/SLUG23/Field-Notes-7.pdf page 26+.


For those using the RPM method, I have collected some notes in my Wiki 
page 
https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_installation/#upgrading-slurm


/Ole



Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-28 Thread David
A colleague of mine has it scripted out quite well, so I can't speak to
*all* of the details. However, we have a user that we submit upgrade jobs as,
and the job performs the steps for upgrading (yum, dnf, etc.). The jobs are
whole-node/exclusive so nothing else can run there, and then a few other
steps might be taken (node reboots, etc.). I think we might have some level
of reservation in there so nodes can drain (which would help expedite the
situation a bit, but it still would depend on your longest-running job).

This has worked well for . releases/patches and effectively behaves like a
rolling upgrade. Yours might even be easier/quicker since it's symlinks
(which is SchedMD's preferred method, iirc). Speaking of which, I believe
one of the SchedMD folks gave some pointers on that in the past, perhaps in
a presentation at SLUG. So you could peruse there, as well.
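
A hypothetical sketch of the submit-an-upgrade-job approach (the node name, 
sudo rules, and the package glob are all assumptions):

    # One exclusive, whole-node job per node: nothing else can run
    # there while the packages upgrade, then slurmd restarts in place.
    sbatch --exclusive -w node001 --wrap \
      'sudo dnf -y upgrade "slurm*" && sudo systemctl restart slurmd'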




-- 
David Rhey
---
Advanced Research Computing
University of Michigan


[slurm-users] Steps to upgrade slurm for a patchlevel change?

2023-09-28 Thread Groner, Rob

There are 14 steps to upgrading Slurm listed on their website, including 
shutting down and backing up the database.  So far we've only updated Slurm 
during a downtime, and it's been a major version change, so we've taken all 
the steps indicated.

We now want to upgrade from 23.02.4 to 23.02.5.

Our Slurm builds end up in version-named directories, and we tell production 
which one to use via a symlink.  Changing the symlink will automatically change 
it on our Slurm controller node and all slurmd nodes.
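
For concreteness, that layout looks roughly like this (paths are hypothetical):

    # Version-named build directories on shared storage, with a single
    # symlink selecting the production version for every node at once.
    ls -l /shared/slurm
    #   23.02.4/   23.02.5/   current -> 23.02.4
    ln -sfn 23.02.5 /shared/slurm/current   # daemons pick it up on restart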

Is there an expedited, simple, slimmed-down upgrade path to follow if we're 
looking at just a . level upgrade?

Rob