Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-18 Thread sebgoa

On Oct 18, 2013, at 3:47 PM, "kel...@backbonetechnology.com" wrote:

> Hi,
> 
> This post might help you out.
> 
> http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/
> 
> If you need further help, please let me know.
> 

Kelcey, did you read Milamber's instructions? They seem more streamlined than 
the process you went through.
I copied you so that you could go through them and tell us whether you tried 
his procedure.

-sebastien

> -Kelcey
> 
> Sent from my HTC
> 
> - Reply message -
> From: "sebgoa" 
> To: , "Kelcey Jamison Damage" 
> , "Travis Graham" 
> Subject: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll 
> back to 4.1.1 backup initially failed: could not start management service. 
> Finally got it to work. I hope this helps someone else avoid my same pain...
> Date: Fri, Oct 18, 2013 12:12 AM
> 
> On Oct 16, 2013, at 10:48 AM, Milamber  wrote:
> 
> > Hello,
> > 
> > > curl: (7) couldn't connect to host
> > Your "integration.api.port" isn't define in your global settings. See below.
> > 
> > 
> > Current instructions to upgrade from 4.1.1 to 4.2 are not complete on 
> > official docs (missing 2 important steps before upgrade).
> > 
> https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-4.0-to-4.1
>  (4.2 in reality)
> > 
> > Please note too, in 4.2.0, currently the HA with CLVM (iscsi) don't works 
> > for Guest VM (works for VR/SystemVM). This have been fixed for 4.2.1
> > See : 
> https://issues.apache.org/jira/browse/CLOUDSTACK-4627
> 
> > 
> https://issues.apache.org/jira/browse/CLOUDSTACK-4777
> 
> > You can cherry-pick the commit to fix on 4.2.0 tag.
> > 
> > 
> > Here my process for Centos 6.4, KVM, CS 4.1.1 to CS 4.2.0:
> > 
> > BEFORE UPGRADE
> > 
> > 1/ With the Web UI, add a new template with the name "systemvm-kvm-4.2" 
> > (exactly) from this link:
> > http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
> > Don't start the upgrade until the status changes to "Download complete".
> > 
> > 2/ With the Web UI, go to Global Settings and check whether 
> > "integration.api.port" is set to 8096. If not, set it, then restart the 
> > management service.
> > (You can revert this change after the upgrade for security reasons.)
> > 
> > For points 1/ and 2/, see the start of the 3.0.x-to-4.2 process in the docs:
> > https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
> 
> > 
> > 
> > START OF UPGRADE
> > 
> > 3/ Stop CS services (all mgnt, all nodes)
> > /etc/init.d/cloudstack-management stop
> > /etc/init.d/cloudstack-usage stop
> > /etc/init.d/cloudstack-agent stop
> > 
> > mysqldump -u root -p cloud > cloudstack-cloud-backup.sql
> > mysqldump -u root -p cloud_usage > cloudstack-cloud_usage-backup.sql
> > mysqldump -u root -p cloudbridge > cloudstack-cloudbridge-backup.sql
> > 
> > 4/ Modify /etc/yum.repos.d/cloudstack.repo to change to 4.2 repo
> > 
> > 
> > 5/ Upgrade (all mgnt, all nodes)
> > yum clean all && yum update   (full update CS + CentOS)
> > 
> > OR for only CS update
> > yum upgrade cloudstack-management
> > yum upgrade cloudstack-agent
> > 
> > See step 10.a-e in docs
> > 
> > 
> > 6/ Start agent (all nodes)
> > /etc/init.d/cloudstack-agent start
> > 
> > 7/ Start Mngt (first mgnt)
> > /etc/init.d/cloudstack-management start
> > 
> > Check the log:
> > tail -f /var/log/cloudstack/management/management-server.log | grep -v DEBUG
> > 
> > 8/ Restart (recreate) the SSVM, Console proxy and all VR
> > cloudstack-sysvmadm -d ipOfDatabase -u cloud -pyourDBpassword -a
> > (a long process; you can follow it on the console and watch the status of 
> > the system VMs/VRs in the Web UI)
> > 
> > 9/ Start Usage service
> > /etc/init.d/cloudstack-usage start
> > (and start the remaining management services if needed)
> > 
> > 
> > Milamber
> 
> Milamber, thanks for writing this down. I am copying Kelcey, who hit this 
> issue, and Travis, who has been working on the docs.
> 
> > 
> > 
> > On 15/10/2013 at 22:10, Adam wrote:
> >> I upgraded my 4 host private dev cloud running on CentOS 6.4 x86_64 from
> >> version 4.1.1 to 4.2 per upgrade instructions. I'm just using 4 HP Z 600
> >> workstations (box 1 has two nics, the cloudstack-management server, the
> >> mysql db, primary and secondary storage and a cloudstack-agent, and boxes 2
> >> - 4 each just have primary storage and a cloudstack-agent). It's nothing
> >> fancy, but it's been working perfectly now for months. All seemed to go
> >> very smoothly except for the very last step:
> >> 
> >> {code}
> >> nohup cloudstack-sysvmadm -d cs-east-dev1 -u root -p support -a > sysvm.log
> >> 2>&1 &
> >> {code}
> >> 
> >> 
> >> It could not restart the system vms for some reason. (I do not have the
> >> original sysvm.log as I've now been playing with this failed upgrade for
> >> two days). However, here is the sysvm.log from the very last attempt:
> >> 
> >> {code}
> >> nohup: ignorin

Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-18 Thread kel...@backbonetechnology.com
Hi,

This post might help you out.

http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/

If you need further help, please let me know.

-Kelcey

Sent from my HTC

- Reply message -
From: "sebgoa" 
To: , "Kelcey Jamison Damage" 
, "Travis Graham" 
Subject: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll 
back to 4.1.1 backup initially failed: could not start management service. 
Finally got it to work. I hope this helps someone else avoid my same pain...
Date: Fri, Oct 18, 2013 12:12 AM

On Oct 16, 2013, at 10:48 AM, Milamber  wrote:

> Hello,
> 
> > curl: (7) couldn't connect to host
> Your "integration.api.port" isn't define in your global settings. See below.
> 
> 
> Current instructions to upgrade from 4.1.1 to 4.2 are not complete on 
> official docs (missing 2 important steps before upgrade).
> https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-4.0-to-4.1
>  (4.2 in reality)
> 
> Please note too, in 4.2.0, currently the HA with CLVM (iscsi) don't works for 
> Guest VM (works for VR/SystemVM). This have been fixed for 4.2.1
> See : https://issues.apache.org/jira/browse/CLOUDSTACK-4627
> https://issues.apache.org/jira/browse/CLOUDSTACK-4777
> You can cherry-pick the commit to fix on 4.2.0 tag.
> 
> 
> Here my process for Centos 6.4, KVM, CS 4.1.1 to CS 4.2.0:
> 
> BEFORE UPGRADE
> 
> 1/ With the Web UI, add a new template with the name "systemvm-kvm-4.2" 
> (exactly) from this link:
> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
> Don't start the upgrade until the status changes to "Download complete".
> 
> 2/ With the Web UI, go to Global Settings and check whether 
> "integration.api.port" is set to 8096. If not, set it, then restart the 
> management service.
> (You can revert this change after the upgrade for security reasons.)
> 
> For points 1/ and 2/, see the start of the 3.0.x-to-4.2 process in the docs:
> https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
> 
> 
> START OF UPGRADE
> 
> 3/ Stop CS services (all mgnt, all nodes)
> /etc/init.d/cloudstack-management stop
> /etc/init.d/cloudstack-usage stop
> /etc/init.d/cloudstack-agent stop
> 
> mysqldump -u root -p cloud > cloudstack-cloud-backup.sql
> mysqldump -u root -p cloud_usage > cloudstack-cloud_usage-backup.sql
> mysqldump -u root -p cloudbridge > cloudstack-cloudbridge-backup.sql
> 
> 4/ Modify /etc/yum.repos.d/cloudstack.repo to change to 4.2 repo
> 
> 
> 5/ Upgrade (all mgnt, all nodes)
> yum clean all && yum update   (full update CS + CentOS)
> 
> OR for only CS update
> yum upgrade cloudstack-management
> yum upgrade cloudstack-agent
> 
> See step 10.a-e in docs
> 
> 
> 6/ Start agent (all nodes)
> /etc/init.d/cloudstack-agent start
> 
> 7/ Start Mngt (first mgnt)
> /etc/init.d/cloudstack-management start
> 
> Check the log:
> tail -f /var/log/cloudstack/management/management-server.log | grep -v DEBUG
> 
> 8/ Restart (recreate) the SSVM, Console proxy and all VR
> cloudstack-sysvmadm -d ipOfDatabase -u cloud -pyourDBpassword -a
> (a long process; you can follow it on the console and watch the status of 
> the system VMs/VRs in the Web UI)
> 
> 9/ Start Usage service
> /etc/init.d/cloudstack-usage start
> (and start the remaining management services if needed)
> 
> 
> Milamber

Milamber, thanks for writing this down. I am copying Kelcey, who hit this issue, 
and Travis, who has been working on the docs.

> 
> 
> On 15/10/2013 at 22:10, Adam wrote:
>> I upgraded my 4 host private dev cloud running on CentOS 6.4 x86_64 from
>> version 4.1.1 to 4.2 per upgrade instructions. I'm just using 4 HP Z 600
>> workstations (box 1 has two nics, the cloudstack-management server, the
>> mysql db, primary and secondary storage and a cloudstack-agent, and boxes 2
>> - 4 each just have primary storage and a cloudstack-agent). It's nothing
>> fancy, but it's been working perfectly now for months. All seemed to go
>> very smoothly except for the very last step:
>> 
>> {code}
>> nohup cloudstack-sysvmadm -d cs-east-dev1 -u root -p support -a > sysvm.log
>> 2>&1 &
>> {code}
>> 
>> 
>> It could not restart the system vms for some reason. (I do not have the
>> original sysvm.log as I've now been playing with this failed upgrade for
>> two days). However, here is the sysvm.log from the very last attempt:
>> 
>> {code}
>> nohup: ignoring input
>> 
>> Stopping and starting 1 secondary storage vm(s)...
>> curl: (7) couldn't connect to host
>> ERROR: Failed to stop secondary storage vm with id 14
>> 
>> Done stopping and starting secondary storage vm(s)
>> 
>> Stopping and starting 0 console proxy vm(s)...
>> No running console proxy vms found
>> 
>> 
>> Stopping and starting 1 running routing vm(s)...
>> curl: (7) couldn't connect to host
>> 2
>> Done restarting router(s).
>> {code}
>> 
>> 
>> As I mentioned above, I've been playing around with this for 2 days now and
>> actually got the 4.2 mana

Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-18 Thread sebgoa

On Oct 16, 2013, at 10:48 AM, Milamber  wrote:

> Hello,
> 
> > curl: (7) couldn't connect to host
> Your "integration.api.port" isn't define in your global settings. See below.
> 
> 
> Current instructions to upgrade from 4.1.1 to 4.2 are not complete on 
> official docs (missing 2 important steps before upgrade).
> https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-4.0-to-4.1
>  (4.2 in reality)
> 
> Please note too, in 4.2.0, currently the HA with CLVM (iscsi) don't works for 
> Guest VM (works for VR/SystemVM). This have been fixed for 4.2.1
> See : https://issues.apache.org/jira/browse/CLOUDSTACK-4627
> https://issues.apache.org/jira/browse/CLOUDSTACK-4777
> You can cherry-pick the commit to fix on 4.2.0 tag.
> 
> 
> Here my process for Centos 6.4, KVM, CS 4.1.1 to CS 4.2.0:
> 
> BEFORE UPGRADE
> 
> 1/ With the Web UI, add a new template with the name "systemvm-kvm-4.2" 
> (exactly) from this link:
> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
> Don't start the upgrade until the status changes to "Download complete".
> 
> 2/ With the Web UI, go to Global Settings and check whether 
> "integration.api.port" is set to 8096. If not, set it, then restart the 
> management service.
> (You can revert this change after the upgrade for security reasons.)
> 
> For points 1/ and 2/, see the start of the 3.0.x-to-4.2 process in the docs:
> https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
> 
> 
> START OF UPGRADE
> 
> 3/ Stop CS services (all mgnt, all nodes)
> /etc/init.d/cloudstack-management stop
> /etc/init.d/cloudstack-usage stop
> /etc/init.d/cloudstack-agent stop
> 
> mysqldump -u root -p cloud > cloudstack-cloud-backup.sql
> mysqldump -u root -p cloud_usage > cloudstack-cloud_usage-backup.sql
> mysqldump -u root -p cloudbridge > cloudstack-cloudbridge-backup.sql
> 
> 4/ Modify /etc/yum.repos.d/cloudstack.repo to change to 4.2 repo
> 
> 
> 5/ Upgrade (all mgnt, all nodes)
> yum clean all && yum update   (full update CS + CentOS)
> 
> OR for only CS update
> yum upgrade cloudstack-management
> yum upgrade cloudstack-agent
> 
> See step 10.a-e in docs
> 
> 
> 6/ Start agent (all nodes)
> /etc/init.d/cloudstack-agent start
> 
> 7/ Start Mngt (first mgnt)
> /etc/init.d/cloudstack-management start
> 
> Check the log:
> tail -f /var/log/cloudstack/management/management-server.log | grep -v DEBUG
> 
> 8/ Restart (recreate) the SSVM, Console proxy and all VR
> cloudstack-sysvmadm -d ipOfDatabase -u cloud -pyourDBpassword -a
> (a long process; you can follow it on the console and watch the status of 
> the system VMs/VRs in the Web UI)
> 
> 9/ Start Usage service
> /etc/init.d/cloudstack-usage start
> (and start the remaining management services if needed)
> 
> 
> Milamber

Milamber, thanks for writing this down. I am copying Kelcey, who hit this issue, 
and Travis, who has been working on the docs.

> 
> 
> On 15/10/2013 at 22:10, Adam wrote:
>> I upgraded my 4 host private dev cloud running on CentOS 6.4 x86_64 from
>> version 4.1.1 to 4.2 per upgrade instructions. I'm just using 4 HP Z 600
>> workstations (box 1 has two nics, the cloudstack-management server, the
>> mysql db, primary and secondary storage and a cloudstack-agent, and boxes 2
>> - 4 each just have primary storage and a cloudstack-agent). It's nothing
>> fancy, but it's been working perfectly now for months. All seemed to go
>> very smoothly except for the very last step:
>> 
>> {code}
>> nohup cloudstack-sysvmadm -d cs-east-dev1 -u root -p support -a > sysvm.log
>> 2>&1 &
>> {code}
>> 
>> 
>> It could not restart the system vms for some reason. (I do not have the
>> original sysvm.log as I've now been playing with this failed upgrade for
>> two days). However, here is the sysvm.log from the very last attempt:
>> 
>> {code}
>> nohup: ignoring input
>> 
>> Stopping and starting 1 secondary storage vm(s)...
>> curl: (7) couldn't connect to host
>> ERROR: Failed to stop secondary storage vm with id 14
>> 
>> Done stopping and starting secondary storage vm(s)
>> 
>> Stopping and starting 0 console proxy vm(s)...
>> No running console proxy vms found
>> 
>> 
>> Stopping and starting 1 running routing vm(s)...
>> curl: (7) couldn't connect to host
>> 2
>> Done restarting router(s).
>> {code}
>> 
>> 
>> As I mentioned above, I've been playing around with this for 2 days now and
>> actually got the 4.2 management server to finally start, but none of the
>> System VMs worked. I even restarted all of the CentOS hosts (which was a
>> huge hassle), but that didn't seem to help at all. I eventually found this
>> bug: https://issues.apache.org/jira/browse/CLOUDSTACK-4826 which seemed
>> similar to my issues.
>> 
>> I was also experiencing a strange issue where all 10 of my private
>> Management IP Addresses were used for some reason. Every time I restarted
>> the cloudstack-management service, 2 more IPs were taken up, but none ever
>> got released. Also since the Sys

Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-16 Thread Adam
Correct
On Oct 16, 2013 1:54 AM, "Daan Hoogland"  wrote:

> Hi Adam,
>
> forgive me if I missed some important clue in your report. It seems to
> me that it is the upgrade procedure that is not production ready, not
> version 4.2. Am I right?
>
> regards,
> Daan
>
> On Tue, Oct 15, 2013 at 11:10 PM, Adam  wrote:
> > I upgraded my 4 host private dev cloud running on CentOS 6.4 x86_64 from
> > version 4.1.1 to 4.2 per upgrade instructions. I'm just using 4 HP Z 600
> > workstations (box 1 has two nics, the cloudstack-management server, the
> > mysql db, primary and secondary storage and a cloudstack-agent, and
> boxes 2
> > - 4 each just have primary storage and a cloudstack-agent). It's nothing
> > fancy, but it's been working perfectly now for months. All seemed to go
> > very smoothly except for the very last step:
> >
> > {code}
> > nohup cloudstack-sysvmadm -d cs-east-dev1 -u root -p support -a >
> sysvm.log
> > 2>&1 &
> > {code}
> >
> >
> > It could not restart the system vms for some reason. (I do not have the
> > original sysvm.log as I've now been playing with this failed upgrade for
> > two days). However, here is the sysvm.log from the very last attempt:
> >
> > {code}
> > nohup: ignoring input
> >
> > Stopping and starting 1 secondary storage vm(s)...
> > curl: (7) couldn't connect to host
> > ERROR: Failed to stop secondary storage vm with id 14
> >
> > Done stopping and starting secondary storage vm(s)
> >
> > Stopping and starting 0 console proxy vm(s)...
> > No running console proxy vms found
> >
> >
> > Stopping and starting 1 running routing vm(s)...
> > curl: (7) couldn't connect to host
> > 2
> > Done restarting router(s).
> > {code}
> >
> >
> > As I mentioned above, I've been playing around with this for 2 days now
> and
> > actually got the 4.2 management server to finally start, but none of the
> > System VMs worked. I even restarted all of the CentOS hosts (which was a
> > huge hassle), but that didn't seem to help at all. I eventually found
> this
> > bug: https://issues.apache.org/jira/browse/CLOUDSTACK-4826 which seemed
> > similar to my issues.
> >
> > I was also experiencing a strange issue where all 10 of my private
> > Management IP Addresses were used for some reason. Every time I restarted
> > the cloudstack-management service, 2 more IPs were taken up, but none
> ever
> > got released. Also since the System VMs would not start, my secondary
> > storage wouldn't start up either.
> >
> > About an hour ago I gave up on 4.2 and I decided to roll back to 4.1.1 on
> > all 4 workstations. First I shutdown all VM instances, then stopped all
> > cloudstack-* services on all 4 workstations, and then ran a "yum
> downgrade
> > cloudstack-*" on all 4 workstations:
> >
> > {code}
> > [root@cs-east-dev1 yum.repos.d]# yum downgrade cloudstack-*
> > Loaded plugins: fastestmirror, refresh-packagekit, security
> > Setting up Downgrade Process
> > Loading mirror speeds from cached hostfile
> >  * base: centos.mirror.nac.net
> >  * extras: mirror.trouble-free.net
> >  * rpmforge: mirror.us.leaseweb.net
> >  * updates: mirror.cogentco.com
> > Resolving Dependencies
> > --> Running transaction check
> > ---> Package cloudstack-agent.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-agent.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-awsapi.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-awsapi.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-cli.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-cli.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-common.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-common.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-management.x86_64 0:4.1.1-0.el6 will be a
> downgrade
> > ---> Package cloudstack-management.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-usage.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-usage.x86_64 0:4.2.0-1.el6 will be erased
> > --> Finished Dependency Resolution
> >
> > Dependencies Resolved
> >
> >
> =
> >  Package   Arch
> >   Version   Repository
> >  Size
> >
> =
> > Downgrading:
> >  cloudstack-agent  x86_64
> >   4.1.1-0.el6   cloudstack
> >  37 M
> >  cloudstack-awsapi x86_64
> >   4.1.1-0.el6   cloudstack
> >  56 M
> >  cloudstack-clix86_64
> >   4.1.1-0.el6   cloudstack
> >  32 k
> >  cloudstack-common 

Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-16 Thread Milamber

Hello,

> curl: (7) couldn't connect to host
Your "integration.api.port" isn't define in your global settings. See below.


Current instructions to upgrade from 4.1.1 to 4.2 are not complete on 
official docs (missing 2 important steps before upgrade).
https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-4.0-to-4.1 
(4.2 in reality)


Please note too, in 4.2.0, currently the HA with CLVM (iscsi) don't 
works for Guest VM (works for VR/SystemVM). This have been fixed for 4.2.1

See : https://issues.apache.org/jira/browse/CLOUDSTACK-4627
https://issues.apache.org/jira/browse/CLOUDSTACK-4777
You can cherry-pick the commit to fix on 4.2.0 tag.


Here my process for Centos 6.4, KVM, CS 4.1.1 to CS 4.2.0:

BEFORE UPGRADE

1/ With the Web UI, add a new template with the name "systemvm-kvm-4.2" 
(exactly) from this link:
http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
Don't start the upgrade until the status changes to "Download complete".

2/ With the Web UI, go to Global Settings and check whether 
"integration.api.port" is set to 8096. If not, set it, then restart the 
management service.
(You can revert this change after the upgrade for security reasons; a quick 
check of the port is sketched below.)

For points 1/ and 2/, see the start of the 3.0.x-to-4.2 process in the docs:
https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
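
As a quick sanity check that the integration API port is actually reachable 
(cloudstack-sysvmadm drives the API over this port with curl), you can try an 
unauthenticated call from the management server. This is only a sketch: it 
assumes the default port 8096 on localhost, no firewall in the way, and uses 
listZones as an arbitrary read-only command.

curl "http://localhost:8096/client/api?command=listZones"
# An XML response (e.g. a listzonesresponse element) means the port is open;
# "curl: (7) couldn't connect to host" means integration.api.port is not set
# or the management service was not restarted after setting it.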


START OF UPGRADE

3/ Stop CS services (all mgnt, all nodes)
/etc/init.d/cloudstack-management stop
/etc/init.d/cloudstack-usage stop
/etc/init.d/cloudstack-agent stop

mysqldump -u root -p cloud > cloudstack-cloud-backup.sql
mysqldump -u root -p cloud_usage > cloudstack-cloud_usage-backup.sql
mysqldump -u root -p cloudbridge > cloudstack-cloudbridge-backup.sql
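
It is worth making sure the dumps are usable before going on; this is only a 
sketch (the cloudbridge dump exists only if the AWS API / cloudbridge database 
is present on your installation):

ls -lh cloudstack-*.sql     # the dumps should not be empty
# If you later need to roll back to 4.1.1, the databases can be restored with:
mysql -u root -p cloud < cloudstack-cloud-backup.sql
mysql -u root -p cloud_usage < cloudstack-cloud_usage-backup.sql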

4/ Modify /etc/yum.repos.d/cloudstack.repo to change to 4.2 repo
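
For reference, a minimal 4.2 repo stanza might look like the sketch below; the 
baseurl is a placeholder, point it at whichever CloudStack 4.2 RPM repository 
you actually use:

[cloudstack]
name=cloudstack
# placeholder URL: replace with your real 4.2 package repository
baseurl=http://your.repo.example/rhel/4.2/
enabled=1
gpgcheck=0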


5/ Upgrade (all mgnt, all nodes)
yum clean all && yum update   (full update CS + CentOS)

OR for only CS update
yum upgrade cloudstack-management
yum upgrade cloudstack-agent

See step 10.a-e in docs


6/ Start agent (all nodes)
/etc/init.d/cloudstack-agent start

7/ Start Mngt (first mgnt)
/etc/init.d/cloudstack-management start

Check the log:
tail -f /var/log/cloudstack/management/management-server.log | grep -v DEBUG

8/ Restart (recreate) the SSVM, Console proxy and all VR
cloudstack-sysvmadm -d ipOfDatabase -u cloud -pyourDBpassword -a
(a long process; you can follow it on the console and watch the status of the 
system VMs/VRs in the Web UI)
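
Because this step can take a while, it is convenient to run it detached and 
follow the log, as in Adam's report earlier in this thread (ipOfDatabase and 
yourDBpassword are placeholders for your own values):

nohup cloudstack-sysvmadm -d ipOfDatabase -u cloud -p yourDBpassword -a > sysvm.log 2>&1 &
tail -f sysvm.log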


9/ Start Usage service
/etc/init.d/cloudstack-usage start
(and start the remaining management services if needed)


Milamber


On 15/10/2013 at 22:10, Adam wrote:

I upgraded my 4 host private dev cloud running on CentOS 6.4 x86_64 from
version 4.1.1 to 4.2 per upgrade instructions. I'm just using 4 HP Z 600
workstations (box 1 has two nics, the cloudstack-management server, the
mysql db, primary and secondary storage and a cloudstack-agent, and boxes 2
- 4 each just have primary storage and a cloudstack-agent). It's nothing
fancy, but it's been working perfectly now for months. All seemed to go
very smoothly except for the very last step:

{code}
nohup cloudstack-sysvmadm -d cs-east-dev1 -u root -p support -a > sysvm.log
2>&1 &
{code}


It could not restart the system vms for some reason. (I do not have the
original sysvm.log as I've now been playing with this failed upgrade for
two days). However, here is the sysvm.log from the very last attempt:

{code}
nohup: ignoring input

Stopping and starting 1 secondary storage vm(s)...
curl: (7) couldn't connect to host
ERROR: Failed to stop secondary storage vm with id 14

Done stopping and starting secondary storage vm(s)

Stopping and starting 0 console proxy vm(s)...
No running console proxy vms found


Stopping and starting 1 running routing vm(s)...
curl: (7) couldn't connect to host
2
Done restarting router(s).
{code}


As I mentioned above, I've been playing around with this for 2 days now and
actually got the 4.2 management server to finally start, but none of the
System VMs worked. I even restarted all of the CentOS hosts (which was a
huge hassle), but that didn't seem to help at all. I eventually found this
bug: https://issues.apache.org/jira/browse/CLOUDSTACK-4826 which seemed
similar to my issues.

I was also experiencing a strange issue where all 10 of my private
Management IP Addresses were used for some reason. Every time I restarted
the cloudstack-management service, 2 more IPs were taken up, but none ever
got released. Also since the System VMs would not start, my secondary
storage wouldn't start up either.

About an hour ago I gave up on 4.2 and I decided to roll back to 4.1.1 on
all 4 workstations. First I shutdown all VM instances, then stopped all
cloudstack-* services on all 4 workstations, and then ran a "yum downgrade
cloudstack-*" on all 4 workstations:

{code}
[root@cs-east-dev1 yum.repos.d]# yum downgrade cloudstack-*
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Downg

Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-16 Thread Nux!

On 16.10.2013 09:06, benoit lair wrote:

Hi Guys !

Do you mean that CloudStack 4.2 is not production ready? Or is it the
upgrade procedure towards CS 4.2 that is not production ready?


It depends how you look at it and how much "product upgradability" 
counts towards its "production readiness".
For example RedHat's RHEL major versions cannot be upgraded, but this 
does not make it any less production ready.


I think a good rule of thumb for any software, if you are serious about 
production, is to stay away from .0 releases.


Lucian

PS: Looking forward to 4.2.1 :)

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-16 Thread benoit lair
Hi Guys !

Do you mean that CloudStack 4.2 is not production ready? Or is it the
upgrade procedure towards CS 4.2 that is not production ready?

Thanks for your responses.

Regards, Benoit.


2013/10/16 Andrija Panic 

> Hi Adam,
>
> I also ran into the same issues when upgrading from 4.0.0 to 4.2, on a test CS
> installation, and was really pissed off, because the documentation on the
> upgrade process is kind of incomplete, or at least really buggy.. but I
> managed to resolve it for now!
>
> 1.
> So if you want to try the upgrade again (the system-VM-not-starting issue), do
> it as per the docs and then follow this guy's post; I did, and it worked very
> well...
> http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/
> (I used method 2)
>
> 2.
> BTW, the problem of management IPs staying in use (assigned to system VMs that
> were not working/starting; after you destroy a system VM, its IP addresses
> still show as taken in the database) - the solution for this is the following:
> In the "cloud" database, table "op_dc_ip_address_alloc", reset to NULL the
> following fields, for all rows/IP addresses that you know are NOT assigned to
> any VM: "nic_id", "reservation_id" and "taken". After that, restart the
> management server and it will be all fine.
>
> I do agree that CS 4.2 on CentOS is not production ready...
>
>
> On 16 October 2013 07:53, Daan Hoogland  wrote:
>
> > Hi Adam,
> >
> > forgive me if I missed some important clue in your report. It seems to
> > me that it is the upgrade procedure that is not production ready, not
> > version 4.2. Am I right?
> >
> > regards,
> > Daan
> >
> > On Tue, Oct 15, 2013 at 11:10 PM, Adam  wrote:
> > > I upgraded my 4 host private dev cloud running on CentOS 6.4 x86_64
> from
> > > version 4.1.1 to 4.2 per upgrade instructions. I'm just using 4 HP Z
> 600
> > > workstations (box 1 has two nics, the cloudstack-management server, the
> > > mysql db, primary and secondary storage and a cloudstack-agent, and
> > boxes 2
> > > - 4 each just have primary storage and a cloudstack-agent). It's
> nothing
> > > fancy, but it's been working perfectly now for months. All seemed to go
> > > very smoothly except for the very last step:
> > >
> > > {code}
> > > nohup cloudstack-sysvmadm -d cs-east-dev1 -u root -p support -a >
> > sysvm.log
> > > 2>&1 &
> > > {code}
> > >
> > >
> > > It could not restart the system vms for some reason. (I do not have the
> > > original sysvm.log as I've now been playing with this failed upgrade
> for
> > > two days). However, here is the sysvm.log from the very last attempt:
> > >
> > > {code}
> > > nohup: ignoring input
> > >
> > > Stopping and starting 1 secondary storage vm(s)...
> > > curl: (7) couldn't connect to host
> > > ERROR: Failed to stop secondary storage vm with id 14
> > >
> > > Done stopping and starting secondary storage vm(s)
> > >
> > > Stopping and starting 0 console proxy vm(s)...
> > > No running console proxy vms found
> > >
> > >
> > > Stopping and starting 1 running routing vm(s)...
> > > curl: (7) couldn't connect to host
> > > 2
> > > Done restarting router(s).
> > > {code}
> > >
> > >
> > > As I mentioned above, I've been playing around with this for 2 days now
> > and
> > > actually got the 4.2 management server to finally start, but none of
> the
> > > System VMs worked. I even restarted all of the CentOS hosts (which was
> a
> > > huge hassle), but that didn't seem to help at all. I eventually found
> > this
> > > bug: https://issues.apache.org/jira/browse/CLOUDSTACK-4826 which
> seemed
> > > similar to my issues.
> > >
> > > I was also experiencing a strange issue where all 10 of my private
> > > Management IP Addresses were used for some reason. Every time I
> restarted
> > > the cloudstack-management service, 2 more IPs were taken up, but none
> > ever
> > > got released. Also since the System VMs would not start, my secondary
> > > storage wouldn't start up either.
> > >
> > > About an hour ago I gave up on 4.2 and I decided to roll back to 4.1.1
> on
> > > all 4 workstations. First I shutdown all VM instances, then stopped all
> > > cloudstack-* services on all 4 workstations, and then ran a "yum
> > downgrade
> > > cloudstack-*" on all 4 workstations:
> > >
> > > {code}
> > > [root@cs-east-dev1 yum.repos.d]# yum downgrade cloudstack-*
> > > Loaded plugins: fastestmirror, refresh-packagekit, security
> > > Setting up Downgrade Process
> > > Loading mirror speeds from cached hostfile
> > >  * base: centos.mirror.nac.net
> > >  * extras: mirror.trouble-free.net
> > >  * rpmforge: mirror.us.leaseweb.net
> > >  * updates: mirror.cogentco.com
> > > Resolving Dependencies
> > > --> Running transaction check
> > > ---> Package cloudstack-agent.x86_64 0:4.1.1-0.el6 will be a downgrade
> > > ---> Package cloudstack-agent.x86_64 0:4.2.0-1.el6 will be erased
> > > ---> Package cloudstack-awsapi.x86_64 0:4.1.1-0.el6 will be a downgrade
> > > ---> Package cloudstack-awsapi.x86_64 0:4.2.0-1

Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-16 Thread Andrija Panic
Hi Adam,

I also ran into the same issues when upgrading from 4.0.0 to 4.2, on a test CS
installation, and was really pissed off, because the documentation on the
upgrade process is kind of incomplete, or at least really buggy.. but I
managed to resolve it for now!

1.
So if you want to try the upgrade again (the system-VM-not-starting issue), do
it as per the docs and then follow this guy's post; I did, and it worked very
well...
http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/
(I used method 2)

2.
BTW, the problem of management IPs staying in use (assigned to system VMs that
were not working/starting; after you destroy a system VM, its IP addresses
still show as taken in the database) - the solution for this is the following:
In the "cloud" database, table "op_dc_ip_address_alloc", reset to NULL the
following fields, for all rows/IP addresses that you know are NOT assigned to
any VM: "nic_id", "reservation_id" and "taken". After that, restart the
management server and it will be all fine.
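
A minimal sketch of that cleanup is below. The ip_address column name, the 
example addresses and the WHERE clause are assumptions; adapt them to the rows 
you have verified are free, and dump the table first so you can undo it:

# back up just this table before touching it
mysqldump -u root -p cloud op_dc_ip_address_alloc > op_dc_ip_address_alloc-backup.sql
# hypothetical example: clear the allocation markers only for addresses that
# you have confirmed are not used by any VM
mysql -u root -p cloud -e "UPDATE op_dc_ip_address_alloc
  SET nic_id = NULL, reservation_id = NULL, taken = NULL
  WHERE ip_address IN ('192.168.56.101','192.168.56.102');"
# then restart the management server
/etc/init.d/cloudstack-management restart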

I do agree that CS 4.2 on CentOS is not production ready...


On 16 October 2013 07:53, Daan Hoogland  wrote:

> Hi Adam,
>
> forgive me if I missed some important clue in your report. It seems to
> me that it is the upgrade procedure that is not production ready, not
> version 4.2. Am I right?
>
> regards,
> Daan
>
> On Tue, Oct 15, 2013 at 11:10 PM, Adam  wrote:
> > I upgraded my 4 host private dev cloud running on CentOS 6.4 x86_64 from
> > version 4.1.1 to 4.2 per upgrade instructions. I'm just using 4 HP Z 600
> > workstations (box 1 has two nics, the cloudstack-management server, the
> > mysql db, primary and secondary storage and a cloudstack-agent, and
> boxes 2
> > - 4 each just have primary storage and a cloudstack-agent). It's nothing
> > fancy, but it's been working perfectly now for months. All seemed to go
> > very smoothly except for the very last step:
> >
> > {code}
> > nohup cloudstack-sysvmadm -d cs-east-dev1 -u root -p support -a >
> sysvm.log
> > 2>&1 &
> > {code}
> >
> >
> > It could not restart the system vms for some reason. (I do not have the
> > original sysvm.log as I've now been playing with this failed upgrade for
> > two days). However, here is the sysvm.log from the very last attempt:
> >
> > {code}
> > nohup: ignoring input
> >
> > Stopping and starting 1 secondary storage vm(s)...
> > curl: (7) couldn't connect to host
> > ERROR: Failed to stop secondary storage vm with id 14
> >
> > Done stopping and starting secondary storage vm(s)
> >
> > Stopping and starting 0 console proxy vm(s)...
> > No running console proxy vms found
> >
> >
> > Stopping and starting 1 running routing vm(s)...
> > curl: (7) couldn't connect to host
> > 2
> > Done restarting router(s).
> > {code}
> >
> >
> > As I mentioned above, I've been playing around with this for 2 days now
> and
> > actually got the 4.2 management server to finally start, but none of the
> > System VMs worked. I even restarted all of the CentOS hosts (which was a
> > huge hassle), but that didn't seem to help at all. I eventually found
> this
> > bug: https://issues.apache.org/jira/browse/CLOUDSTACK-4826 which seemed
> > similar to my issues.
> >
> > I was also experiencing a strange issue where all 10 of my private
> > Management IP Addresses were used for some reason. Every time I restarted
> > the cloudstack-management service, 2 more IPs were taken up, but none
> ever
> > got released. Also since the System VMs would not start, my secondary
> > storage wouldn't start up either.
> >
> > About an hour ago I gave up on 4.2 and I decided to roll back to 4.1.1 on
> > all 4 workstations. First I shutdown all VM instances, then stopped all
> > cloudstack-* services on all 4 workstations, and then ran a "yum
> downgrade
> > cloudstack-*" on all 4 workstations:
> >
> > {code}
> > [root@cs-east-dev1 yum.repos.d]# yum downgrade cloudstack-*
> > Loaded plugins: fastestmirror, refresh-packagekit, security
> > Setting up Downgrade Process
> > Loading mirror speeds from cached hostfile
> >  * base: centos.mirror.nac.net
> >  * extras: mirror.trouble-free.net
> >  * rpmforge: mirror.us.leaseweb.net
> >  * updates: mirror.cogentco.com
> > Resolving Dependencies
> > --> Running transaction check
> > ---> Package cloudstack-agent.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-agent.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-awsapi.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-awsapi.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-cli.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-cli.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-common.x86_64 0:4.1.1-0.el6 will be a downgrade
> > ---> Package cloudstack-common.x86_64 0:4.2.0-1.el6 will be erased
> > ---> Package cloudstack-management.x86_64 0:4.1.1-0.el6 will be a
> downgrade
> > ---> Package cloudstack-management.x86_64 0:4.2.0-1.el6 will be erased
>

Re: Upgrade from 4.1.1 to 4.2 failed. Could not restart system VMs. Roll back to 4.1.1 backup initially failed: could not start management service. Finally got it to work. I hope this helps someone else avoid my same pain...

2013-10-15 Thread Daan Hoogland
Hi Adam,

forgive me if I missed some important clue in your report. It seems to
me that it is the upgrade procedure that is not production ready, not
version 4.2. Am I right?

regards,
Daan

On Tue, Oct 15, 2013 at 11:10 PM, Adam  wrote:
> I upgraded my 4 host private dev cloud running on CentOS 6.4 x86_64 from
> version 4.1.1 to 4.2 per upgrade instructions. I'm just using 4 HP Z 600
> workstations (box 1 has two nics, the cloudstack-management server, the
> mysql db, primary and secondary storage and a cloudstack-agent, and boxes 2
> - 4 each just have primary storage and a cloudstack-agent). It's nothing
> fancy, but it's been working perfectly now for months. All seemed to go
> very smoothly except for the very last step:
>
> {code}
> nohup cloudstack-sysvmadm -d cs-east-dev1 -u root -p support -a > sysvm.log
> 2>&1 &
> {code}
>
>
> It could not restart the system vms for some reason. (I do not have the
> original sysvm.log as I've now been playing with this failed upgrade for
> two days). However, here is the sysvm.log from the very last attempt:
>
> {code}
> nohup: ignoring input
>
> Stopping and starting 1 secondary storage vm(s)...
> curl: (7) couldn't connect to host
> ERROR: Failed to stop secondary storage vm with id 14
>
> Done stopping and starting secondary storage vm(s)
>
> Stopping and starting 0 console proxy vm(s)...
> No running console proxy vms found
>
>
> Stopping and starting 1 running routing vm(s)...
> curl: (7) couldn't connect to host
> 2
> Done restarting router(s).
> {code}
>
>
> As I mentioned above, I've been playing around with this for 2 days now and
> actually got the 4.2 management server to finally start, but none of the
> System VMs worked. I even restarted all of the CentOS hosts (which was a
> huge hassle), but that didn't seem to help at all. I eventually found this
> bug: https://issues.apache.org/jira/browse/CLOUDSTACK-4826 which seemed
> similar to my issues.
>
> I was also experiencing a strange issue where all 10 of my private
> Management IP Addresses were used for some reason. Every time I restarted
> the cloudstack-management service, 2 more IPs were taken up, but none ever
> got released. Also since the System VMs would not start, my secondary
> storage wouldn't start up either.
>
> About an hour ago I gave up on 4.2 and I decided to roll back to 4.1.1 on
> all 4 workstations. First I shutdown all VM instances, then stopped all
> cloudstack-* services on all 4 workstations, and then ran a "yum downgrade
> cloudstack-*" on all 4 workstations:
>
> {code}
> [root@cs-east-dev1 yum.repos.d]# yum downgrade cloudstack-*
> Loaded plugins: fastestmirror, refresh-packagekit, security
> Setting up Downgrade Process
> Loading mirror speeds from cached hostfile
>  * base: centos.mirror.nac.net
>  * extras: mirror.trouble-free.net
>  * rpmforge: mirror.us.leaseweb.net
>  * updates: mirror.cogentco.com
> Resolving Dependencies
> --> Running transaction check
> ---> Package cloudstack-agent.x86_64 0:4.1.1-0.el6 will be a downgrade
> ---> Package cloudstack-agent.x86_64 0:4.2.0-1.el6 will be erased
> ---> Package cloudstack-awsapi.x86_64 0:4.1.1-0.el6 will be a downgrade
> ---> Package cloudstack-awsapi.x86_64 0:4.2.0-1.el6 will be erased
> ---> Package cloudstack-cli.x86_64 0:4.1.1-0.el6 will be a downgrade
> ---> Package cloudstack-cli.x86_64 0:4.2.0-1.el6 will be erased
> ---> Package cloudstack-common.x86_64 0:4.1.1-0.el6 will be a downgrade
> ---> Package cloudstack-common.x86_64 0:4.2.0-1.el6 will be erased
> ---> Package cloudstack-management.x86_64 0:4.1.1-0.el6 will be a downgrade
> ---> Package cloudstack-management.x86_64 0:4.2.0-1.el6 will be erased
> ---> Package cloudstack-usage.x86_64 0:4.1.1-0.el6 will be a downgrade
> ---> Package cloudstack-usage.x86_64 0:4.2.0-1.el6 will be erased
> --> Finished Dependency Resolution
>
> Dependencies Resolved
>
> =
>  Package   Arch
>   Version   Repository
>  Size
> =
> Downgrading:
>  cloudstack-agent  x86_64
>   4.1.1-0.el6   cloudstack
>  37 M
>  cloudstack-awsapi x86_64
>   4.1.1-0.el6   cloudstack
>  56 M
>  cloudstack-clix86_64
>   4.1.1-0.el6   cloudstack
>  32 k
>  cloudstack-common x86_64
>   4.1.1-0.el6   cloudstack
>  92 M
>  cloudstack-management x86_64
>   4.1.1-0.el6   cloudstack
>  55 M
>  cloudstack-usage  x86_64
>   4.1.1-0.el6