Hi Alessandro,

Sorry to step in late. Did you follow the current upgrade instructions [1]? I
think we still have to recopy 4 scripts from the cloudstack-management
server to the freshly upgraded XenServer (a sketch of the copy commands
follows the list below):

/opt/xensource/sm/NFSSR.py
/opt/xensource/bin/setupxenserver.sh
/opt/xensource/bin/make_migratable.sh
/opt/xensource/bin/cloud-clean-vlan.sh
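
For reference, copying them from the management server usually looks
something like the sketch below. The source directory and the host address
are assumptions on my side (check the paths your cloudstack-common package
actually ships, and see [1]):

  # Hypothetical values: replace XS_HOST with your upgraded host's address
  # and verify SRC against your cloudstack-common install before copying.
  XS_HOST=192.168.0.10
  SRC=/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver
  scp $SRC/xenserver60/NFSSR.py root@$XS_HOST:/opt/xensource/sm/NFSSR.py
  scp $SRC/setupxenserver.sh    root@$XS_HOST:/opt/xensource/bin/setupxenserver.sh
  scp $SRC/make_migratable.sh   root@$XS_HOST:/opt/xensource/bin/make_migratable.sh
  scp $SRC/cloud-clean-vlan.sh  root@$XS_HOST:/opt/xensource/bin/cloud-clean-vlan.sh
  # If I remember correctly, the doc then has you run setupxenserver.sh on the host:
  ssh root@$XS_HOST "/opt/xensource/bin/setupxenserver.sh"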

I don't see any wrong steps in your process, except for the copy of the 4
files. Since you upgraded from 6.2 to 6.5, I'm wondering whether the dom0
iptables rules were changed during the upgrade and CloudStack lost
connectivity to the freshly upgraded XenServer.
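
A quick way to verify that is to test the XAPI port (443) from the
management server and to look at the rules in dom0; something like this
(192.168.0.10 is just a placeholder for the upgraded host):

  # From the cloudstack-management server: can we still reach XAPI on 443?
  nc -zv 192.168.0.10 443
  # In dom0 on the upgraded host: list the current firewall rules.
  iptables -L -n --line-numbers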

Also, once the pool master was upgraded, did you perform a "Force Reconnect"
in CloudStack and look into the management-server.log to see what went wrong?
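
Something along these lines usually shows what the management server is
complaining about (default log path for a package install; grep for the
host's IP or name):

  tail -f /var/log/cloudstack/management/management-server.log | grep -i 192.168.0.10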


I agree with you, Davide: placing a node in maintenance mode in CloudStack
must not put the host into maintenance mode in the XenServer pool, because
that could trigger a pool-master change, which is not wanted during
maintenance such as applying hotfixes.

[1]
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.6/hypervisor/xenserver.html#upgrading-xenserver-versions


Regards,


On Fri, Jan 8, 2016 at 5:20 AM, Alessandro Caviglione <
c.alessan...@gmail.com> wrote:

> Hi Yiping,
> yes, thank you very much!!
> Please share the doc so I can try the upgrade process again and see if it
> was only an "unfortunate coincidence of events" or a wrong upgrade process.
>
> Thanks!
>
> On Fri, Jan 8, 2016 at 10:20 AM, Nux! <n...@li.nux.ro> wrote:
>
> > Yiping,
> >
> > Why not make a blog post about it so everyone can benefit? :)
> >
> > Lucian
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > ----- Original Message -----
> > > From: "Yiping Zhang" <yzh...@marketo.com>
> > > To: users@cloudstack.apache.org, aemne...@gmail.com
> > > Sent: Friday, 8 January, 2016 01:31:21
> > > Subject: Re: A Story of a Failed XenServer Upgrade
> >
> > > Hi, Alessandro
> > >
> > > Late to the thread. Is this still an issue for you?
> > >
> > > I went through this process before and I have a step-by-step document
> > > that I can share if you still need it.
> > >
> > > Yiping
> > >
> > >
> > >
> > >
> > > On 1/2/16, 4:43 PM, "Ahmad Emneina" <aemne...@gmail.com> wrote:
> > >
> > >>Hi Alessandro,
> > >>Without seeing the logs or DB, it will be hard to diagnose the issue.
> > >>I've seen something similar in the past, where the XenServer host
> > >>version isn't getting updated in the DB as part of the XS upgrade
> > >>process. That caused CloudStack to use the wrong hypervisor resource to
> > >>try connecting back to the XenServers... ending up in failure. If you
> > >>could share sanitized versions of your log and DB, someone here might be
> > >>able to give you the necessary steps to get your cluster back under
> > >>CloudStack control.
> > >>
> > >>On Sat, Jan 2, 2016 at 1:27 PM, Alessandro Caviglione <
> > >>c.alessan...@gmail.com> wrote:
> > >>
> > >>> No guys, as the article wrote, my first action was to put the Pool
> > >>> Master in Maintenance Mode INSIDE CS: "It is vital that you upgrade the
> > >>> XenServer Pool Master first before any of the Slaves. To do so you need
> > >>> to empty the Pool Master of all CloudStack VMs, and you do this by
> > >>> putting the Host into Maintenance Mode within CloudStack to trigger a
> > >>> live migration of all VMs to alternate Hosts".
> > >>>
> > >>> This is exactly what I did, and after the XS upgrade no host was able
> > >>> to communicate with CS, nor with the upgraded host.
> > >>>
> > >>> Does putting a host in Maint Mode within CS also trigger MM on the
> > >>> XenServer host, or does it just move the VMs to other hosts?
> > >>>
> > >>> And again... what are the best practices to upgrade an XS cluster?
> > >>>
> > >>> On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma
> > >>> <rberg...@schubergphilis.com> wrote:
> > >>>
> > >>> > CloudStack should always do the migration of VMs, not the hypervisor.
> > >>> >
> > >>> > That's not true. You can safely migrate outside of CloudStack, as the
> > >>> > power report will tell CloudStack where the VMs live and the DB gets
> > >>> > updated accordingly. I do this a lot while patching and that works
> > >>> > fine on 6.2 and 6.5. I use both CloudStack 4.4.4 and 4.7.0.
> > >>> >
> > >>> > Regards, Remi
> > >>> >
> > >>> >
> > >>> > Sent from my iPhone
> > >>> >
> > >>> > On 02 Jan 2016, at 16:26, Jeremy Peterson <jpeter...@acentek.net>
> > >>> > wrote:
> > >>> >
> > >>> > I don't use XenServer maintenance mode until after CloudStack has put
> > >>> > the Host in maintenance mode.
> > >>> >
> > >>> > When you initiate maintenance mode from the host rather than from
> > >>> > CloudStack, the DB does not know where the VMs are and your UUIDs get
> > >>> > jacked.
> > >>> >
> > >>> > CS is your brains, not the hypervisor.
> > >>> >
> > >>> > Maintenance in CS. All VMs will migrate. Maintenance in XenCenter.
> > >>> > Upgrade. Reboot. Join pool. Remove maintenance, starting at the
> > >>> > hypervisor if needed and then CS, and move on to the next host.
> > >>> >
> > >>> > CloudStack should always do the migration of VMs, not the hypervisor.
> > >>> >
> > >>> > Jeremy
> > >>> >
> > >>> >
> > >>> > -----Original Message-----
> > >>> > From: Davide Pala [mailto:davide.p...@gesca.it]
> > >>> > Sent: Friday, January 1, 2016 5:18 PM
> > >>> > To: users@cloudstack.apache.org
> > >>> > Subject: Re: A Story of a Failed XenServer Upgrade
> > >>> >
> > >>> > Hi Alessandro. If you put the master in maintenance mode, you force
> > >>> > the election of a new pool master. So when you saw the upgraded host
> > >>> > as disconnected, you were connected to the new pool master, and the
> > >>> > host (as a pool member) cannot communicate with a pool master of an
> > >>> > earlier version. The solution? Launch the upgrade on the pool master
> > >>> > without entering maintenance mode. And remember a consistent backup!!!
> > >>> >
> > >>> >
> > >>> >
> > >>> > Sent from my Samsung device
> > >>> >
> > >>> >
> > >>> > -------- Original message --------
> > >>> > From: Alessandro Caviglione <c.alessan...@gmail.com>
> > >>> > Date: 01/01/2016 23:23 (GMT+01:00)
> > >>> > To: users@cloudstack.apache.org
> > >>> > Subject: A Story of a Failed XenServer Upgrade
> > >>> >
> > >>> > Hi guys,
> > >>> > I want to share my XenServer upgrade adventure to understand if I did
> > >>> > something wrong.
> > >>> > I upgraded CS from 4.4.4 to 4.5.2 without any issues; after all the
> > >>> > VRs had been upgraded, I started the upgrade process of my XenServer
> > >>> > hosts from 6.2 to 6.5.
> > >>> > I did not have PoolHA enabled, so I followed this article:
> > >>> >
> > >>> > http://www.shapeblue.com/how-to-upgrade-an-apache-cloudstack-citrix-xenserver-cluster/
> > >>> >
> > >>> > The cluster consists of 3 XenServer hosts.
> > >>> >
> > >>> > First of all I added manage.xenserver.pool.master=false to the
> > >>> > environment.properties file and restarted the cloudstack-management
> > >>> > service.
> > >>> >
> > >>> > After that I put the Pool Master host in Maintenance Mode and, after
> > >>> > all VMs had been migrated, I unmanaged the cluster.
> > >>> > At this point all hosts appeared as "Disconnected" in the CS
> > >>> > interface, which should be right.
> > >>> > Then I put the XenServer 6.5 CD in the host in Maintenance Mode and
> > >>> > started an in-place upgrade.
> > >>> > After XS 6.5 had been installed, I installed 6.5 SP1 and rebooted
> > >>> > again.
> > >>> > At this point I expected that, after clicking Manage Cluster in CS,
> > >>> > all the hosts would come back to "Up" and I could go ahead upgrading
> > >>> > the other hosts...
> > >>> >
> > >>> > But instead, all the hosts still appeared as "Disconnected". I tried
> > >>> > a couple of cloudstack-management service restarts without success.
> > >>> >
> > >>> > So I opened XenCenter and connected to the Pool Master I had upgraded
> > >>> > to 6.5; it appeared to be in Maintenance Mode, so I tried to exit
> > >>> > Maintenance Mode but got the error: "The server is still booting".
> > >>> >
> > >>> > After some investigation, I ran the command "xe task-list" and this
> > >>> > is the result:
> > >>> >
> > >>> > uuid ( RO)                : 72f48a56-1d24-1ca3-aade-091f1830e2f1
> > >>> > name-label ( RO): VM.set_memory_dynamic_range
> > >>> > name-description ( RO):
> > >>> > status ( RO): pending
> > >>> > progress ( RO): 0.000
> > >>> >
> > >>> > I tried a couple of reboots but nothing changed... so I decided to
> > >>> > shut down the server, force a slave host to become master with
> > >>> > emergency mode, remove the old server from CS and reboot CS.
> > >>> >
> > >>> > After that, I saw my cluster up and running again, so I installed XS
> > >>> > 6.2 SP1 on the "upgraded" host and added it back to the cluster...
> > >>> >
> > >>> > So after an entire day of work, I'm in the same situation! :D
> > >>> >
> > >>> > Can anyone tell me if I did something wrong?
> > >>> >
> > >>> > Thank you very much!
> > >>> >
> >
>
