Hello all!

 

 

II. Using Solaris Live Upgrade from Veritas.

 

 

 

It may seem to you that this is the only practical way to upgrade Veritas and 
Solaris together. But it is not quite as simple as you may think.

 

There are no problems with the "high level steps"; they are always right. 
But the devil is in the details.

 

1.       The local zone root is on ZFS, and the zone has some VxFS file systems 
listed in its zone configuration.

 

If you switch the SG to the second node, all your VxFS file systems are switched 
as well. This means that all Veritas disk groups are deported from the first node.

 

After that it is not possible to mount the ABE to patch it. The error messages 
look like this:

 

ERROR: unable to mount zones. Error message follows.

ERROR: Unable to mount zone <mymtr-zone> in </.alt.tuta>.

could not verify fs /oracle: could not access /dev/vx/rdsk/mymtrdg/mymtrvol: No 
such file or directory

zoneadm: zone mymtr-zone failed to verify

ERROR: unmounting partially mounted boot environment file systems

ERROR: cannot mount boot environment by name <tuta>
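One way to avoid this failure is to remove the VxFS file system entries from the zone configuration before creating and mounting the ABE, and let VCS mount them instead. A sketch only: the zone name, mount point, and BE name below are taken from the error output above, and the exact syntax may vary by Solaris update.

```shell
# Remove the VxFS fs resource from the zone configuration
# (zone name and mount point are the ones from the log above).
zonecfg -z mymtr-zone "remove fs dir=/oracle"

# After that the ABE can be mounted for patching.
lumount tuta /.alt.tuta
```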

 

 

2.       The global zone is on ZFS as well.

 

vxlustart demands the whole second disk. This is natural for SVM, but ZFS uses 
snapshots, not mirrors, to create an ABE. If you want to use vxlustart in that 
case, you must first split your ZFS root pool (usually a mirror) into two 
different pools. I don't see any practical reason to do this: it takes much 
more space and much more time to create the ABE.
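With a ZFS root you do not need vxlustart at all to create the ABE: plain lucreate clones the current BE inside the same root pool using snapshots. A minimal sketch (the BE name is just an example):

```shell
# lucreate on a ZFS root makes a snapshot/clone of the current BE
# in the same pool -- no second disk is required.
lucreate -n tuta

# Verify the new boot environment.
lustatus
```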

 

 

    Summary:

 

1.       Do not use VxFS inside zones.

2.       Do not use vxlustart if the global root is on ZFS.

 

 

  What to do?

 

To be continued in Part III...

From: Venkata Reddy Chappavarapu 
[mailto:[email protected]] 
Sent: Monday, April 16, 2012 2:51 PM
To: Цветков Павел Анатольевич
Cc: [email protected]; [email protected]
Subject: RE: [Veritas-ha] upgrade VCS 5.1 with many zones

 

Hi Pavel,

Here are the high level steps which you can use to perform Live Upgrade with 
zones to upgrade your cluster from 5.1RP1 to 6.0, with an OS upgrade from 
Solaris 10 Update 9 to Solaris 10 Update 10. 

 

1.       Switch the SG to the second node in the cluster and freeze the SG so 
that it does not fail over during the upgrade of the first node.
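Step 1 can be sketched with the usual VCS commands (the group and node names below are placeholders):

```shell
# Switch the service group to the second node.
hagrp -switch appsg -to node2

# Open the configuration, freeze the group persistently, then save and close.
haconf -makerw
hagrp -freeze appsg -persistent
haconf -dump -makero
```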

2.       Upgrade the SUNWlucfg SUNWluu SUNWlur packages to match the target OS 
version.

3.       Set the ChkZFSMounts attribute of the Zpool resource to false. During 
the LU process, snapshots of the ZFS file systems are created but not mounted, 
and the Zpool VCS resource monitor throws warnings/errors about any unmounted 
ZFS file system when ChkZFSMounts is set to true.
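Setting the attribute might look like this (the Zpool resource name is a placeholder):

```shell
# Disable the mounted-filesystem check on the Zpool resource.
haconf -makerw
hares -modify zpool_res ChkZFSMounts 0
haconf -dump -makero
```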

4.       Begin the Live Upgrade process on the first node using vxlustart. This 
is the step where the OS is upgraded in the ABE.
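A typical vxlustart invocation might look like the sketch below; the disk name and OS image path are placeholders, so check the SFHA install guide for your version's exact options.

```shell
# -u: target OS release, -d: disk for the ABE, -s: path to the OS image.
vxlustart -v -u 5.10 -d c1t1d0s2 -s /mnt/sol10u10
```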

5.       Unfreeze the application SG. Switch the application SG from the second 
node to the first node, where the OS LU upgrade has completed. Freeze the APP 
SG again and begin the LU process on the second node.

6.       Start the OS upgrade on the ABE for the second node using vxlustart.

7.       Once the OS upgrade process is completed for all the cluster nodes, 
upgrade the Veritas (SFHA) packages onto the ABE from 5.1RP1 to 6.0. Ensure 
that the ABE is mounted on the same location on all the nodes in the cluster.

8.       Once the upgrade is completed on the ABE, check whether the Veritas 
packages are upgraded in the ABE on both nodes in the cluster.

9.       Now complete the upgrade process by running the vxlufinish script on 
both (all) the nodes in the cluster.
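For example (a sketch; -u takes the target OS release):

```shell
# Finish the Live Upgrade and activate the ABE; run on every node.
vxlufinish -u 5.10
```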

10.   Take a maintenance window to shut down all the running applications and 
the cluster, and then reboot all the nodes in the cluster.

The Nodes will now boot from the upgraded OS (Solaris 10 Update 10) disk 
partitions. Check the OS release version once the node comes up.

11.   Check the zone path of the zones after the upgrade. If it has changed to 
a new path, modify the zone path to reflect the old zone path.
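If the zone path did change, it can be set back with zonecfg (the zone name and path below are placeholders):

```shell
# Point the zone back at its original root path.
zonecfg -z mymtr-zone "set zonepath=/zones/mymtr-zone"
```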

12.   Ensure that VCS is running and cluster membership has been formed.
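A quick check for step 12, assuming the standard VCS/GAB tooling:

```shell
# Cluster state and service group summary.
hastatus -sum

# GAB port membership; port a (GAB) and port h (VCS) should show all nodes.
gabconfig -a
```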

13.   Bring online the VCS resources required to mount the zone root on one of 
the systems. Perform the steps below on the node where the resources that mount 
the zone root are brought online.

a.       Upgrade the local zone to bring it in sync with the global zone

b.      Ignore the AMF error if any

c.       Bring the zone into running state manually.

d.      The zone may not go into the multi-user/multi-user-server state because 
the vxfsldlic service, added in SF 6.0, is in the disabled state. Log in to the 
zone and enable the vxfsldlic SMF service:

# svcadm enable vxfsldlic

e.      Probe the zone resource in the cluster; it will come online now.
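The probe in step e can be done with hares (the resource and system names are placeholders):

```shell
# Ask VCS to re-probe the zone resource on the upgraded node.
hares -probe zone_res -sys node1
```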

14.   Offline the zone and related resources. Perform step 13 on the other 
node(s) in the cluster.

15.   Now bring the APP SG online on any node.

16.   You may see some AMF-related messages in the logs. You can safely ignore 
them.

 

Thanks & Regards,

Venkata Reddy Chappavarapu

 

 

_______________________________________________
Veritas-ha maillist  -  [email protected]
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-ha
