Kyle McDonald writes:
> I'd go with maybe 8G at the most. SNV is comfortable in 4-5G today so 
> that should cover any growth of solaris itself.
[...]
> Ditto with blastwave. That will update at a much different rate than 
> Solaris, and won't need to change during a LU. Why have 2 copies of it 
> around when 1 could be shared among BE's?

I mostly agree; that matches how I've managed all of my systems for
quite some time now.

The big caveat here is that you need to be very careful with your
packaging database, or you end up with a pile of hash.  Each boot
environment carries its own copy of the packaging database, but
packaged software in a common location exists only once, so one BE's
database can end up describing contents another BE has since changed.

To make this work, you must _NOT_ revert to a previous environment if
you've changed (pkgadd, pkgrm, pkg-get -u) the software in the common
areas.  If you do, you'll have the old packaging database but new
package contents -- which is a recipe for disaster.
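
If you're not sure whether a given BE's database still matches the
shared bits, pkgchk will tell you.  (Just a sketch; "CSWsomething"
stands in for any package that lives in the shared /opt/csw.)

        pkgchk CSWsomething
        # "ERROR" lines about missing files or checksum mismatches
        # mean the shared area has moved on past this BE's database.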

So, the usage model (at upgrade time) looks like this:

        pkg-get -U                        # refresh the Blastwave catalog
        yes | pkg-get -u                  # upgrade the Blastwave packages first
        lurename -e oldone -n newone      # rename the spare BE for the new build
        lumake -n newone                  # repopulate it from the current BE
        # swap the LU tools for the ones shipped in the target image:
        yes | pkgrm SUNWlur SUNWluu SUNWlucfg
        yes | pkgadd -d /path/to/image/S*/P* SUNWlur SUNWluu SUNWlucfg
        luupgrade -u -s /path/to/image -n newone     # upgrade the BE from the image
        luactivate -n newone              # make it the BE to boot next
        /etc/init.d/lu stop
        bootadm update-archive -v
        uadmin 2 1

(OK; just kidding about those last three lines.  Don't try that part
at home.)
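
The supported way to finish is the one luactivate itself tells you
about: reboot with init or shutdown, not reboot/halt/uadmin, so that
the BE switch completes cleanly:

        init 6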

Then, in the new environment, kick the tires.  Make sure things work.
Don't try to upgrade those shared things in /opt until you're _sure_
you're not going to need to revert.
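
One way to keep yourself honest (a sketch; "previousBE" stands in for
whichever BE you'd otherwise fall back to) is to hold off on the
shared areas until the fallback BE is gone:

        lustatus                  # confirm the new BE is the active one
        ludelete previousBE       # nothing left to revert to after this
        yes | pkg-get -u          # only now resume updating /opt/csw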

(Yeah, the whole thing is like constructing an illuminated manuscript.
Things will be far better when there's a ZFS root and ZFS-based
upgrade.)

-- 
James Carlson, Solaris Networking              <james.d.carlson at sun.com>
Sun Microsystems / 1 Network Drive         71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677
