Hi Kyle,

> James Carlson wrote:
>> Better questions would be:
>>
>> - Can you use Live Upgrade?
>>
> I can, but I don't. I don't generally do 'upgrades' at all. Maybe it's
> me, but I just don't trust them.
> With the totally repeatable JumpStart I use, it's much easier and
> faster to just re-install fresh and be able to trust that the machine
> is in a totally known state.
>
> When I had a large compute farm to take care of, I would not make
> changes to the machines; instead I would make changes to the
> JumpStart, test, and then reinstall all the machines. Flash archives
> would make this even faster.
> With JumpStart I regularly install in a 'BE' that is less than 4GB. I
> don't worry about future installs or upgrades needing more space.
> It's so easy to repartition when I re-JumpStart. And while disk is
> cheap, I don't want to waste 1.5GB waiting around for an upgrade... I
> usually don't have more than 256MB free on my BEs. I don't add
> packages after the install is done, and with some exceptions I don't
> add patches either (they are installed when the next JumpStart
> happens). I do make sure I have room for patches, though.
>
> As James said, 5GB is more than enough for a BE; I get away with 3
> sometimes since swap is shared. I make it a point (almost a religion)
> not to customize or install anything on any of the 'OS' filesystems.
> All my data and apps are kept on a separate partition (if not a
> separate disk in a separate enclosure... usually on a separate
> machine. :) ). The most that gets changed in the OS are some config
> files in /etc (auto_master, sendmail.cf, /etc/default/*, etc.), and
> some softlinks are created. All of that is done by the JumpStart. It's
> so simple.
>
> I don't always, but I have had JS preserve /export when there was
> local data or apps on a machine. When I have used LU I leave /export
> out of the BE, and anything I have installed local to the machine's
> boot disk is there waiting for me.
>
>> - If not, then what specific issues are blocking you from using it?
>>
> As I said above, I just don't trust upgrades. Upgrading from one build
> of NV to the next might be OK since it's a 'test' machine anyway, and
> I'll do a clean install when it's done. But I can't imagine taking
> some machine that had 2.5.1 on it and upgrading it to 2.6, then to 7,
> then to 8... 9... 10??? And possibly upgrading to an update release in
> the middle.
>
> There are just too many questions about the state of the machines. If
> I had edited a config file, are the changes overwritten during the
> install? Or are they left, and the newer version of that config file
> just isn't installed at all?
> What am I missing out on? What am I losing that I had?
>
> To make LU work for me:
>
> 1. I'd personally like to see a 'LiveJumpStart', where the ABE is
> wiped and a fresh install is done using all the JumpStart logic on a
> running machine. Then I can have the speed and known state of JS with
> the limited downtime of LU.

Certainly, this is doable. We do plan to enhance JumpStart to do more
things with the BEs, and more things in general that don't even require
an install or upgrade command, such as installing software packages
from a remote repository only.

> All that said, my JumpStarts do partition the disk for LU. One thing I
> noticed in NV is the ability to create an ABE during JumpStart. I
> found it too limiting, though. So:
>
> 2. I'd like to be able to initialize the current JumpStart install
> location as a BE in the JS profile. Basically, I'd like to be able to
> define no 'filesystems' and only multiple BEs, and 'select' one of
> them for the current install.

This is a good idea.

> 3. I'd like to be able to use the SVM 'mirror' keyword in the BE
> definitions.

Well... actually, this brings up another point that needs to be made
clear. With the Caiman installer we are proposing to support only ZFS
as the root filesystem, not UFS/SVM. The reason for this is that ZFS
offers us so many things that we cannot get with UFS/SVM. The live
upgrade process becomes much more manageable with ZFS. We get the ZFS
data and metadata consistency guarantees. And we get rollback in the
event that adding patches to the system has gone bad, even without
live upgrade (via a snapshot).
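Just to sketch the kind of safety net that gives us (the
rpool/ROOT/be1 dataset name below is a hypothetical example, not a
committed Caiman layout):

    # checkpoint the boot environment before patching
    zfs snapshot rpool/ROOT/be1@prepatch

    # ...apply patches, test...

    # if the patches went bad, back the whole BE out in one step
    zfs rollback rpool/ROOT/be1@prepatch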
Using ZFS as the root filesystem, which implies a ZFS root pool, really
helps us reduce the complexity of disk partitioning for most installs.
Within a ZFS root pool, all upgrades are live upgrades, since we can
take a snapshot of the existing operating environment, clone it,
promote the clone, and do the upgrade, all within the same pool. The
obvious restriction is that there has to be enough space. But once the
root pool is set up and ready to go, a user doesn't need to worry about
modifying the underlying partitions to achieve live upgrade. This is
partly why we have decided that in-place upgrades won't be supported;
ZFS makes it very straightforward to do a live upgrade.
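In ZFS terms that sequence is roughly the following (again with
hypothetical dataset names):

    # snapshot the running BE and clone it within the same pool
    zfs snapshot rpool/ROOT/be1@upgrade
    zfs clone rpool/ROOT/be1@upgrade rpool/ROOT/be2

    # promote the clone so it no longer depends on the running BE
    zfs promote rpool/ROOT/be2

    # ...upgrade rpool/ROOT/be2 while be1 stays live, then boot be2...

The clone initially costs almost no space, since it shares its blocks
with the snapshot; only the changes the upgrade makes are new.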
It is important to keep in mind that live upgrade as we know it today
isn't the live upgrade we are talking about with Caiman. We know there
are issues with the current live upgrade environment, such as running
the live upgrade process on an OS version that is older than the one we
want to upgrade to. The reason a standard upgrade works better in this
regard is that the current design of the Solaris installer assumes the
install/upgrade will run on the same version of the OS as the one being
installed or upgraded to. The current live upgrade breaks this
assumption, which is generally one of the causes of the bugs people
see. It is also a maintenance nightmare for the install team.

> Maybe instead of creating partitions when defining a BE, I could
> create all my partitions with the 'filesystem' keyword (using 'mirror'
> if I like) and then build the BE from the already defined filesystems.
> I currently create /, /lu, /var, and /lu/var. But maybe my filesystems
> could instead be created as "[BE1]/", "[BE2]/", "[BE1]/var", and
> "[BE2]/var", and the BEs wouldn't need to be defined separately?

This would work in a ZFS root pool environment. We wouldn't be creating
partitions when defining a BE; each BE would be a ZFS filesystem inside
a root pool.
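For example, the layout you describe could just be sibling datasets in
the root pool, along the lines of (hypothetical names again):

    # mirroring happens at the pool level, with no SVM metadevices
    zpool create rpool mirror c0t0d0s0 c0t1d0s0
    zfs create rpool/ROOT

    zfs create rpool/ROOT/be1        # your "[BE1]/"
    zfs create rpool/ROOT/be1/var    # your "[BE1]/var"
    zfs create rpool/ROOT/be2        # your "[BE2]/"
    zfs create rpool/ROOT/be2/var    # your "[BE2]/var"

All of the BEs share the pool's space, so there is nothing to
repartition when one is added or removed.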
sarah