Here's my opinion on binary updates. I considered two scenarios:

(1) Distribute the object tree and use the normal 'make installkernel', 'make installworld', and 'make upgrade' mechanisms to update.
(2) Distribute the binaries and write a custom installkernel and installworld mechanism, but keep using 'make upgrade' to deal with the final cleanup.

I think option #2 is probably our best bet. For the most part, everything done by installkernel or installworld can be packaged into a single tar file that one just extracts, with only some minor renames of critical library files to avoid blowing up the system.

* The binary dist creates a tar file. The name of the tar file includes the rev and date... basically a unique naming convention that can be used to programmatically glue binary diffs together.

* Binary diffs would be relative to older tar files and use a self-identifying naming scheme, so a script or application can verify that the correct binary diffs are available and applicable to the correct base file(s).

* The tar files can be gzipped for distribution (but diffs are based on the non-gzipped version). The gzipped tar file would remain on the target host as a basis for future binary patches. The installed binaries would NOT be used as a basis for future binary patches, only the tar file.

* A streaming patcher to apply the binary diffs would be the best thing, so the tar file can be left on-disk in gzipped format. I don't know if the binary diff tool mentioned in the thread can do that. e.g. some sort of 'gunzip < tar | binary_patch <patchfile> | tar xvpf -' sequence to apply a binary patch and extract at the same time. A similar sequence could be used to generate the new tar file. Again, the binary patch has to be against the NON-gzipped tar file or it will never work.

* Patches would contain the MD5 of the final result for validation (the ungzipped MD5).

* The tar file is installed on the machine. Certain critical files (main library files like libc.so) are named differently in the tar file. The next-to-last step after application of the tar file renames them to the official names, replacing the current system library files. e.g.
in the tar, libc.so would be 'libc.so.new', and then as a second-to-last step after extraction libc.so.new would be renamed to libc.so.

* Enough of /usr/src is included in the patch to make 'make upgrade' work as it does now, and 'make upgrade' is run as a final step. We don't need the whole /usr/src tree, just a small part of it. I don't want to have a separate upgrade mechanism for the cleanup step; I want us to use 'make upgrade' even for this system.

I see many advantages to this mechanism:

* The upgrade can be reapplied to a system with no harm done if it was botched the first time... we don't want to be left with a non-working system no matter what.

* Low impact on developers: 'make upgrade' is used for the final step either way.

* Easy management of base tars and patches.

* Easy to generate the tar file(s) using the existing buildworld & installworld mechanisms (something similar to how we build releases with nrelease), e.g. a /usr/src/binarydists build system that leverages off of existing systems.

KERNEL UPDATES

The best way to do these is to have named kernel builds and maintain a base dist and patch system for each named build you want to distribute. Official sites would distribute maybe half a dozen different kernel builds, but more importantly, any administrator could set up a 'build' machine with all of his custom builds and do his own binary dists, given the build infrastructure we would include in e.g. /usr/src/binarydists or /usr/src/binarybuild or whatever we want to call it.

As a final note, I would say that we have used our /usr/src based build system to great advantage over the years. We need to leverage it as much as possible when introducing a binary update option.

-Matt
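P.S. The apply side described above (MD5 validation of the ungzipped tar, extraction, then the rename-as-next-to-last-step) can be sketched with only standard tools. This is just a sketch under stated assumptions: the file names, the '.new' suffix convention, and the directory layout are hypothetical, and the binary_patch step itself is omitted since no tool has been chosen yet.

```shell
#!/bin/sh
# Sketch only: MD5-validated extraction with critical files renamed last.
# Names are hypothetical; the binary_patch step is deliberately omitted.
set -e
work=$(mktemp -d)
cd "$work"

# -- Build side: a stand-in dist tar; critical libs carry a .new suffix. --
mkdir -p dist/lib dist/bin
printf 'new libc\n' > dist/lib/libc.so.new
printf 'new ls\n'   > dist/bin/ls
tar cf base-4.2-20250101.tar -C dist .
gzip -c base-4.2-20250101.tar > base-4.2-20250101.tar.gz

# The patch metadata would carry the MD5 of the ungzipped result.
expected=$(md5sum < base-4.2-20250101.tar | cut -d' ' -f1)

# -- Apply side: validate the ungzipped MD5, extract, rename critical files. --
actual=$(gunzip -c base-4.2-20250101.tar.gz | md5sum | cut -d' ' -f1)
[ "$actual" = "$expected" ] || { echo "MD5 mismatch, aborting" >&2; exit 1; }

mkdir root                               # stand-in for the live filesystem
gunzip -c base-4.2-20250101.tar.gz | tar xpf - -C root
for f in root/lib/libc.so.new; do        # next-to-last step: .new -> official name
    mv "$f" "${f%.new}"
done
echo "installed libc: $(cat root/lib/libc.so)"
```

Since the whole thing is idempotent file replacement, re-running it against the same tar does no harm, which matches the "reapply after a botched run" goal.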
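P.P.S. The self-identifying naming scheme for diffs could be checked mechanically before anything is applied. A sketch, assuming a purely hypothetical 'patch-<from>-to-<to>.bin' convention (neither the names nor the convention are decided), which walks a list of patches and verifies they chain from the base rev on disk to the target rev:

```shell
#!/bin/sh
# Sketch: verify a chain of binary diffs connects the base tar we have
# to the target rev. Convention (hypothetical): patch-<from>-to-<to>.bin
chain_ok() {
    have=$1; shift           # rev of the base tar present on the host
    target=$1; shift         # rev we want to reach
    cur=$have
    for p in "$@"; do        # patch file names, in application order
        name=${p##*/}; name=${name#patch-}; name=${name%.bin}
        from=${name%-to-*}; to=${name#*-to-}
        [ "$from" = "$cur" ] || { echo "gap: have $cur, patch wants $from"; return 1; }
        cur=$to
    done
    [ "$cur" = "$target" ] && echo "chain ok: $have -> $target"
}

chain_ok 4.2 4.6 patch-4.2-to-4.4.bin patch-4.4-to-4.6.bin
# prints "chain ok: 4.2 -> 4.6"
```

The same string parsing would let a fetch script decide which diffs it still needs to download before committing to an update.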