On Thu, Sep 26, 2024 at 12:30 AM Christian Schulte <c...@schulte.it> wrote:
>
> On 9/26/24 07:15, Otto Moerbeek wrote:
> > On Thu, Sep 26, 2024 at 06:38:00AM +0200, Christian Schulte wrote:
...
> >> Accidentally ran make in /usr/src and got an unexpected Permission denied
> >> error. I would have expected to be able to run make in /usr/src without any
> >> permission issues. Is this a bug in the build system?

The build system makes some assumptions about permissions and starting
state.  From what you describe, your setup violated those assumptions
in some way, probably by having /usr/obj contain files from a previous
build or release process.


> >>  There must be a way to build base incrementally without having to rebuild 
> >> everything.

Does there exist a documented, maintained, and tested way to "build
base incrementally"?  No.

Would it be *possible* for someone(s) to spend the time to develop,
document, and maintain going forward a way to build base
incrementally?  It's software, anything is possible with enough
resources

Will the project do that or distribute such a thing provided by
others?  Short answer: no.  Longer answer: doing so would absolutely
require understanding the project's development process and the
consequences of that process and how the change would affect the
incentives.  I think it is *extremely* unlikely to happen, being a
gigantic effort that would create new, more complex failure modes that
would be a net negative for the project.  Given all the things to
spend our time on, it seems like an unwise choice.


...
> I am keen on knowing how those snapshots are built. Do they really wipe
> out everything and then do a fresh build - lasting nearly 24h here for
> me. I doubt it.

Almost all (99+%) snapshot builds are by the book, starting with
clearing /usr/obj/ and rebuilding the symlinks.  On modern amd64
hardware that takes a few hours for base + release; for landisk it
takes days.
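
For reference, that "by the book" sequence looks roughly like the
following sketch; the exact steps (and the surrounding release bits)
are what release(8) documents, and the paths assume the standard
/usr/src and /usr/obj layout:

```shell
# Sketch of a from-scratch base build, assuming the standard layout.
# See release(8) for the authoritative, complete procedure.
rm -rf /usr/obj/*     # wipe all products of any previous build
cd /usr/src
make obj              # recreate the obj-directory symlinks
make build            # build and install the whole base system
```

Every snapshot starting from that clean slate is exactly why the
result is trustworthy: nothing stale from an earlier build can leak in.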

The exceptions are the weird ABI breaks where parts of the system have
to be built and installed in a non-standard order to cross over the
change.  For example, when I drove the 64bit time_t transition I wrote
up a careful sequence of build steps to have a kernel that supported
old+new, then get in a new libc and other affected libs, then enough
utilities that used the new libs so that you could then switch back to
the normal "make build" sequence and have it complete.  I did those
and created snaps for everyone to use for some archs and other people
did them for the remaining archs, but immediately after we all went
back to "make clean && make build".  Computer time is cheaper than our
personal time.

Building the system is one of the basic regression and perf tests of
the changes that are made.  If you work on the system for a little bit
you'll get a feel of how you can incrementally build stuff to test
your changes as you work on them, but as the scope of the change grows
it becomes more important to do full builds to catch the interactions
that you didn't expect.  Sometimes that's due to build process kludges
(e.g., the various 'reach-arounds' between some directories) that no
one has spent the brains and time to clean up; other times it's because
change breaks program invocations in the build itself.  Precise
incremental builds 'could' take the former into account but the latter
is all about the variety of operations in a full build, the very thing
incremental builds are trying to avoid.
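
That "feel" for incremental testing usually amounts to rebuilding just
the directory you touched.  A hypothetical sketch, using bin/ls as an
example target (any directory under /usr/src with a standard bsd.prog.mk
or bsd.lib.mk Makefile works the same way):

```shell
# Sketch: rebuild and reinstall one program while iterating on a change.
# bin/ls is just an illustrative choice, not a prescribed target.
cd /usr/src/bin/ls
make obj              # ensure the obj symlink exists for this directory
make                  # compile only this program
make install          # install it for testing
```

This is fine while a change is small and local; as the note above says,
once the change's scope grows, only a full "make build" will shake out
the interactions you didn't anticipate.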

tl;dr: no really, snaps are full builds; if you take shortcuts and it
breaks, you get to keep both pieces, and you'll just be told to follow
the normal process to get back and not to waste people's time.


Philip Guenther
