On Sat, July 19, 2008 4:52 pm, Tracy R Reed wrote:
> Andrew Lentvorski wrote:
>> There are lots of things which feed into a build.  Compiler versions,
>> tools versions, os versions, etc.  *All* of these need to be tracked and
>> linked to a version number in order to create a build.
>
> And this is one of the other problems I am pondering: We have a machine
> which I call "the build machine". When we compile the SRPM's into RPM's
> for installation on the appliance a great big chunk of the build machine
> becomes part of those RPM's and thus the appliance. Libraries get linked
> in, the gcc on the build machine affects the binaries, etc. Every so
> often we have to apply security fixes or other changes to the build
> machine. So the build machine that built the code today won't be the one
> that built the code from before.
>
> Is this a problem? How big of a problem? I haven't decided yet.
>
> We have all of the SRPM's in the VCS (whichever VCS it ends up being) so
> we could theoretically re-create the software we shipped at any point in
> time except for the fact that this software has to be built somewhere
> and that somewhere will affect the end result. The current solution to
> this in the old build system is to keep the iso's generated for each
> version of the OS. On one hand this seems silly, since the VCS should
> be able to reproduce it, and it takes up disk space and leaves us with
> big .iso files to manage; on the other hand it seems to be the only
> guaranteed way to reproduce the system, and disk space is cheap.
>
> Here is how I am currently thinking the build process will work:
>
>

The more ya think about this stuff, the more it spreads out. As Andy said,
who will version those selfsame compilers and libraries?

Most places, the M$ compilers are bought on the development team's budget,
and they "control" the media (can you say "bottom drawer"?). Couple of
years in, your chances of recreating the original environment are close
to nil.

A lot of shops are going to VMWare as an approach to this. Build the build
machine, abstract it, name and archive the image, make that a labeled part
of each build. I applaud this, but have a sneaking suspicion there are
some unrecognized assumptions ("I'll always be able to get another Asus
i686") lurking in there.
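
For what it's worth, archiving the image gets a lot more useful if
something on the build box also writes down what it contained at build
time. Here's a rough Python 3 sketch of that idea; it assumes an
RPM-based build machine, and the image label and output format are
invented examples, not anything Tracy's build actually uses:

#!/usr/bin/env python3
"""Hypothetical sketch: snapshot the build environment into a manifest
stored alongside the archived VM image.  Command names assume an
RPM-based build box; the image label and output format are invented."""

import json
import subprocess
import sys
from datetime import datetime, timezone


def run(cmd):
    """Return a command's stdout, or a placeholder if it isn't available."""
    try:
        return subprocess.check_output(cmd, text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return "<unavailable>"


def build_manifest(image_label):
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "vm_image_label": image_label,  # e.g. "buildbox-2008-07-19"
        "gcc_version": run(["gcc", "--version"]).splitlines()[0],
        "kernel": run(["uname", "-r"]),
        # Record every package on the build machine, so the toolchain
        # that leaked into the binaries is at least written down.
        "installed_rpms": sorted(run(["rpm", "-qa"]).splitlines()),
    }


if __name__ == "__main__":
    label = sys.argv[1] if len(sys.argv) > 1 else "buildbox-unlabeled"
    print(json.dumps(build_manifest(label), indent=2))

Check the resulting manifest in (or stash it next to the ISO) under the
same label as the image, and you can at least answer "which gcc built
that release?" without booting the old VM.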

Do you keep track of compatibilities (known good, known bad, untested) of
your various modules? Do you track distribution of the products?
Technically this is release engineering, but SCM usually gets it assigned
to them (or it's just ignored). And then there are HW changes (a new
keypad requires a code change branch, but half your fielded machines still
have the old keypad).
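
A compatibility matrix doesn't have to be fancy to beat the bottom
drawer. Here's a toy Python sketch of the known good / known bad /
untested idea; the module names, versions, and keypad revisions are all
made up for illustration:

"""Toy sketch of a module/hardware compatibility matrix.  Every name,
version, and hardware revision here is invented for illustration."""

from enum import Enum


class Compat(Enum):
    KNOWN_GOOD = "known good"
    KNOWN_BAD = "known bad"
    UNTESTED = "untested"


# (module@version, hardware revision) -> test status.  Anything not
# listed falls through to UNTESTED, which is the honest default.
MATRIX = {
    ("ui-1.4", "keypad-old"): Compat.KNOWN_GOOD,
    ("ui-1.4", "keypad-new"): Compat.KNOWN_BAD,  # needs the branched driver
    ("ui-1.5", "keypad-new"): Compat.KNOWN_GOOD,
}


def compatibility(module, hw):
    """Look up a combination; anything nobody recorded is untested."""
    return MATRIX.get((module, hw), Compat.UNTESTED)


if __name__ == "__main__":
    for combo in (("ui-1.4", "keypad-new"), ("ui-1.5", "keypad-old")):
        print(combo, "->", compatibility(*combo).value)

The point is just that "untested" is the answer for any combination
nobody wrote down, which is usually the truth.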

All this stuff gets hierarchical (code lines forking at branches, modules
mixing and matching, another cascade as mixed HW and modules go out to
customers, customers with more than one release and/or HW config). And at
90% of the shops where I've worked, some yahoo in marketing is responsible
for tracking all this in an Excel spreadsheet of his own devising.

Bring this to anyone's attention and they either punish you or dump it on
you (or both).

This is depressing -- I'm going back to bed.

-- 
Lan Barnes

SCM Analyst              Linux Guy
Tcl/Tk Enthusiast        Biodiesel Brewer

