Back in January there was a discussion about cross compilers. For various
reasons I seem to be back in the mode of maintaining cross compilers, and
I'm refreshing the Coyotos/CapROS cross tools in order to get my head back
into the cross tool build middens. I'm not resuming work on Coyotos, so I'm
not going to have time for much ongoing maintenance, but we all want to
have a set of working tools. After talking with Charlie Landau, we have
settled on a course of action.

I want to describe what is being done, but I also want to explain why we
are *not* adopting the "build it yourself" approach that Matt Rice
suggested back in January.


Building a cross-tool chain is a seriously hairy process. Some steps have
to be done multiple times (e.g. GCC). It's delicate, finicky, error-prone,
cranky, not friendly to tool version updates, and generally a pain in the
ass. By the time I stopped, I had 20 virtual machines running different OS
versions and architectures, each dedicated to building the tools for one of
the many possible development host environments. Revising the cross tools
involved a 40 hour compilation process. It was *ridiculous*.
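For anyone who hasn't been through it, the multi-pass nature of the build looks roughly like the sketch below. The target triple, install prefix, and pass names are hypothetical stand-ins, not the actual Coyotos/CapROS recipe; each "pass" stands for a full configure/make/make-install cycle of that component.

```shell
#!/bin/sh
# Illustrative sketch of the bootstrap ordering only. TARGET and PREFIX
# are assumed values for illustration.
TARGET=i386-unknown-coyotos   # hypothetical target triple
PREFIX=/opt/cross             # hypothetical install prefix

# The classic ordering: binutils first, then a minimal C-only GCC built
# without target headers, then the target C library compiled with that
# stage-1 compiler, then GCC again in full -- which is why GCC ends up
# being built more than once.
for pass in binutils gcc-minimal-stage1 target-libc gcc-full; do
    echo "pass: $pass --target=$TARGET --prefix=$PREFIX"
done
```

And that is the ordering for *one* target on *one* host; multiply it by every host/target pair and the 40-hour figure stops being surprising.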

*Packaging* the cross tools was very helpful. It provided a way to ensure
that all tool dependencies were satisfied, and it also meant that every
developer was using bit-identical compilers with bit-identical results.
Those two things, taken together, go a long way toward eliminating bugs
arising from the cross environment itself. The fact that you could add the
tool repository with a single RPM command, and then install the cross tools
for your target with a single YUM command, and get the updates conveyed to
you automatically, made things awfully convenient. It's a lot of work to
get all of that right, and having people build the cross tools by hand is
just begging for trouble.
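Concretely, the day-to-day workflow for a developer looked something like the following. The repository URL and package names here are made up for illustration, and DRY_RUN=echo just prints each command rather than executing it.

```shell
#!/bin/sh
# Sketch of the packaged-tools workflow. The URL and package names are
# hypothetical; DRY_RUN=echo prints each command instead of running it.
DRY_RUN=echo

# One-time setup: add the cross-tool repository with a single RPM command.
$DRY_RUN rpm -ivh http://example.org/coyotos-repo-1.noarch.rpm

# Install the cross tools for one target with a single yum command.
$DRY_RUN yum install coyotos-i386-xenv

# After that, tool updates arrive with ordinary package updates.
$DRY_RUN yum update
```

The point is that the entire toolchain state on a developer's machine is described by installed package versions, which is what makes the bit-identical-compilers guarantee enforceable.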


The last time I reviewed this process was in 2010, and some of the
constraints have changed considerably. At this point, I'm prepared to make
three assumptions about development host machines that were problematic in
2010:

   1. All dev machines are now x86_64.
   2. All dev machines now have hardware virtualization support.
   3. Disk space is a lot less constrained than it used to be.

All three of these assumptions hold for OS X machines, and also for nearly
all current Windows desktops and laptops. If you bought a netbook, you may
be out of luck.

What this means in practical terms is that it is now reasonable to declare
that Coyotos/CapROS development should happen on *one* operating system
version, and that you should be prepared to install a modest-sized virtual
machine for that purpose. In my experience, a virtual machine for Coyotos
or CapROS development can fit comfortably in 40G, and can *probably* be
done in 25G.

We have been going crazy chasing the Fedora Core revision cycle, and given
the ubiquitous availability of virtualization today, it just doesn't make
sense to keep doing that. The packages that I built for CentOS 5.1 in 2010
work unchanged today on CentOS 5.9. That beats the heck out of rebuilding
things every six months, which I just don't have time to do.

So the plan from this point forward is that Coyotos/CapROS development tool
packages will *only* be built for CentOS, and *only* for development from
an x86_64 development host. I'm just now finishing the rebuild for CentOS 5,
and I'll be starting on the build for CentOS 6 shortly. Once that is done,
I will go through and try to bring the tools up to more current versions of
binutils and GCC, and make those available through the test repository.


Jonathan
_______________________________________________
CapROS-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/capros-devel
