On 2012-03-23 00:51, David Holmes wrote:

> Not wanting to go too OT here but I see the build-deps server as
> something to be used at most per machine rather than per developer. We
> have build servers internally that can be used by dozens of developers
> and we don't want multiple copies of toolsets.

And this is exactly how the configure script is set up. If you run
configure --help
you see:
  --with-builddeps-dir    store downloaded build dependencies here
                          [default=/localhome/builddeps]

I.e. the default installation directory is supposed to be shared on
the build server,
and to do it correctly, you also have:
  --with-builddeps-group  chgrp the downloaded build dependencies to
this group
It even takes precautions to make it work when two concurrent
configures try to download the same builddeps.
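The precaution can be sketched roughly like this. This is my illustration of the classic atomic-mkdir lock idiom, not the actual code in the configure script, and the path and file names are made up:

```shell
# Sketch of guarding a shared builddeps directory against concurrent
# configures (illustrative only; names are invented for the demo):
DEPDIR=/tmp/builddeps-demo
LOCK="$DEPDIR/.lock"

rm -rf "$DEPDIR"       # clean slate so the demo is repeatable
mkdir -p "$DEPDIR"

if mkdir "$LOCK" 2>/dev/null; then
    # mkdir is atomic: only one configure gets here. Download and
    # unpack the dependency, then release the lock.
    echo "downloading dependency" > "$DEPDIR/dep.txt"
    rmdir "$LOCK"
else
    # Another configure holds the lock; it will do the download for us.
    echo "another configure is already fetching"
fi
```

The point is that `mkdir` either creates the lock directory or fails, atomically, on every POSIX shell and filesystem, so two configures racing on the same builddeps directory cannot both start a download.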

> Even in the new build
> system I would expect to see the toolsets (for cross-compilation)
> installed on shared NFS mounts for use by these build servers. But at
> worse I would expect to have one installation per machine.

The builddeps system supports using an NFS mount as well. Please have
a look at builddeps.java.conf, which is useful for verifying the old way
of building with an active NFS mount.

But using an NFS mount is so 1990s, at least when it has no support for
versioning the builddeps and no automated assistance to install >only< the
necessary builddeps on the build machine.

> I firmly believe that openjdk build files should only contain
> instructions for building openjdk source code. The alt-src mechanism is
> a simple mechanism to let us override an open source file with a
> "closed" one. This mechanism is available to anyone who wants to
> customize their OpenJDK without hacking the main OpenJDK sources.

And the configure equivalents to alt-src are called
--with-add-source-root= / --with-override-source-root=

> My concern, hence my question about needing to read/understand this
> file, is what happens if it doesn't work on a system? How do we debug
> the issue? Sure we can just run autoconf to generate a new (and
> hopefully working) version, but how do we determine what needs to get
> checked back into the repo?

You debug it, usually by adding echo debugme=$VARTODEBUG
to configure.ac, regenerating configure, and checking the output.
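A hypothetical debug line could look like this in configure.ac (VARTODEBUG is just a placeholder for whatever variable you are actually chasing):

```m4
dnl Illustrative only; VARTODEBUG stands in for the variable of interest.
AC_MSG_NOTICE([debugme VARTODEBUG=$VARTODEBUG])
```

After regenerating with autoconf, running ./configure prints the value among the normal checks, where it is easy to grep for.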

The complexity of the generated configure script comes from it being
cross-shell compatible,
i.e. it runs on many different versions of shells, not just bash. And
if you use macros like AC_LINK_IFELSE, then clearly you are happy
that you did not have to write the expanded code yourself.
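As an illustration (this fragment is made up, not taken from the OpenJDK configure.ac), a link check is a few lines of m4 but expands to a sizeable portable shell sequence in the generated configure:

```m4
dnl Made-up example: check that a trivial program using cos() from
dnl libm links, and stop configure otherwise.
AC_LINK_IFELSE(
  [AC_LANG_PROGRAM([[#include <math.h>]],
                   [[return (int)cos(0.0);]])],
  [AC_MSG_NOTICE([can link against cos()])],
  [AC_MSG_ERROR([cannot link a trivial math program])])
```

The generated shell code handles compiler invocation, temp files, and cleanup in a way that works on the lowest-common-denominator shell, which is exactly the code you are glad not to write by hand.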

We should standardize on a version of autoconf to use, to minimize
the changes to the configure script when an update is made. However,
it is unnecessary to worry too much about the configure diff, since a small
change in configure.ac can cause a large change in the configure
script; it is, after all, compiled code.

But the configure.ac program itself is not complex, as programs go:
just one test after another and some if statements. However, there are
a lot of tests to do, which of course reflects the complexity of setting
up the OpenJDK build on different platforms.

//Fredrik
