Hi,
Something we want to do soon is to replace the buildSrc project with a regular
project. There are a few motivations for this:
* To improve the user experience for those builds that need dedicated build
logic. For example, currently the buildSrc project's 'build' task is used.
But this runs all the tests and checks, whereas for 95% of the time, the user
is only interested in compiling the classes. Or, currently we need to clean the
buildSrc project when the Gradle version changes, whereas for regular projects
we don't need to do this. Or, currently the buildSrc project does not end up in
the IDE model, but would be included if it were a regular project.
* To allow build logic to be both published and used in the same build (but not
in the same project, for now). This will mean that you can use your enterprise
plugins in the same build that produces them. For example, you can use your
custom release plugin to release your custom release plugin. We may use this in
Gradle, too, when we add a plugin dev plugin.
* To detangle project configuration from the project hierarchy. In particular,
this is required for parallel execution, so that projects can be configured in
an
arbitrary order, and across multiple JVMs and/or threads.
DSL-wise, there are 3 main use cases:
1. Declare that a given script depends on the build logic from some project.
2. Declare that every script depends on the build logic from some project. Or
there might be a convention for this, so that you give a project a particular
name or put it in a particular directory, and it is automatically picked up as
a build logic project.
3. Inject configuration to all projects, including those projects that are
built during configuration time.
Use case 1
-------------
I think this is as simple as being able to add project dependencies to the
build script's classpath configuration:
buildscript {
    dependencies { classpath project(':buildLogic') }
}
When we simplify the DSL for applying plugins, this might become something like:
apply project: ':buildLogic', plugin: 'my-custom-plugin'
Implementation-wise, the configuration phase would look something like this:
1. Queue up the configuration of each project, in parent-first order (like we
do now).
2. For each project, if not already configured, then execute the project's
build script.
3. For each script that is executed:
* Execute the buildscript { } section of the build script.
* For each project dependency in the build script classpath, recursively
configure and build the target project. Fail if the target project is currently
being configured.
* Resolve the build script classpath and execute the script.
* For each call to evaluationDependsOn(), recursively configure the target
project. Fail if the target project is currently being configured.
4. For each project that is built during configuration:
* Configure the project as above
* For each project dependency required to build the project, recursively
configure the target project. Fail if the target project is currently being
configured.
* Add the tasks that build the project's runtime classpath to the DAG.
* Execute the tasks.
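To make the recursion and the failure mode concrete, here is a minimal sketch
in plain Groovy (not real Gradle internals; the type, field and state names are
made up for illustration) of configuring a project on demand, with a guard
against configuration cycles:

enum State { UNCONFIGURED, CONFIGURING, CONFIGURED }

class ProjectNode {
    String path
    State state = State.UNCONFIGURED
    List<ProjectNode> buildScriptClasspathDeps = []   // project deps on the build script classpath
    Closure buildScript = {}                          // the body of the build script
}

def configure(ProjectNode project) {
    if (project.state == State.CONFIGURED) {
        return
    }
    if (project.state == State.CONFIGURING) {
        throw new IllegalStateException("${project.path} is currently being configured")
    }
    project.state = State.CONFIGURING
    // buildscript { } section: configure (and build) each classpath project first
    project.buildScriptClasspathDeps.each { dep -> configure(dep) }
    // then resolve the build script classpath and execute the script
    project.buildScript.call()
    project.state = State.CONFIGURED
}

The same kind of guard would apply when evaluationDependsOn() is called, and to
the project dependencies of a project that has to be built during configuration.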
I think this boils down to some changes to dependency resolution:
During the configuration of a project:
1. When a Configuration is resolved, for each project dependency we trigger
configuration of the target project and building of its artefacts.
2. When a Configuration's buildDependencies are queried, for each project
dependency we trigger configuration of the target project.
At other times (e.g. task execution):
1. When a Configuration is resolved, for each project dependency assert that
the target project has been configured and the artefacts built. It's an error
if not.
2. When a Configuration's buildDependencies are queried, for each project
dependency assert that the target project has been configured. It's an error if
not.
And the same kind of thing for task dependencies:
* When a task's dependencies are resolved during configuration, trigger the
configuration of the target project.
* When a task's dependencies are resolved at other times, assert that the
target project has been configured.
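As a small illustration of the difference (':codegen' and the 'generator'
configuration are made-up names), under the proposed rules the eager resolve
below would trigger the configuration of ':codegen' and the building of its
artefacts during this project's configuration, while the resolve inside the
task action simply requires that this has already happened:

configurations { generator }
dependencies { generator project(':codegen') }

// resolved during configuration: triggers configuration of ':codegen' and
// the building of its artefacts
def generatorJars = configurations.generator.files

task generateSources {
    doLast {
        // resolved at execution time: ':codegen' must already have been
        // configured and built, otherwise it's an error
        println configurations.generator.files
    }
}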
Some open issues:
* Currently, the buildSrc classes are available in the settings script. This
would not be the case if a regular project is used. Some possible solutions:
- Use an external script for any shared logic.
- Allow the settings script to add projects in its settingsscript { }
section, and resolve configurations as above. There is a rough sketch of this
after this list.
- Move the logic to an external project, and allow plugins to be applied to
the Settings object.
- Allow build scripts to add projects.
- Chop your settings script into two: one which defines the build logic
project, and a second which declares a dependency on that project and uses it
to define the remaining projects.
* Tasks can be executed before the DAG is fully populated, and before the 'DAG
ready' event has been fired. This means that some conditional configuration may
not have been executed when these tasks are executed. Introducing build types
might be an option here, so that the conditional stuff is applied much earlier
in the configuration phase.
* Projects can be configured and tasks executed before the parent project has
had a chance to do configuration injection. More on this below.
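Coming back to the settingsscript { } idea above, a hypothetical
settings.gradle might look something like this (none of this DSL exists; the
block and project names are just for illustration):

settingsscript {
    // the build logic project is added here, so that it can be configured and
    // built before the rest of the settings script executes
    include 'buildLogic'
    dependencies { classpath project(':buildLogic') }
}

// classes from ':buildLogic' are available from here on
include 'core', 'docs'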
Use case 2
------------
I like the idea behind the buildSrc project: you just put your build logic in a
certain place, and it is just made available. It would be a shame to lose this.
I wonder, however, if we really need this, assuming we can reduce the
boilerplate for adding a project dependency to a build script classpath down to
a single statement. We might also tackle this by making script 'plugins' work
more like plugins, so that something like:
apply plugin: 'my-plugin'
might come from a compiled class from another project, or might apply
$rootDir/gradle/my-plugin.gradle (or whatever).
This way, plugins are provided by the environment and the consuming script
doesn't care where they come from. What is currently in buildSrc would turn
into one of the following:
* A regular project in some external build, with plugins published to a
repository.
* A regular project in the same build, with plugins built locally.
* A script in some conventional (or declared) location.
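For comparison (the plugin id and script location below are just examples),
the first statement is roughly what we have today, while the second is the
proposed form, where the consuming script no longer cares which of the three
the plugin comes from:

// today: the consuming script knows the plugin is a script, and where it lives
apply from: "$rootDir/gradle/my-plugin.gradle"

// proposed: the same statement, whether 'my-plugin' is a compiled class from
// another project, a published plugin, or a script in a conventional location
apply plugin: 'my-plugin'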
Use case 3
------------
The current approach of using allprojects {} and friends for configuration
injection isn't going to work, as the build logic project will potentially have
been configured and built before the injecting script has a chance to execute.
There are a couple of existing approaches that would work (but are a bit
awkward):
* Move the shared logic to a script, and apply it from various locations.
* Move the shared logic to a plugin in a second build logic project, and depend
on it from various locations.
The existing configuration injection methods have some other problems. First,
these methods guarantee that the code is called for every project, and that
every project is configured. However, this stops us from doing some things:
* Skip the configuration of projects that aren't relevant to the current build.
E.g. in the Gradle build, don't configure all the plugin projects if I'm
running the unit tests for core.
* Short-circuit the configuration of projects whose outputs are up to date.
E.g. in the Gradle build, when I'm working on the C++ plugin, don't configure
all the core projects when none of their source or configuration has changed.
* Use compatible pre-built artefacts from a binary repository, rather than
configuring the projects and building their artefacts. E.g. in the Gradle
build, when I'm working on the C++ plugin, just get the rest of the binaries
from the CI server (not a great example, but you get the idea).
Second, these methods guarantee that the code is always called in the same
context. This stops us from doing things like:
* Building separate chunks of the model concurrently.
* Building the model across multiple JVMs or machines.
So, I think we need a new DSL here. Some options:
1. Just change the injection methods, so that they drop these guarantees.
2. Change the injection methods so that they have 2 modes. Allow a build script
to declare which mode it needs.
3. Add new injection methods, with different names to the existing methods.
4. Use scripts in conventional locations. So, perhaps
$rootDir/gradle/allprojects.gradle is applied to each project before it is
configured.
5. Allow configuration to be injected from the settings script (with the new
semantics).
6. Add a new type of build script, with injection methods that have the same
names as the existing ones, but with the new semantics.
Option 1) is not really an option. Options 2), 3) and 6) don't solve the build
logic project problem. Personally, I like 5), because it detangles the build
configuration from the root project. What is interesting about this option is
that it allows you to have a single .gradle file for an entire multi-project
build that both defines the projects and injects configuration into them.
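To make option 5) a little more concrete, a hypothetical settings.gradle for
such a build might look something like this (the injection-from-settings DSL
and the names are made up; the point is that the injected block would run with
the new semantics):

include 'core', 'launcher', 'docs'

// hypothetical injection with the new semantics: executed for each project
// only as (and if) that project is actually configured
allprojects {
    apply plugin: 'java'
    group = 'org.example'
    version = '1.0-SNAPSHOT'
}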
An open issue is exactly what the semantics of the injection methods would be.
They're going to have to deal with the fact that the configuration code may end
up running in various different JVMs. This has some implications as to how
values are shared across projects, e.g. a calculated version.
Migration
----------
I think eventually we want to get rid of buildSrc altogether. The plan would be
to implement the above use cases as experimental features, leaving buildSrc
alone. Then, we should shake out the new configuration mechanism further with
some of the parallel execution and partial configuration features. Once we're
fairly happy with how this looks, we would deprecate the buildSrc project, and
later remove it.
--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
VP of Engineering, Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com