On 28/08/2012, at 12:20 PM, Adam Murdoch wrote:

> Hi,
> 
> One thing that the new build comparison/migration stuff needs to do is run a 
> Gradle build and then assemble a description of what the build produced, in 
> order to compare it.
> 
> We're currently using the tooling API for this, but it's a little awkward. 
> First we run the build using a BuildLauncher. Then we ask for the 
> ProjectOutcomes model. One problem with this is that it's not entirely 
> accurate, as the model builder can only guess what the build would have done. 
> Another problem is that it's potentially quite slow, as we have to configure 
> the build model twice.
> 
> Both these problems would be addressed if we had some way to run the build 
> and assemble the model in one operation. We have a few options for how we 
> might model this.
> 
> Here are some use cases we want to aim for (for the following, 'build model' 
> means the version-specific Gradle model):
> 
> * Request the Eclipse model: configure the build model, apply the eclipse 
> plugin, assemble the Eclipse model.
> * Request the project tasks model: configure the build model, assemble the 
> project tasks model.
> * Run a build from the IDE: run the selected tasks, assemble a build result 
> model (successful/failed).
> * Run a build for comparison: run the selected tasks, assemble the outcomes 
> model from the DAG (outcomes model extends build result model above).
> * Run a build from an integration test: inject classpath from test process, 
> run the selected tasks, assemble a test build result model.
> * Configure a build from an integration test: inject classpath from test 
> process, configure the model, make some assertions, assemble a test build 
> result model.
> * Newer consumer fills out missing pieces of model provided by older 
> provider: inject classpath from consumer process, invoke client provided 
> action around the existing behaviour, client action decorates the result.
> * Create a new Gradle project from the IDE: configure the build model, apply 
> the build-initialize plugin, run some tasks, assemble a build result model.
> * Tooling API client builds its own model: inject classpath from client 
> process, invoke a client provided action, serialise result back. This allows, 
> for example, an IDE to opt in to being able to ask any question of the Gradle 
> model, but in a version specific way.
> 
> What we want to sort out for the 1.2 release is the minimum set of consumer 
> <-> provider protocol changes we can make, to later allow us to evolve 
> towards supporting these use cases. Clearly, we don't want all this stuff for 
> the 1.2 release. 
> 
> Something else to consider is how notifications might be delivered to the 
> client. Here are some use cases:
> 
> * IDE is notified when a change to the Eclipse model is made (either by a 
> local change or a change in the set of available dependencies).
> * IDE is notified when an updated version of a dependency is available.
> * For the Gradle 'keep up-to-date' use case, the client is notified when a 
> change to the inputs of the target output is made.
> 
> Another thing to consider here is support for end-of-life for various 
> (consumer, producer) combinations.
> 
> There's a lot of stuff here. I think it pretty much comes down to a single 
> operation on the consumer <-> provider connection: build request comes in, 
> and build result comes out.
> 
> The build request would specify (most of this stuff is optional):
> - Client provided logging settings: log level, stdin/stdout/stderr and 
> progress listener, etc.
> - Build environment: Java home, JVM args, daemon configuration, Gradle user 
> home, etc.
> - Build parameters: project dir, command-line args, etc.
> - A set of tasks to run. Need to distinguish between 'don't run any tasks', 
> 'run the default tasks', and 'run these tasks'.
> - A client provided action to run. This is probably a classpath, and a 
> serialised action of some kind. Doesn't matter exactly what.
> - A listener to be notified when the requested model changes.
> 
> The build result would return:
> - The failures, if any (the failure might be 'this request is no longer 
> supported').
> - The model of type T.
> - Whether the request is deprecated, and why.
> - Perhaps some additional diagnostics.
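
The request and result shapes above might be sketched roughly as below. All of the type and field names are invented for illustration; none of them exist in the tooling API. One detail worth modelling explicitly is the three-way distinction for tasks: an absent value can mean "run the default tasks", an empty list "run no tasks", and a non-empty list "run exactly these tasks".

```java
import java.util.List;
import java.util.Optional;

// Hypothetical shapes for the single consumer <-> provider operation
// described above. Illustrative only.
public class ProtocolSketch {

    // Absent = run the default tasks; empty list = run no tasks;
    // non-empty list = run exactly these tasks.
    record BuildRequest(Optional<List<String>> tasks,
                        Optional<String> javaHome,
                        List<String> commandLineArgs) {}

    // The result carries the model (if any), the failures (if any),
    // and deprecation information to present to the user.
    record BuildResult<T>(Optional<T> model,
                          List<Throwable> failures,
                          Optional<String> deprecationWarning) {}

    static String describeTasks(BuildRequest request) {
        if (request.tasks().isEmpty()) {
            return "default tasks";
        }
        List<String> tasks = request.tasks().get();
        return tasks.isEmpty() ? "no tasks" : "tasks " + tasks;
    }

    public static void main(String[] args) {
        BuildRequest defaults =
            new BuildRequest(Optional.empty(), Optional.empty(), List.of());
        BuildRequest none =
            new BuildRequest(Optional.of(List.of()), Optional.empty(), List.of());
        BuildRequest some =
            new BuildRequest(Optional.of(List.of("assemble")), Optional.empty(), List.of());
        System.out.println(describeTasks(defaults)); // default tasks
        System.out.println(describeTasks(none));     // no tasks
        System.out.println(describeTasks(some));     // tasks [assemble]
    }
}
```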
> 
> So, given that we only want a subset of the above for 1.2, we need to come up 
> with a strategy for evolving. The current strategy is probably sufficient. We 
> currently have something like this:
> 
> <T> T getTheModel(Class<T> type, BuildOperationParametersVersion1 
> operationParameters);
> 
> The provider dynamically inspects the operationParameters instance. So, for 
> example, if it has a getStandardOutput() method, then the provider assumes 
> that it should use this to get the OutputStream to write the build's standard 
> output to.
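
To make the dynamic inspection concrete, a provider can probe the parameters object with reflection along these lines. This is a minimal sketch, not the actual provider code, and the parameter classes here are stand-ins:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.lang.reflect.Method;

// Sketch of the dynamic-inspection approach: the provider probes the
// operationParameters object for an optional getStandardOutput() method.
public class DynamicInspection {

    // Returns the stream the consumer supplied, or null if the (older)
    // consumer's parameter type has no such method.
    static OutputStream findStandardOutput(Object operationParameters) {
        try {
            Method m = operationParameters.getClass().getMethod("getStandardOutput");
            return (OutputStream) m.invoke(operationParameters);
        } catch (NoSuchMethodException e) {
            return null; // old consumer: feature not present, silently ignore it
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // A newer consumer's parameters object, which has the method...
    public static class NewParameters {
        public OutputStream getStandardOutput() { return new ByteArrayOutputStream(); }
    }

    // ...and an older one, which does not.
    public static class OldParameters {}

    public static void main(String[] args) {
        System.out.println(findStandardOutput(new NewParameters()) != null); // true
        System.out.println(findStandardOutput(new OldParameters()) == null); // true
    }
}
```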
> 
> This means that an old provider will ignore the build request features that 
> it does not understand. To deal with this, the consumer queries the provider 
> version, and uses this to decide whether the provider supports a given 
> feature (the consumer knows which version a given feature was added).

I wonder if this is sufficient. Here are some of the changes we'd want to make:

1. Adding a new parameter to the build request.
2. Changing the semantics for a parameter - changing the return type, etc.
3. Removing a parameter from the build request.
4. Adding some information to the build result that is important to present to 
the user - e.g. deprecation information.
5. Adding some information to the build result that can be ignored by the 
consumer.
6. Changing the semantics for something in the build result - changing the 
return type, etc.
7. Removing some information from the build result.

For 1, the strategy works fine, as the consumer knows which provider version 
the parameter was added in, and can either fail or ignore as appropriate. And 
the provider can dynamically check if the parameter is provided by the consumer.

For 2, we'd probably add a new parameter and prefer it over the old one in the 
provider. Or, the provider can inspect the parameter type dynamically and 
decide how to deal with it. The consumer knows which version of the provider 
the change happened in and can handle appropriately.

For 3, the consumer does not know whether the provider will silently ignore the 
parameter or not. So far, we don't have any parameters that we'd like to 
remove, but it's going to happen at some point.

For 4, the provider can make the information available, but it does not know 
whether the consumer will ignore it. This might not be a problem, as the 
provider can add warnings to the logging output delivered to the consumer.

For 5, the strategy works fine. The provider just adds the information and the 
consumer can use or ignore it.

For 6, the provider can add new information, and the consumer can prefer it 
over the old information. We can't change the return type as the provider has 
no idea if the consumer is capable of handling the new type or not.

For 7, the provider has no idea if the consumer uses the information or not.

So, I think it would make sense to add some way for the provider to understand 
the capabilities of the consumer. I can see two options:

1. The consumer tells the provider its version.
2. Have the consumer and provider declare their capabilities in some way. They 
might still exchange information about versions and so on, but that would be 
for diagnostic reasons.
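
Option 2 could be as simple as each side declaring a set of named capabilities and the provider tailoring its behaviour to the intersection. A minimal sketch, with capability names invented for illustration:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of option 2: consumer and provider each declare named
// capabilities; the provider acts on the intersection.
public class Capabilities {

    static Set<String> negotiated(Set<String> consumer, Set<String> provider) {
        Set<String> shared = new HashSet<>(consumer);
        shared.retainAll(provider);
        return shared;
    }

    public static void main(String[] args) {
        Set<String> consumer = Set.of("deprecation-info", "client-action");
        Set<String> provider = Set.of("deprecation-info", "model-listener");
        System.out.println(negotiated(consumer, provider)); // [deprecation-info]
    }
}
```

With this in place, the provider knows, for example, whether the consumer will surface deprecation information (case 4 above) rather than having to guess from a version number.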

Option 1 is a reasonable option, I think. Not necessary for 1.2, though.


--
Adam Murdoch
Gradle Co-founder
http://www.gradle.org
VP of Engineering, Gradleware Inc. - Gradle Training, Support, Consulting
http://www.gradleware.com
