On 28/08/2012, at 9:25 AM, Szczepan Faber wrote:

>> One thing that the new build comparison/migration stuff needs to do is run a 
>> Gradle build and then assemble a description of what the build produced, in 
>> order to compare it.
>> 
>> We're currently using the tooling API for this, but it's a little awkward. 
>> First we run the build using a BuildLauncher. Then we ask for the 
>> ProjectOutcomes model. One problem with this is that it's not entirely 
>> accurate, as the model builder can only guess what the build would have 
>> done. Another problem is that it's potentially quite slow, as we have to 
>> configure the build model twice.
> 
> Can you elaborate on the inaccuracy concern? It feels that some 'guessing' 
> is unavoidable even if we have a single-operation model in the internals of 
> the tooling API.

Consider…

// configureZip changes the zip task's configuration at *execution* time, so
// destinationDir can't be known without actually running the build
task configureZip << {
        zip {
                destinationDir = file("foo")
        }
}

task zip(type: Zip, dependsOn: configureZip) {
}

Because the model can be changed during execution, any model we assemble without 
actually executing the build can be inaccurate.

>> Both these problems would be addressed if we had some way to run the build 
>> and assemble the model in one operation. We have a few options about how we 
>> might model this.
>> 
>> Here are some use cases we want to aim for (for the following, 'build model' 
>> means the version-specific Gradle model):
>> 
>> * Request the Eclipse model: configure the build model, apply the eclipse 
>> plugin, assemble the Eclipse model.
>> * Request the project tasks model: configure the build model, assemble the 
>> project tasks model.
>> * Run a build from the IDE: run the selected tasks, assemble a build result 
>> model (successful/failed).
>> * Run a build for comparison: run the selected tasks, assemble the outcomes 
>> model from the DAG (outcomes model extends build result model above).
>> * Run a build from an integration test: inject classpath from test process, 
>> run the selected tasks, assemble a test build result model.
>> * Configure a build from an integration test: inject classpath from test 
>> process, configure the model, make some assertions, assemble a test build 
>> result model.
>> * Newer consumer fills out missing pieces of a model provided by an older 
>> provider: inject classpath from consumer process, invoke a client-provided 
>> action around the existing behaviour, client action decorates the result.
>> * Create a new Gradle project from the IDE: configure the build model, apply 
>> the build-initialize plugin, run some tasks, assemble a build result model.
>> * Tooling API client builds its own model: inject classpath from client 
>> process, invoke a client-provided action, serialise the result back. This 
>> allows, for example, an IDE to opt in to being able to ask any question of 
>> the Gradle model, but in a version-specific way.
>> 
>> What we want to sort out for the 1.2 release is the minimum set of consumer 
>> <-> provider protocol changes we can make, to later allow us to evolve 
>> towards supporting these use cases. Clearly, we don't want all this stuff 
>> for the 1.2 release. 
>> 
>> Something else to consider is how notifications might be delivered to the 
>> client. Here are some use cases:
>> 
>> * IDE is notified when a change to the Eclipse model is made (either by a 
>> local change or a change in the set of available dependencies).
>> * IDE is notified when an updated version of a dependency is available.
>> * For the Gradle 'keep up-to-date' use case, the client is notified when the 
>> inputs of the target output change.
>> 
>> Another thing to consider here is support for end-of-life for various 
>> (consumer, provider) combinations.
>> 
>> There's a lot of stuff here. I think it pretty much comes down to a single 
>> operation on the consumer <-> provider connection: build request comes in, 
>> and build result comes out.
>> 
>> The build request would specify (most of this stuff is optional):
>> - Client provided logging settings: log level, stdin/stdout/stderr and 
>> progress listener, etc.
>> - Build environment: Java home, JVM args, daemon configuration, Gradle user 
>> home, etc.
>> - Build parameters: project dir, command-line args, etc.
>> - A set of tasks to run. Need to distinguish between 'don't run any tasks', 
>> 'run the default tasks', and 'run these tasks'.
>> - A client-provided action to run. This is probably a classpath, and a 
>> serialised action of some kind. Doesn't matter exactly what.
>> - A listener to be notified when the requested model changes.
>> 
>> The build result would return:
>> - The failures, if any (the failure might be 'this request is no longer 
>> supported').
>> - The model of type T.
>> - Whether the request is deprecated, and why.
>> - Perhaps some additional diagnostics.
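>> 
>> To make that a bit more concrete, the shape might be roughly along these 
>> lines (the names are purely illustrative, not a proposal for the actual 
>> interfaces):
>> 
>> interface BuildRequest {
>>     LoggingSettings getLogging();        // log level, stdin/stdout/stderr, progress listener
>>     BuildEnvironment getEnvironment();   // Java home, JVM args, daemon config, Gradle user home
>>     BuildParameters getParameters();     // project dir, command-line args
>>     TaskSelection getTasks();            // 'no tasks' vs 'default tasks' vs 'these tasks'
>>     ClientAction<?> getClientAction();   // classpath plus serialised action, may be absent
>>     ModelListener getModelListener();    // notified when the requested model changes
>> }
>> 
>> interface BuildResult<T> {
>>     List<? extends Throwable> getFailures();   // empty on success
>>     T getModel();
>>     String getDeprecationReason();             // null if the request is not deprecated
>> }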
> 
> I like this model much better. It's not only more flexible but also feels 
> cleaner than separate APIs for model / build. It should make the 
> implementation simpler (at some point in the future, when we stop supporting 
> old providers).
> 
> I'm not entirely convinced this refactoring is a must-have for 1.2. I'm 
> trying to think whether there is a risk if we stick with what we have in the 
> tooling API at the moment to deliver the migration feature in 1.2. It feels 
> like we should be able to execute this refactoring in 1.3 without a 
> last-minute rush. We still need to support compatibility with the 'old 
> implementation' of the tooling API anyway (e.g. requesting the model 
> separately, running the build separately, etc.).
> 
> I may be wrong, so the above is more of an open question to discuss than an 
> opinion :)

This would mean having different code paths in the migration plugin for 1.2 and 
everything that follows.

>> So, given that we only want a subset of the above for 1.2, we need to come 
>> up with a strategy for evolving. The current strategy is probably 
>> sufficient. We currently have something like this:
>> 
>> <T> T getTheModel(Class<T> type, BuildOperationParametersVersion1 
>> operationParameters);
>> 
>> The provider dynamically inspects the operationParameters instance. So, for 
>> example, if it has a getStandardOutput() method, then the provider assumes 
>> that it should use this to get the OutputStream to write the build's 
>> standard output to.
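>> 
>> In code, that inspection is plain reflection. Roughly (illustrative only; 
>> assumes java.lang.reflect.Method and java.io.OutputStream, and a caller 
>> that is allowed to throw):
>> 
>> OutputStream standardOutputOf(Object operationParameters) throws Exception {
>>     try {
>>         Method getter = operationParameters.getClass().getMethod("getStandardOutput");
>>         return (OutputStream) getter.invoke(operationParameters);
>>     } catch (NoSuchMethodException e) {
>>         // older consumer that doesn't know about this feature: use a default
>>         return System.out;
>>     }
>> }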
>> 
>> This means that an old provider will ignore the build request features that 
>> it does not understand. To deal with this, the consumer queries the provider 
>> version and uses it to decide whether the provider supports a given feature 
>> (the consumer knows in which version a given feature was added).
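>> 
>> On the consumer side that check is just a version comparison, e.g. (where 
>> providerVersion is assumed to be the GradleVersion reported by the 
>> provider):
>> 
>> if (providerVersion.compareTo(GradleVersion.version("1.2")) >= 0) {
>>     // provider understands the new feature, so include it in the request
>> } else {
>>     // drop it silently, or fail fast if the client explicitly asked for it
>> }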
>> 
>> To implement this, I think we want to add a new interface, detangled from 
>> the old interfaces, perhaps something like:
>> 
>> interface ProviderConnection extends InternalProtocolInterface {
>>     <T> BuildResult<T> build(Class<T> type, BuildRequest request);
>> }
> 
> Where 'type' is the model we want to build? And if the build request does 
> not build any model, we supply null?
>  
>> On the provider side, DefaultConnection would implement both the old and new 
>> interface. On the consumer side, AdaptedConnection would prefer the new 
>> interface over the old interface.
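>> 
>> i.e. on the consumer side, roughly (illustrative only; the fallback method 
>> name is made up):
>> 
>> if (delegate instanceof ProviderConnection) {
>>     // new provider: everything goes through the single build() operation
>>     return ((ProviderConnection) delegate).build(type, request);
>> } else {
>>     // old provider: fall back to the existing per-operation protocol
>>     return adaptOldStyleOperations(type, request);
>> }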
> 
> Makes perfect sense.
> 
>> For BuildResult and BuildRequest, we could go entirely dynamic, so that 
>> these interfaces have (almost) no methods. Or we could go static with 
>> methods for the stuff we need now and dynamic for new stuff. I'm tempted to 
>> go dynamic.
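>> 
>> By 'entirely dynamic' I mean something like (purely illustrative):
>> 
>> interface BuildRequest extends InternalProtocolInterface {
>>     // no methods; the provider inspects the implementation reflectively,
>>     // just as it does today with BuildOperationParametersVersion1
>> }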
> 
> I've somewhat got used to static for 'current' and dynamic for 'new', but I 
> think purely dynamic would be cleaner.

-- 
Luke Daley
Principal Engineer, Gradleware 
http://gradleware.com

