Nicola Ken Barozzi wrote:


Stephen McConnell wrote: ...

Going the direction of multiple gump files means invoking a build multiple times. This is massively inefficient - do one build to generate the classes directory and a jar file, do another build to run the testcase,


You call it inefficiency, I call it *safe* separation of builds. I don't see the problem here. Note that I'm talking about *Gump*, not about a build system that also uses Gump metadata.

but then when you need the above information for generation of a build artifact - well - you sunk. You cannot do it with gump as it is today.


I don't understand this last sentence.

Sorry ... s/you/you're

Basically the issue is that a gump descriptor is designed around the notion of a single path (path as in the ant path concept used for classpath construction). When dealing with the construction of information for a plugin scenario you need to run a test case using a different classpath from the build cycle. The test scenario will use information generated from information about the API, SPI and implementation classpaths - but hang on - gump is only providing us with a single classpath. All of a sudden you're faced with the problem of building artifacts bit by bit across successive gump runs.


The solution is to do to gump what Sam did to the Ant community .. he basically said - "hey .. there is an application that knows more about the classpath information than you do" - and from that intervention ant added the ability to override the classloader definition that ant uses.

Apply this same logic to gump - there is a build system that knows more about the class loading requirements than gump does - and gump needs to delegate responsibility to that system - just as ant delegates responsibility to gump.


It doesn't make sense. You mean that one should delegate

buildsystem -> CI system -> buildsystem

I'm saying that products like magic and maven know more about the classloader criteria than gump does. Just as ant delegates the responsibility of classpath definition to gump, so should gump delegate responsibility to applications that know more about the context than gump does.


E.g.

|------------|      |---------------|     |-------------|
| gump       | ---> | magic         | --> | project     |
|            | <--- |               |     |-------------|
|------------|      |---------------|
|            |
|            |      |---------------|     |-------------|
|            | ---> | ant           | --> | project     |
|            | <--- |---------------|     |-------------|
|------------|

... and the only difference here between ant and magic is that magic knows about multi-staged classloaders (see below) and multi-mode classpath policies (where multi-mode means different classloaders for build, test and runtime).

?

Gump took away the responsibility from the build system, why should it give it back?

Because just as gump knows more about the context than ant, magic (or maven) knows more about the context than gump.



I.e. gump is very focused on the pure compile scenarios and does not deal with the realities of test and runtime environments that load plugins dynamically.


You cannot create fixed metadata for dynamically loaded plugins (components), unless you decide to declare them, and the above is sufficient.


Consider the problem of generating the meta data for a multi-staged classloader


What's a 'multi-staged classloader'?

|-----------------------|
| bootstrap-classloader |
|-----------------------|
            ^
            |
|-----------------------|
|    api-classloader    |
|-----------------------|
            ^
            |
|-----------------------|
|    spi-classloader    |
|-----------------------|
            ^
            |
|-----------------------|
|   impl-classloader    |
|-----------------------|

The api classloader is constructed by a container and is typically supplied as a parent classloader for a container. The spi classloader is constructed as a child of the api loader and is typically used to load privileged facilities that interact with a container SPI (Service Provider Interface). The impl classloader is private to the application managing a set of pluggable components.
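For illustration, the parent-delegation chain can be sketched with plain JDK classloaders (the empty URL arrays here are placeholders - a real build would supply the jar URLs that gump or magic resolved for each stage):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class StagedClassLoaderDemo {
    public static void main(String[] args) {
        // Placeholder: real builds would supply per-stage jar URLs here.
        URL[] empty = new URL[0];

        // Delegation runs impl -> spi -> api -> bootstrap, so API types
        // are shared across the chain while impl types stay private.
        ClassLoader bootstrap = ClassLoader.getSystemClassLoader();
        ClassLoader api  = new URLClassLoader(empty, bootstrap);
        ClassLoader spi  = new URLClassLoader(empty, api);
        ClassLoader impl = new URLClassLoader(empty, spi);

        System.out.println(impl.getParent() == spi);       // true
        System.out.println(spi.getParent() == api);        // true
        System.out.println(api.getParent() == bootstrap);  // true
    }
}
```

The point of the staging is exactly this parent relationship: a class loaded by the impl loader can see API and SPI types, but nothing above it can see the implementation.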


containing API, SPI and IMPL separation based on one or multiple gump definitions ..


A classloader containing 'separation'?

Sure - think of it in terms of:

   * respectable
   * exposed
   * naked

The API is respectable, an SPI is exposed, and the impl - that's getting naked.

you could write a special task to handle phased buildup of data,


'Phased buildup'?

Using gump as it is today on a project-by-project basis would require successive gump runs to build up "staged" classpath information - because of the basics of gump - a project is a classpath definition. A staged classloader is potentially three classloader definitions (in gump terms). In magic terms it's just one. Mapping gump to magic requires three gump projects to generate one of the multiple artifacts created in a magic build. I.e. gump does not mesh nicely with the building and testing of plugin-based systems.


Plugin-based systems absolutely need a good repository system.


and another task to consolidate this, and progressively - over three gump build cycles - you could produce the meta-data. Or, you could just say to magic - <artifact/> - and if gump is opened up a bit .. the generated artifact will be totally linked in to gump-generated resources


'Linked in to gump generated resources'?

Gump generates stuff .. to build the meta-data to run tests I need to know the addresses of gump-generated content. I.e. I need to link to gump-generated resources.



- which means that subsequent builds that are using the plugin are running against the gump content.


You totally lost me here.

Imagine you have a project that has the following dependencies:

   * log4j (runtime dependency)
   * avalon-framework-api
   * avalon-framework-impl (test-time dependency)
   * avalon-meta-tools (plugin)

Imagine also that this project generates a staged classloader descriptor used within the testcase for the project. To do a real gump assessment, the avalon-meta-tools meta-data descriptor needs to be generated to reference gump-generated jar paths. The avalon-meta-tools jar itself is not a compile, build or runtime dependency ... it's just a tool used to generate some meta-information as part of the build process. The avalon-framework-impl dependency is not a runtime dependency because it is provided by the container that will run the artifact produced by this build - but it is needed to compile and execute unit tests. When the test system launches, it loads meta-data created by the avalon-meta-tools plugin, and loads the subject of this build as a plugin. All in all there are something like six different classpath definitions flying around here.
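A rough sketch of the separate classpath roles in play (the jar names are hypothetical stand-ins for the gump-generated paths, not actual artifact locations):

```java
import java.util.Arrays;
import java.util.List;

public class ClasspathRoles {
    public static void main(String[] args) {
        // Hypothetical jar names standing in for gump-generated artifact paths.
        List<String> compile = Arrays.asList(
            "avalon-framework-api.jar", "avalon-framework-impl.jar");
        List<String> test = Arrays.asList(
            "avalon-framework-api.jar", "avalon-framework-impl.jar", "log4j.jar");
        List<String> runtime = Arrays.asList(
            "avalon-framework-api.jar", "log4j.jar");
        List<String> plugin = Arrays.asList("avalon-meta-tools.jar");

        // impl is needed to compile and test, but the container supplies
        // it at runtime, so it is absent from the runtime path ...
        System.out.println(runtime.contains("avalon-framework-impl.jar"));
        // ... and the plugin jar never sits on the project's own paths at all.
        System.out.println(compile.contains("avalon-meta-tools.jar"));
    }
}
```

A single gump classpath cannot express these distinctions - each role above would have to become its own gump project definition.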

I.e. getting lost is a completely reasonable feeling!

;-)


The point is that gump build information is not sufficiently rich when it comes down to really using a repository in a productive manner when dealing with pluggable artifacts (and this covers both build and runtime concerns). How does this affect Depot? Simply that gump project descriptors should be considered an application-specific descriptor - not a generic solution.


Sorry, I don't understand.

The thing is that a repository "to me" is the source of deployment solutions. The definitions of those solutions can be expressed in meta-data (and the avalon crew have got this stuff down pat). The source of that meta-data can be published meta-data descriptors, or descriptors that are dynamically generated in response to service requests. Either way - the underlying repository is a fundamental unit in the deployment equation - and the language between the client and the repository is above all a classloader subject.


Hope that helps.

Cheers, Steve.


--

|---------------------------------------|
| Magic by Merlin                       |
| Production by Avalon                  |
|                                       |
| http://avalon.apache.org              |
|---------------------------------------|
