Simon Kitching wrote:

On Tue, 2004-05-04 at 04:53, Craig McClanahan wrote:


robert burrell donkin wrote:



On 26 Apr 2004, at 00:26, Simon Kitching wrote:



On Sun, 2004-04-25 at 15:00, Gary Gregory wrote:



Or:

You release commons-all.jar with a pruner. This pruner would base its
input on Clover output or on manual input to create a smaller
what-my-app-needs-out-of-commons.jar.


Like some other posters, I understand that people have some problems
with commons being composed of a dozen separate projects. However, I
don't see the "one commons jar" approach as being feasible. I can't
imagine how releases would be synchronized, how unit tests would be
applied to the combined code, etc.


there's nothing technically unfeasible about this. (gump does this and more :)

one of the good rules we have is that each commons release should only depend on previously released code, so it should just be a case of re-rolling the big jar and creating a new big-commons release each time any component is released.
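
just to give an idea of the scale of the mechanical part, here's a rough sketch of the re-roll in plain java. the component jar names are placeholders, not real releases, and a real build would drive this from ant rather than hand-rolled code:

import java.io.*;
import java.util.*;
import java.util.jar.*;

// rough sketch only: copy the entries of already-released component jars
// into a single commons-all.jar, keeping the first copy of any duplicate
// entry (e.g. META-INF/MANIFEST.MF turns up in every jar).
public class BigJarRoller {
    public static void main(String[] args) throws IOException {
        // placeholder names; a real run would list the latest release of each component
        String[] components = { "commons-foo-1.0.jar", "commons-bar-2.1.jar" };
        Set seen = new HashSet();
        JarOutputStream out = new JarOutputStream(new FileOutputStream("commons-all.jar"));
        byte[] buf = new byte[8192];
        for (int i = 0; i < components.length; i++) {
            JarFile jar = new JarFile(components[i]);
            Enumeration entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = (JarEntry) entries.nextElement();
                // skip directories and any entry already copied from an earlier jar
                if (entry.isDirectory() || !seen.add(entry.getName())) {
                    continue;
                }
                out.putNextEntry(new JarEntry(entry.getName()));
                InputStream in = jar.getInputStream(entry);
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
                in.close();
                out.closeEntry();
            }
            jar.close();
        }
        out.close();
    }
}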



If you want to play with exactly this concept, check out the build.xml script in [combo]. It was designed to make a "pick the latest published release of all the constituent libraries" policy very easy to accomplish -- all you need is to update the CVS tag to pull as new releases are made. In addition, someone who wanted a different set of versions could easily override the CVS tags themselves.

There are some wrinkles yet to be worked out, and I don't have time at the moment to focus on it, but there's a starting point for someone interested. If for no other reason, consolidated Javadocs for all of commons (one of the outputs of this script) is very useful.



IMHO the main obstacle is organizational. people have enough difficulty with the current release process without adding to it. unless someone's willing to step forward and volunteer to manage the organizational side of the big commons jar (including release management) then it's not really worth wasting time on. (craig started something similar i'd guess over a year ago now but no one was really interested enough in the idea to push it forward.) i'd be happy to give a hand on the integration side but i don't cut jakarta releases any more (so i'm not willing to take this one on).



I think the mechanical aspects of building a combination JAR can be dealt with. What I don't think we have in place at all is any sort of unit or system tests to ensure that version X of library FOO works with version Y of library BAR. Therefore, I'd be cautious about declaring anything about a release of [combo] other than "here is a particular combination of Commons libraries, conveniently packaged in one JAR for you."



That's my concern. If collections releases 3.0, we can't just roll a new "commons-all" jar without testing that projects like beanutils or digester work with this new collections release. But how on earth do we do that? It seems to me that the unit tests for every project that depends upon the newly released project would need to be run - at the very least - in order to provide any guarantees that the "commons-all" release is stable. Does the "combo" project do this? If not, is it really the kind of release that Jakarta wants to put its name to?



I'd worry about this more if I believed that projects that ship multiple commons JARs today (like Tomcat or Struts) do any interoperability testing beyond making sure that their own uses of the underlying libraries were successful. It doesn't seem unreasonable to offer a convenience package whose contents are a specifically listed set of commons JARs ... the functional result of using this JAR should match the functional result of using all the individual JAR files themselves.

If we're still concerned about compatibility testing, another way to look at [combo] is as a tool for people to build their own custom combinations, rather than as something we use to ship a combined package ourselves.

Gary Gregory's "pruner" suggestion seems like a possible way forward to
me. The pruner could takes a configuration file that specifies a set of
classes. It would then run against a set of jars and extract the
specified classes plus all their dependencies into a new jar file.
Jakarta could publish a set of configuration files for various purposes.
Running the tool against a single jar, or against a project's jars + its
officially-supported dependencies would guarantee a functioning result.
Running the tool against the complete set of latest releases of all
projects might not result in a jar file that is valid (eg collections
3.0 + classes that depend on collections-2.x). The one issue it does
*not* address is generation of javadoc for the resulting jar. However
such a tool should be simple for users to use. We *could* potentially
automate its use to build a set of customised releases, but I would
certainly not be in favour of this for the "cross-project" bundles, due
to the inability to run the necessary unit tests as described above.
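
To make that concrete, here is only a rough sketch of the shape such a
tool might take. All the file and class names are placeholders, and the
dependency-walking step is stubbed out -- a real pruner would need
genuine bytecode analysis (BCEL or similar), which is exactly the part
hand-waved here:

import java.io.*;
import java.util.*;
import java.util.jar.*;

// Skeleton of the pruner idea: read a list of root classes from a config
// file, walk their static dependencies transitively, and copy just those
// .class entries from the source jars into a smaller output jar.
public class Pruner {

    public static void main(String[] args) throws IOException {
        // args[0] = config file (one fully qualified class name per line)
        // remaining args = source jars, e.g. commons-collections.jar commons-beanutils.jar
        List roots = readClassList(args[0]);
        JarFile[] jars = new JarFile[args.length - 1];
        for (int i = 1; i < args.length; i++) {
            jars[i - 1] = new JarFile(args[i]);
        }

        // transitive closure over the (stubbed) dependency relation
        Set keep = new HashSet();
        LinkedList todo = new LinkedList(roots);
        while (!todo.isEmpty()) {
            String cls = (String) todo.removeFirst();
            if (keep.add(cls)) {
                todo.addAll(directDependenciesOf(cls, jars));
            }
        }

        // copy the selected classes into the pruned jar
        JarOutputStream out = new JarOutputStream(
                new FileOutputStream("what-my-app-needs-out-of-commons.jar"));
        byte[] buf = new byte[8192];
        for (Iterator it = keep.iterator(); it.hasNext();) {
            String entryName = ((String) it.next()).replace('.', '/') + ".class";
            for (int i = 0; i < jars.length; i++) {
                JarEntry entry = jars[i].getJarEntry(entryName);
                if (entry == null) {
                    continue;
                }
                out.putNextEntry(new JarEntry(entryName));
                InputStream in = jars[i].getInputStream(entry);
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
                in.close();
                out.closeEntry();
                break;
            }
        }
        out.close();
    }

    // Placeholder: a real implementation would read each class's constant
    // pool (via BCEL or raw class-file parsing) and return the classes it
    // statically references.
    private static List directDependenciesOf(String className, JarFile[] jars) {
        return Collections.EMPTY_LIST;
    }

    private static List readClassList(String configFile) throws IOException {
        List classes = new ArrayList();
        BufferedReader reader = new BufferedReader(new FileReader(configFile));
        String line;
        while ((line = reader.readLine()) != null) {
            line = line.trim();
            if (line.length() > 0 && !line.startsWith("#")) {
                classes.add(line);
            }
        }
        reader.close();
        return classes;
    }
}

It would be run as something like "java Pruner my-classes.txt
commons-collections.jar commons-beanutils.jar".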


What do people think about the "pruner" suggestion?



I can see how pruner might be useful for static dependencies (i.e. import statements). What do you do about dynamic dependencies (for example, a class loaded by ClassLoader.loadClass() whose name is only known because it's in a configuration file)? And how does this do any better at dealing with inter-library compatibility testing? Compiles-clean isn't really good enough -- and Gump can already tell us that.
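
Here's a deliberately contrived illustration of what I mean -- the property name and file are made up, but the pattern shows up all over commons:

import java.io.FileInputStream;
import java.util.Properties;

// The implementation class is named only in a configuration file, so there
// is no import or other static reference for a pruner to follow.
public class DynamicLoadExample {
    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        config.load(new FileInputStream("app.properties"));
        String implName = config.getProperty("parser.implementation");
        Class implClass = Thread.currentThread().getContextClassLoader().loadClass(implName);
        Object parser = implClass.newInstance();
        // ... use parser; if the pruner worked from static references alone,
        // the class named in app.properties was stripped from the jar and
        // this fails at runtime.
    }
}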

Regards,

Simon



Craig

