On 11/8/06, Steve Loughran <[EMAIL PROTECTED]> wrote:
<snip>

I understand that everyone wants stable, flawless metadata, but it
doesn't happen.


It depends. For a private repository limited to a company's use cases, you
can have stable metadata. And it may be a strict requirement.

With what we have today, the tools' caches stay frozen from the moment they
do their first fetch, so if things change, old machines never pick up the
change. Case in point: some of our machines here had a commons-logging that
pulled in log4j and logkit, even though those dependencies are now marked
as optional in the pom. As a result, different machines build differently.
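
For reference, "marked as optional in the pom" means the dependency is
declared with the optional flag, roughly like this (coordinates and version
here are illustrative only):

    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.12</version>   <!-- version is a placeholder -->
      <optional>true</optional>   <!-- the flag that was added later -->
    </dependency>

Machines with an older cached copy of the pom don't see that flag, so their
resolvers keep pulling log4j and logkit in.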


As soon as you accept metadata changes without restriction (or even artifact
changes) and do not check the repository on every build, two machines may
produce different builds. The only (major) difference is how often it
happens. In my opinion (and I have had the opportunity to discuss this with
several Ivy users), build reproducibility is so important that it's better
to deal with flaws (by releasing a new version, for instance, as Antoine
suggests) than to fix published metadata in place. Unless, that is, we find
a way to ensure that some kinds of metadata updates do not impact build
reproducibility (and I think that's possible).
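
To make the trade-off concrete: a resolver can be told to re-check the
repository's metadata on every resolve, along these lines in an Ivy
configuration (a sketch only, attribute names from memory):

    <resolvers>
      <!-- checkmodified asks Ivy to compare cached metadata against the
           repository instead of trusting the cache forever -->
      <ibiblio name="public" m2compatible="true" checkmodified="true"/>
    </resolvers>

You then always see the latest metadata, but two machines resolving at
different times can get different results, which is exactly the
reproducibility problem described above.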

We need to recognise that infallibility-of-metadata is an unrealistic
ideal and adapt to it.


I strongly agree; where my point of view differs is in how we adapt to it.

Xavier
