On Thu, Sep 23, 2010 at 6:52 PM, Peter Donald <pe...@realityforge.org> wrote:

> On Fri, Sep 24, 2010 at 9:08 AM, Alex Boisvert <alex.boisv...@gmail.com>
> wrote:
> > Hi Donald,
>
> It is Peter :)
>

Ah!  Damn.  Sorry about that.  My fingers sometimes have a life of their own
:-\


> > 2) adding download_from to an Artifact;  this could be useful if only
> > one/some of many artifacts come from a different repo and we don't want
> > to pay the latency tax by querying this repo for all artifacts.
>
> Precisely the use case. Several of the dependencies I am working with
> are located in only one repository so some of the projects I have
> written with buildr have ~14 repository definitions.
>

Looks like we're good here.


> > As for mirror_to, it seems to be largely duplicated by
> > repositories.remote or download_from (assuming we add it).  Given the
> > download information is typically in the same buildfile, I'm not sure
> > when this would be useful.
>
> The main thing it is useful for is automating mirroring. So we tend to
> have a local web server repository that has all the artifacts mirrored
> from the internet. I agree it is probably not as useful as the other
> features given that some people manage their repositories using things
> like nexus.
>

Ok, let's see if groups can give us this easily (see below); otherwise let's
revisit it afterwards.


> > Stepping back a little bit, I'm wondering if adding metadata to
> > artifacts is a good approach.  The alternative is to place artifacts in
> > arrays or hashes and manage these as sets, e.g.,
> >
> > public_artifacts = [ list, of, artifacts, to, publish, to, public, repos ]
>
> While this is possible I tend to store all dependencies in the
> build.yaml file. This format is much more amenable to machine reading
> and processing. So if there was a way to easily define groups of sets
> of artifacts in this file then I could definitely be convinced to use
> this approach.
>

I haven't looked into this, but I agree we should try to find a way to define
additional information on artifacts via build.yaml.  Do you want to look
into it and suggest something?
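For concreteness, here's a strawman of what that could look like.  The
`artifacts:` layout mirrors how people already declare specs in build.yaml,
but the `groups:` key and all the names below are purely hypothetical, not
an existing feature:

```yaml
# build.yaml (sketch)
artifacts:
  mydep:    com.example:mydep:jar:1.0
  otherdep: com.example:otherdep:jar:2.1

# hypothetical: named sets of artifacts the buildfile could reference,
# e.g. to publish one group publicly and keep the other internal
groups:
  public:   [mydep]
  internal: [otherdep]
```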

> > # This seemed to be another of your use cases
> > task :replicate_artifacts do
> >  artifacts.each do |a|
> >    a.download :repository => 'http://bigco.com/repo'  # not available today
> >    a.upload :url => 'http://example.com/repo', :username => 'foo'
> >  end
> > end
>
> I could imagine an approach like this being useful for packages - not
> so sure about artifacts if they are managed in build.yaml. My instinct
> would be to add them to the artifact base class or mixin (IIRC
> ActsAsArtifact) so you could do something like the following
>
> artifact(:mydep).download_from('http://example.com/internal')
>
> project 'foo' do
>   compile.with :mydep
>   package(:jar).upload_to('http://example.com/repo')
> end
>

Yep.

And just to be clear, we need both:  1) a method to upload now and 2) a
method to tell the artifact where to upload (later).  Both are useful.
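To illustrate the split, here is a minimal plain-Ruby sketch (not Buildr's
actual implementation; the module name and internals are made up):
upload(url) acts immediately, while upload_to(url) only records the target
so the transfer can happen later in the build lifecycle.

```ruby
module ActsAsUploadable
  attr_reader :uploaded, :upload_targets

  # 1) upload now: push to the given repo immediately
  def upload(url)
    perform_upload(url)
  end

  # 2) upload later: remember where to publish when the time comes
  def upload_to(url)
    (@upload_targets ||= []) << url
    self  # allow chaining, e.g. package(:jar).upload_to(...)
  end

  # called at the end of the build for any recorded targets
  def flush_uploads
    (@upload_targets || []).each { |url| perform_upload(url) }
  end

  private

  # stand-in for the real HTTP/SFTP transfer
  def perform_upload(url)
    (@uploaded ||= []) << url
  end
end
```

With that split, upload_to inside a project block merely declares intent,
and the actual transfer happens when the corresponding task runs.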


>
> > Have you considered this approach?  I'd be curious to hear if/why you
> think
> > using metadata is a better way to go.
>
> I would be reasonably happy with that approach if you could define
> groups of artifacts in build.yaml
>
> I guess the main reason I was looking at metadata attributes is that
> there is lots of other information that I want to store against an
> artifact so I could automate other parts of the build process.
>

I'm open to adding metadata to artifacts (e.g., groups or tags), although I
would prefer to standardize common idioms.  At this point, download_from()
and upload_to() seem like they should be first-class idioms on artifacts.

I would also prefer to keep snapshot_to as a global setting, similar to
release_to.  I'm not opposed to using arrays where possible to allow
multiple repositories, although the methods should also accept a single repo
as a convenience.
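That convenience is cheap to provide; a hypothetical setter (class and
attribute names invented for illustration) can use Kernel#Array to wrap a
single value and pass an array through unchanged:

```ruby
class RepoSettings
  attr_reader :release_to

  # accepts either one repo URL or an array of them,
  # normalizing to an array internally
  def release_to=(repos)
    @release_to = Array(repos)
  end
end
```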

How does that sound?

alex
