On Sun, Mar 16, 2008 at 10:35 PM, Jason van Zyl <[EMAIL PROTECTED]> wrote:
>
>  On 16-Mar-08, at 1:25 PM, Nigel Magnay wrote:
>
>  > Have you got a description of how you think it ought to work?
>  >
>
>  I will do a demo sometime this week at EclipseCon, and I'm happy to
>  share the configuration I have. But it should be as simple as
>  described: one place to go. At least in a corporate environment with
>  100+ users, it's the only way that's workable.
>
>
>  > I quite like the ability to download projects that rely on 3rd
>  > party repos and have them magically work without having to do
>  > anything (which is why I have a distaste for having to go through
>  > a validate-my-settings-and-proxy-don't-break-external-users step
>  > when pushing project changes to outside users).
>
>  Most corporate IT people don't like Maven scurrying off to some
>  unknown repository fetching stuff. I have had users walk up to me and
>  go "what the hell is Maven doing?".
>
>  It is possible to make Maven do pure delegation (though mirrorOf
>  still doesn't work well for snapshots and plugin repositories) and
>  then you can do what Tamas and I call build discovery: while a build
>  is executing, the repository manager can collect every request to a
>  repository. The build could block while you approve each one,
>  automatically adding it to the list of proxied repositories, or you
>  could just cycle through the build, collect them all and then audit
>  them. You could then find the pieces in each of those repositories,
>  download them to your own if you don't want to proxy them, and then
>  completely lock down the outside connections. This stuff needs to be
>  dead simple, as a lot of people don't like Maven crawling around all
>  over the place. So effectively I would encourage no repos in POMs,
>  but we have what we have now, and you need to identify the
>  repositories in the POMs flying through and contain them.
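(For concreteness -- and this is my sketch of the "pure delegation"
setup, not Jason's actual config; the host name is invented -- I take
it to mean a single wildcard mirror in everyone's settings.xml:

  <settings>
    <mirrors>
      <mirror>
        <id>corporate</id>
        <!-- "*" delegates every repository request to the one
             manager; per the caveat above, snapshot and plugin
             repositories don't always follow this cleanly -->
        <mirrorOf>*</mirrorOf>
        <url>http://repo.corp.example/repository/everything</url>
      </mirror>
    </mirrors>
  </settings>

so that every artifact request lands in one place.)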

I get the "central place to go", but I'm still having a hard time
getting why a repository manager couldn't do all that, today, by
acting as an HTTP proxy for all requests. It can look a the URL it's
being requested, and say 'hmm, I cache that repo', or 'sorry, thats
locked down so you can't scurry over there' or even discovering new
repos at build time and adding/denying them as per config. I don't
need a repository id, a mirrorOf, or any more magic than a set of
URLs; HTTP and DNS already have a well-defined architecture for naming
and redirection.
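In settings.xml terms that's nothing more than the standard proxy
block, pointed at the repository manager (host and port here are
invented):

  <settings>
    <proxies>
      <proxy>
        <id>repo-manager</id>
        <active>true</active>
        <protocol>http</protocol>
        <!-- the repository manager sits here as a plain HTTP proxy;
             it sees the real URL of every request and can cache it,
             deny it, or record it for later auditing -->
        <host>proxy.corp.example</host>
        <port>8080</port>
      </proxy>
    </proxies>
  </settings>

The poms stay exactly as they are; the proxy is purely an operational
detail on the client side.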

We have an internal archiva instance. Every time a new repository gets
added to a project, I have to mail everyone: 'hey, update your
settings.xml with this mirror if you are internal and you want stuff
to run anything like fast'. But we also have external users, working
from home. If someone just hacks another repo into the /internal set
in archiva, it breaks all the external users unless they make
completely sure it's also specified in the right place in the project
pom.xmls. This is why I dislike a single URL (/internal) mapping to
several external repos; it's just a recipe for failed builds.
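The fragment I end up mailing around is roughly this (the archiva
host and repository id are examples, not our real ones):

  <!-- one of these per third-party repository a project adds -->
  <mirror>
    <id>archiva-internal</id>
    <!-- must match the <id> of the repository in question; internal
         users get routed to archiva, while external users have no
         mirror at all, so the repo must also be declared correctly
         in the pom or their builds break -->
    <mirrorOf>some-thirdparty-repo</mirrorOf>
    <url>http://archiva.corp.example/archiva/repository/internal</url>
  </mirror>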

Currently, the pom file is the master record of everything about the
build. You seem to be suggesting (if I'm understanding correctly) that
there'd need to be a secondary, parallel configuration stored in the
repo manager for builds to be able to download from 3rd parties. That
seems like a big retrograde step to me.
