Basically we need some sort of handling of local repositories such
that any number of processes can co-exist and read/write to the local
repo.
Some related issues are:
http://jira.codehaus.org/browse/MNG-2802
http://jira.codehaus.org/browse/MNG-3379
One proposal was:
http://docs.codehaus.org/display/MAVEN/Local+repository+separation
This starts to talk about concurrent access, and in some ways it helps to
limit the problem, but the basic semantics of locking against the local
repo are necessary whether you separate the contents of the local repo
into sub-sections or not. I'm frankly just as happy to have a
little .lock file (or files) that's written and checked where
appropriate, with a cleanup command for stale locks.
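The .lock file idea above can be sketched with java.nio file locks. A nice property is that an OS-level FileLock is released automatically when the owning process exits, which removes most of the need for a stale-lock cleanup command. This is only a minimal sketch under that assumption; the RepoLock class and the "repo.lock" file name are invented for illustration, not Maven code:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class RepoLock {
    // Block until we hold an exclusive OS-level lock on the given .lock file.
    // The lock is released when the channel closes, or automatically if the
    // process crashes, so no manual stale-lock cleanup is required.
    public static void withLock(Path lockFile, Runnable critical) throws IOException {
        try (FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = ch.lock()) {
            critical.run(); // read/write the local repo safely here
        }
    }

    public static void main(String[] args) throws IOException {
        // "repo.lock" is a hypothetical lock-file name for illustration
        withLock(Paths.get("repo.lock"),
                () -> System.out.println("in critical section"));
    }
}
```

Note that FileLock is advisory: it only works if every process that touches the repo goes through the same locking protocol.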
I suppose another option would be a very lightweight local repo server
so all activity against the local repo is "managed", but this is
probably overkill.
regards,
Christian.
On 9-Sep-08, at 11:40 , Oleg Gusakov wrote:
Looks like this problem has two use cases - multiple builds
interacting with the local repo:
1) writing new artifacts
2) downloading remote artifacts
In either case they race for metadata, which has to be addressed; in
#2 they may also race for the actual artifacts, which is addressable
by the Jetty transactional client plus a solution for metadata.
I have been researching various approaches to metadata (http://jira.codehaus.org/browse/MERCURY-5
), and am biased towards creating a local index for storing local
metadata. In this case the Mercury client is immune to the problem
because of the Lucene involvement, but due to the necessity to be
backward compatible, Mercury still needs to deal with metadata.xml
updates. The straightforward solution seems to be GA-level locking
plus a utility to clean all the locks in case a client crashes. A
more sophisticated approach might be a local queue, but I don't know
if it's worth the effort.
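For what GA-level locking could look like, here is a hedged sketch: one lock file per groupId:artifactId directory in the local repo, guarding metadata.xml rewrites. The lockPathFor/withGaLock names and the ".ga-lock" file name are invented for illustration; Mercury's actual design may differ. Using OS file locks also sidesteps the cleanup utility, since the locks vanish when a crashed process exits:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GaLock {
    // Hypothetical mapping: one lock file per groupId:artifactId,
    // stored in that artifact's directory in the local repo.
    static Path lockPathFor(Path localRepo, String groupId, String artifactId)
            throws IOException {
        Path dir = localRepo.resolve(groupId.replace('.', '/')).resolve(artifactId);
        Files.createDirectories(dir);
        return dir.resolve(".ga-lock");
    }

    // Serialize metadata updates for one GA across processes.
    static void withGaLock(Path localRepo, String g, String a, Runnable update)
            throws IOException {
        try (FileChannel ch = FileChannel.open(lockPathFor(localRepo, g, a),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = ch.lock()) {
            update.run(); // e.g. rewrite maven-metadata-local.xml here
        }
    }
}
```

Locking at the GA level rather than the whole repo lets unrelated builds proceed in parallel; only builds touching the same artifact's metadata serialize.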
Does anyone know about any unit or integration tests associated with
this problem? I will definitely add them to the Mercury ITs.
Thanks for raising this!
Oleg
Brian E. Fox wrote:
Oleg, the issue is with the local repo, not with the remote ones.
Basically there is no locking on reads/writes to the local repo, so
if you have multiple builds that potentially touch the same metadata,
you've got a problem. Mercury could potentially deal with the race
condition where two builds ask for the same file to be downloaded,
but it won't be able to help with issues where things are installed
at the same time.
-----Original Message-----
From: Oleg Gusakov [mailto:[EMAIL PROTECTED] Sent:
Tuesday, September 09, 2008 1:32 AM
To: Maven Developers List
Subject: Re: wagon write locking / synchronization in 2.1 or 2.2?
Christian,
I am glad to see this question as I have the answer :)
Fine people from Jetty community helped us with the Mercury project
(http://docs.codehaus.org/display/MAVEN/Mercury) and provided Jetty
client with transactional support. The client accepts a set of
files to upload or download and guarantees that all of them (or
none) are processed. And Mercury Repository implementation relies
on this client for all the up/down interactions.
The only problem will be verifying that I use this client properly
to address the issue. Can you please provide any links on this
"big fat hairy" problem? Do you know if there are any unit/IT tests
that check for it? I will definitely search for them, but since you
are asking, you probably know where to find the ends.
Thanks a lot,
Oleg
Christian Edward Gruber wrote:
Hi all,
Looking at the release plans for 2.1 and 2.2, I don't see
anything about addressing the big fat hairy race condition that
puts multiple Maven builds on the same local repository at risk of
corrupting that repo. There have been a few
proposals and several JIRAs, and it sort of keeps parallel
execution of unrelated builds from working well without doing
things like specifying tons of local repositories. Any chance one
of those proposals could make it into 2.1 or 2.2?
If not, would it at least be possible to shorten the race by
having the wagon download things into a unique temporary directory
and then doing a quick rename once the download is complete? That
should reduce the problem, even if it doesn't solve it entirely.
regards,
Christian.
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]