OK.

First use case: every developer has their own private cache.
This can work well, but everyone has to download (at least once) all the
artifacts they need.
In addition, it duplicates artifacts and wastes disk space. Usually people
say: space is not an issue today. Maybe. :-/

Second use case: a group of people share a cache.
The truth is that I don't remember whether you suggested in the past not to
share the cache. If so, use the first use case and ignore the rest of this email.

Otherwise, if the cache can be shared, let's imagine the "race condition":

(A race condition means two or more people are trying to perform tasks on the
same resource, and the result depends on the exact (random) order in which they
do it. For example: writing to the same unlocked file. The result is whatever
the last writer wrote, of course. See also:
http://en.wikipedia.org/wiki/Race_condition )
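
To make the "unlocked file" case concrete, here is a tiny sketch (just an
illustration; the file name shared.txt is made up): two writers hit the same
file with no locking, and which content survives depends only on the order the
threads happen to run in.

    import java.io.FileWriter;
    import java.io.IOException;

    // Two writers, one unlocked file: the last writer "wins",
    // and which one that is depends purely on scheduling.
    public class RaceDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> write("version written by A"));
            Thread b = new Thread(() -> write("version written by B"));
            a.start();
            b.start();
            a.join();
            b.join();
            // shared.txt now holds whichever write happened last.
        }

        private static void write(String content) {
            try (FileWriter w = new FileWriter("shared.txt")) { // no lock taken
                w.write(content);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }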

Think what would happen if two people update the cache in parallel. Many
situations can arise. For example: one person is downloading an artifact.
The ivy.xml has been downloaded; the jar, not yet. Another person tries to
use this artifact. What do they see? An ivy.xml without the jar. What will
the second person's Ivy do? Will it also try to download the same jar?

This is just an example. There are probably many other examples.

The idea is simple: when two people share the same resource without a
locking mechanism, you cannot guarantee data integrity (that's why databases
came into the world).
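
Just to illustrate what I mean by a locking mechanism (this is only a sketch of
the general idea, not how Ivy works today; the lock file cache/.lock is made
up): a resolver could take an OS-level file lock on the shared cache before
touching it, so a second user either waits or sees a complete artifact, never a
half-downloaded one.

    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    // Sketch: guard the shared cache with a lock file so that updates
    // (ivy.xml + jar) look atomic to every other cache user.
    public class CacheLockSketch {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile("cache/.lock", "rw");
                 FileChannel channel = raf.getChannel();
                 FileLock lock = channel.lock()) { // blocks until the lock is free
                // download ivy.xml and the jar here, while holding the lock
            } // the lock is released automatically when the channel is closed
        }
    }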

I was talking about the second use case in my previous mail.

The first use case (one person = one cache) is fine but wastes time and
disk space.


Does this explain my previous mail?

easyproglife.

On 11/15/06, Xavier Hanin <[EMAIL PROTECTED]> wrote:

On 11/15/06, easyproglife <[EMAIL PROTECTED]> wrote:
>
> Xavier,
>
> As I wrote to Steve, the HTTP ideas are great! A servlet is also a great
> idea compared to the current parsing of the Apache web server list response.
>
> But the price!
>
> The price is in the cache. You still would have to manage the cache with
> client-side logic, different ivy versions and race conditions.


I'm sorry but I do not see what you mean. I don't even know what race
conditions mean :) Forgive my ignorance, could you please give more details
on what you mean.

Xavier

> Isn't this an inherent issue when trying to combine a dumb file system (with
> client logic) and a web server (with internal logic)?
>
> easyproglife.
>
> On 11/15/06, Xavier Hanin <[EMAIL PROTECTED]> wrote:
> >
> > On 11/15/06, Steve Loughran <[EMAIL PROTECTED]> wrote:
> > >
> > > If you want a repository with write access,
> > >
> > > 0. Stick with URLs. You get caching from proxies, easy browsing and the
> > > ability to fetch from anything with <get>
> > > 1. use WebDAV.
> > > 2. use a PUT operation to put something new up
> > > 3. use GET to retrieve
> > > 4. use DELETE to delete
> > > 5. use HEAD to poll for changes, or GET with etag header
> > > 6. provide an atom feed of changes that a client can poll to invalidate
> > > the local cache.
> >
> >
> > This is a very clean approach, I would only add something for searching
> > (I'm not familiar with GET with etag header, but it should fit the search
> > need too).
> >
> > BTW, have you guys heard of archiva http://maven.apache.org/archiva/?
> > There are a lot of good ideas there, even if it's just too maven focused
> > to be really useful for us directly.
> >
> > Another point, FYI, we have developed at jayasoft a very simple
> > Servlet/DependencyResolver pair, to avoid using apache directory listing
> > (which is very slow) to find the last version of a module, but instead ask
> > the servlet (with a simple GET) which one it is. It's a very basic
> > implementation of part of what you're suggesting, and I think I could
> > easily share its code with you if some are interested (but it was the very
> > first servlet development done by the person who did it, two years ago, so
> > the code is not very clean, but it works and can be a basis for this kind
> > of development).
> >
> > Xavier
> >
> > -Steve

