On 11/15/06, Steve Loughran <[EMAIL PROTECTED]> wrote:


>
> So a client-server approach with a well-defined interface (e.g. WSDL) is
> a good idea, but something is missing: we don't use files any more but
> URLs.

1. I've never seen a well-defined WSDL interface.
2. You only get transactions with WS-ReliableMessaging and WS-TX, and
nobody implements those.


Forget about SOAP and WSDL. They were just an example, and maybe a wrong
one.

Well-defined URLs are fine with me. (Well-defined means you know exactly
what the search URL is, what parameters it takes, and what response it
gives.)



> Personally, I prefer using file-system repositories for internal
> organization use for their speed and ease of management. The 'useOrigin'
> feature of Ivy 1.4 also helps. Ideally, if I hadn't seen Maven in the
> past, I would not have been thinking about a cache at all. Instead, I
> would have just created a tool to build a classpath on top of file-system
> 'repositories' that creates paths, and the cache would not have been
> needed at all.
>
> Why am I telling you all of this?
>
> What I am trying to say is that we have an inherent conflict between the
> concepts: the 'smart' client-server approach vs. the 'simple/dumb'
> file-system structure with the logic in the clients.
>
> The problem with the file-system approach is that the logic is in the
> client, as I wrote above.

It's a bit like the original Visual SourceSafe tool and early databases,
where everything was on a shared FS. It doesn't scale to many clients or
long-haul connectivity. But HTTP does, because proxies can cache GET calls,
and because most clients do not try to make the remote server look like
something it isn't: a local filesystem.


HTTP and URLs are fine. Just remember the price: handling the client cache.
You need client-side logic to invalidate and refresh the cache, so it is
still not ideal client-server design. (You probably can never achieve ideal
client-server design when you use raw file-system files; the cache in this
case.)



> A possible solution (not trivial at all) is to combine the approaches:
> write a client-server system where the 'server' is not an HTTP server
> but an SMB (Samba) based server. SMB exposes a file-system interface but
> may (I am not sure; need to check this) implement internal logic like
> transactions, locking, search (using dummy paths/files like /proc in
> Linux) and so on, as you can do with an HTTP server and servlets.

SMB is mediocre for long-haul connectivity, and doesn't have a search API
(AFAIK) or offer a transacted API. The Vista filesystem has transaction
support; I don't know whether this is exposed over SMB.

If you want a repository with write access:

0. Stick with URLs. You get caching from proxies, easy browsing and the
ability to fetch from anything with <get>.
1. Use WebDAV.
2. Use a PUT operation to put something new up.
3. Use GET to retrieve.
4. Use DELETE to delete.
5. Use HEAD to poll for changes, or a conditional GET with an
If-None-Match/ETag header.
6. Provide an Atom feed of changes that a client can poll to invalidate
the local cache.
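A minimal sketch of the ETag revalidation idea in point 5, assuming a
pluggable transport so the invalidation logic is visible on its own (all
names here are illustrative, not from Ivy, Maven, or any real client):

```python
# Hypothetical client-side cache that revalidates entries with ETags.
# A real transport would issue a conditional HTTP GET carrying an
# If-None-Match header; here it is any callable with the signature
#   transport(url, cached_etag) -> (status, etag, body)
class EtagCache:
    def __init__(self, transport):
        self.transport = transport
        self.entries = {}  # url -> (etag, body)

    def get(self, url):
        cached = self.entries.get(url)
        cached_etag = cached[0] if cached else None
        status, etag, body = self.transport(url, cached_etag)
        if status == 304:
            # Not Modified: the cached copy is still valid, reuse it.
            return cached[1]
        # New or changed resource: refresh the local cache entry.
        self.entries[url] = (etag, body)
        return body
```

The point of the sketch is that all the invalidation logic lives in the
client, which is exactly the objection raised below: each client needs to
carry (and agree on) this logic, unlike a dumb shared filesystem.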



See my comment above about cache invalidation (= client-side logic, =
incompatibility between versions, = race conditions unless every developer
has their own cache, which wastes disk space and build time).

easyproglife.

-Steve

