In a recent project, I built a distributed PostgreSQL database using transactions
and a rather interesting InvocationHandler implementation that creates a mesh
network between all participants so that everyone sees every change.
From a participant's perspective, there are zero or more client displays that
show and dynamically update views of the data, there are database hosts, and
there are external servers that use the API calls to change in-core versions of
the persistent data.
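
To give a rough idea of the shape of that handler, here is a minimal sketch of a
proxy that applies each call locally and then broadcasts it to every other
participant. The Peer type and its replay() transport are hypothetical stand-ins
for whatever messaging you use; this is not the actual code from that project.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.List;

// Hypothetical peer abstraction: some transport that can replay a
// method invocation against a remote participant's copy of the data.
interface Peer {
    void replay(String methodName, Object[] args) throws Exception;
}

// Mesh-replicating handler: every call is applied to the local delegate
// and then forwarded to all other participants so each sees the change.
class MeshInvocationHandler implements InvocationHandler {
    private final Object localDelegate;
    private final List<Peer> peers;

    MeshInvocationHandler(Object localDelegate, List<Peer> peers) {
        this.localDelegate = localDelegate;
        this.peers = peers;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // Apply the change locally first.
        Object result = method.invoke(localDelegate, args);
        // Then broadcast the same invocation to the rest of the mesh.
        for (Peer p : peers) {
            p.replay(method.getName(), args);
        }
        return result;
    }

    // Convenience factory that wraps a local object in the mesh proxy.
    @SuppressWarnings("unchecked")
    static <T> T meshProxy(Class<T> iface, T localDelegate, List<Peer> peers) {
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(),
            new Class<?>[] { iface },
            new MeshInvocationHandler(localDelegate, peers));
    }
}

The real thing obviously has to worry about ordering, failures mid-broadcast, and
calls arriving from peers, which is where the transaction machinery comes in.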
The transaction rate, because all participants are on a local network, is quite
good. But there are some issues with how transaction manager failures play out
(including some of the bugs Patricia has found whose cause we had not managed to
track down) that make it a bit fragile for continued use.
I'd personally very much like to see TransactionManager become the focus of some
effort to finish making its behavior dependable and consistent for a
single-process service.
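
For anyone not familiar with where that manager sits, the client-side shape of
the standard Jini transaction API (Mahalo being River's implementation) is
roughly the following. This is just a sketch of the general API usage as I
understand it, with lookup of the manager and real error handling omitted, not
anything specific to the project above.

import net.jini.core.transaction.Transaction;
import net.jini.core.transaction.TransactionFactory;
import net.jini.core.transaction.server.TransactionManager;

// Hypothetical helper just to show the flow; the TransactionManager
// proxy (mgr) is assumed to have come from lookup discovery.
class TxnSketch {
    static void runUnderTransaction(TransactionManager mgr) throws Exception {
        Transaction.Created created =
            TransactionFactory.create(mgr, 10000L); // 10-second lease on the txn
        Transaction txn = created.transaction;
        try {
            // ... operations against services that join the transaction ...
            txn.commit();  // drives two-phase commit across all joined participants
        } catch (Exception e) {
            txn.abort();   // partial failures in the mesh show up here or in commit()
            throw e;
        }
    }
}

It is the failure paths around commit() and abort() across multiple joined
participants where the fragility I mentioned shows up.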
Once you try to maintain a distributed view of the same data across multiple
systems, you get all the problems of partial failure standing in the way of
successfully completing each transaction. Dan Creswell and I have had many a
discussion about how it is often easier to build a "better piece of hardware"
than to "distribute a software system"; many times better hardware is the better
choice. When you look at Hadoop and Google's use of "cheap hardware", you can see
how the line can be drawn in the sand at some point: just provide limited
functionality and use "guesses" to move forward.
Gregg Wonderly
On 2/9/2011 5:28 PM, Jeff Ramsdale wrote:
+1. The lack of partitioning and fault-tolerance is exactly what's
keeping my current employer from using Outrigger, though they'd love
to. They do use Jini, though, so they'd be an easy sell if such a
thing were available.
-jeff
On Wed, Feb 9, 2011 at 3:13 PM, Patricia Shanahan <[email protected]> wrote:
What do others think of this general idea, as a development direction for
River?