Some questions to clarify my thinking.

The way I think about this is that we have a "clustering" component that handles the configuration, communication, and failover aspects of clustering. This way a user can set up a cluster of servers that will basically interoperate. This is independent of the actual users of the clustering solution. In your example, the base "clustering or group management" piece is a low-level clustering/communication mechanism that provides an API to plug into.

The client would be any number of exploiters of the cluster and would include things like HttpSession management, perhaps RMI load balancing, data caching like jCache, etc. These "clients" would delegate the low-level communication and failover to the clustering component and register appropriate callbacks, so that if a peer failed they could take some action if needed.
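
To make sure we're talking about the same split, something like this is what I have in mind (all names invented, purely to illustrate the layering):

  // Hypothetical sketch -- none of these names exist yet, they just
  // illustrate the layering I'm describing.
  interface MembershipListener {
      void peerJoined(String peerId);
      void peerFailed(String peerId);  // exploiter reacts if it needs to
  }

  interface ClusterService {
      void join(java.util.Properties config);  // config + communication live here
      void leave();
      // HttpSession management, RMI load balancing, jCache-style caching,
      // etc. all register here instead of doing their own I/O.
      void addListener(MembershipListener listener);
  }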

Is my interpretation correct? Or are you collapsing the users of the cluster service and the clustering service into one piece?

I'll have some time tonight to look at Gianny's code and perhaps take a peek at ehcache, so forgive my naive comments if I'm in a ditch :)

On Sep 12, 2006, at 12:19 PM, Jeff Genender wrote:

I wanted to go over a high-level design for a gcache cache component and get some feedback and input, and invite folks who are interested to join in.
...so here goes...

The gcache will be one of several cache/clustering offerings...but
starting off with the first one...

For the first pass I want to go with a master/slave full-replication
implementation.  What this means is a centralized caching server which
runs a cache implementation (it will likely use ehcache underneath), and
this server is known as the master. My interest in ehcache is that it
provides the ability to persist session state via configuration if full
failure recovery is needed (no need to reinvent the wheel on a great
cache).  The master will communicate with N slave servers, also
running a gcache implementation.

   +--------+   +---------+  +---------+
   |        |   |         |  |         |
   | MASTER |   | SLAVE 1 |  | SLAVE 2 | ... n-slaves
   |        |   |         |  |         |
   +--------+   +---------+  +---------+
      |   |            |           |
      |   |            |           |
      |   |____________|           |
      |                            |
      |____________________________|
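
To make the ehcache piece concrete, here is a very rough sketch of the master's storage layer, assuming the ehcache 1.x API (the class name and the replication hook are invented; only the ehcache calls are real):

  import java.io.Serializable;

  import net.sf.ehcache.Cache;
  import net.sf.ehcache.CacheException;
  import net.sf.ehcache.CacheManager;
  import net.sf.ehcache.Element;

  // Invented class name -- just the shape of the master's storage layer.
  class MasterStore {
      private final Cache cache;

      // CacheManager reads ehcache.xml, where disk persistence can be
      // switched on for the full-failure-recovery case.
      MasterStore(String cacheName) throws CacheException {
          this.cache = CacheManager.create().getCache(cacheName);
      }

      void put(Serializable key, Serializable value) throws CacheException {
          cache.put(new Element(key, value));
          // ... then push the update to every registered slave ...
      }

      Serializable get(Serializable key) throws CacheException {
          Element e = cache.get(key);
          return e == null ? null : e.getValue();
      }
  }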



We then have client component(s) that "plug in" and communicate with
the server. The configuration for the client should be very light: it
really only needs to know the ordered master/slave/.../nth-slave list.
In other words, it communicates only with the master.  The master is
responsible for "pushing" anything it receives to its slaves and other
nodes in the cluster. The slaves basically look like clients to the master.

   +--------+   +---------+  +---------+
   |        |   |         |  |         |
   | MASTER |---| SLAVE 1 |  | SLAVE 2 |
   |        |   |         |  |         |
   +--------+   +---------+  +---------+
       |  |                       |
       |  +-----------------------+
       |
   ,-------.
  ( CLIENT  )
   `-------'
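
To show how light that client configuration could be, it might boil down to nothing more than an ordered list of endpoints (illustrative names only):

  import java.util.Arrays;
  import java.util.List;

  // Illustrative only: the whole client config is an ordered list of
  // endpoints -- master first, then slaves in promotion order.
  class GCacheClientConfig {
      private final List servers;  // e.g. "host1:4040", "host2:4040", ...

      GCacheClientConfig(String[] hostPorts) {
          this.servers = Arrays.asList(hostPorts);
      }

      String endpoint(int rank) {  // rank 0 = master
          return (String) servers.get(rank);
      }

      int size() {
          return servers.size();
      }
  }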

In the event the master goes down, the client notes the timeout and
automatically starts communicating with slave #1 as the new master.
Since slave #1 is also a client of the master, it can determine, either
by itself or from the first request that comes in asking for data, that
it is the new master.

   +--------+   +----------+  +---------+
   |  OLD   |   |NEW MASTER|  |         |
   | MASTER |   |   WAS    |--| SLAVE 2 |
   |        |   | SLAVE 1  |  |         |
   +--------+   +----------+  +---------+
       |           _,'
       X         ,'
       |      ,-'
   ,-------.<'
  ( CLIENT  )
   `-------'
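
The client-side failover above could be as simple as walking the configured list until something answers. An illustrative sketch (plain blocking sockets for brevity, even though the real thing would use NIO):

  import java.io.IOException;
  import java.net.InetSocketAddress;
  import java.net.Socket;

  // Illustrative only: try the master first, then each slave in order;
  // whichever endpoint answers is treated as the current master.
  class FailoverConnector {
      private final String[] endpoints;  // "host:port", master first

      FailoverConnector(String[] endpoints) {
          this.endpoints = endpoints;
      }

      Socket connect(int timeoutMillis) throws IOException {
          IOException last = new IOException("no endpoints configured");
          for (int i = 0; i < endpoints.length; i++) {
              String[] hp = endpoints[i].split(":");
              Socket s = new Socket();
              try {
                  // A timeout here is what demotes the old master: we
                  // simply fall through to the next server in the list.
                  s.connect(new InetSocketAddress(hp[0],
                          Integer.parseInt(hp[1])), timeoutMillis);
                  return s;
              } catch (IOException e) {
                  last = e;
                  try { s.close(); } catch (IOException ignored) {}
              }
          }
          throw last;
      }
  }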

I think this is a fairly simple implementation, yet fairly robust.
Since we are not doing heartbeats and mcast, we cut down on a lot of
network traffic.

Communication will be done over TCP/IP sockets, and I would probably
like to use NIO.
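
For the NIO piece, I'm picturing the standard java.nio selector loop; nothing below is gcache-specific yet, and the port number is arbitrary:

  import java.io.IOException;
  import java.net.InetSocketAddress;
  import java.nio.channels.SelectionKey;
  import java.nio.channels.Selector;
  import java.nio.channels.ServerSocketChannel;
  import java.nio.channels.SocketChannel;
  import java.util.Iterator;

  // Standard non-blocking accept/read loop -- the shape the server's
  // network layer might take.
  class NioServerSketch {
      public static void main(String[] args) throws IOException {
          Selector selector = Selector.open();
          ServerSocketChannel server = ServerSocketChannel.open();
          server.socket().bind(new InetSocketAddress(4040));  // arbitrary port
          server.configureBlocking(false);
          server.register(selector, SelectionKey.OP_ACCEPT);
          while (true) {
              selector.select();
              for (Iterator it = selector.selectedKeys().iterator(); it.hasNext();) {
                  SelectionKey key = (SelectionKey) it.next();
                  it.remove();
                  if (key.isAcceptable()) {
                      SocketChannel client = server.accept();
                      client.configureBlocking(false);
                      client.register(selector, SelectionKey.OP_READ);
                  } else if (key.isReadable()) {
                      // ... decode a gcache request, reply, push to slaves ...
                  }
              }
          }
      }
  }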

I would like to see this component be able to run on its own...i.e. no
Geronimo needed. We can build a Geronimo GBean and deployer around it,
but I would like this component to be usable in many other areas,
including outside of Geronimo. Open source needs more "free" clustering
implementations.  I would like this component to be broken down into 2
major categories...server and client.

After a successful implementation of master/slave, I would like to make
the strategies pluggable, so we can provide more of a distributed cache,
partitioning, and other kinds of joins, such as mcast/heartbeat for
those who want it.
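
A strawman for what "pluggable" might mean in code (names invented): one small interface the full-replication master/slave code implements first, with partitioned or mcast/heartbeat variants slotting in behind it later.

  // Strawman SPI, names invented: the master/slave full-replication
  // implementation would be the first of several strategies behind this.
  interface ReplicationStrategy {
      void memberJoined(String nodeId);
      void memberLeft(String nodeId);
      void replicate(Object key, Object value);  // full copy, partition, ...
  }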

Thoughts and additional ideas?

Thanks,

Jeff



Matt Hogstrom
[EMAIL PROTECTED]


