Leo,

These are great points you're making, and I see what you're talking about
in terms of having a naming system, not just an identification mechanism.
I'm going to think about this stuff for a while ...
Alex

> -----Original Message-----
> From: Leo Sutic [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 28, 2004 5:23 PM
> To: 'Avalon Developers List'
> Subject: RE: Requirements: scalability, stability and multiple JVMs
> with Merlin?
>
> > From: Alex Karasulu [mailto:[EMAIL PROTECTED]
> >
> > > From: Leo Sutic [mailto:[EMAIL PROTECTED]
> > >
> > > Seriously, though, I have found it easier to put in the
> > > container:
> > >
> > > 1. The ability to uniquely name each component.
> >
> > 1. Uniquely identify (name adds human connotation)?
>
> Not necessarily. In order to replicate state from a component in one
> JVM to a component in another JVM, you must be able to name components,
> in the sense that you must be able to find the peer in the other JVM
> of a component in the first JVM before you can replicate anything.
>
> For example, suppose you have two VMs - one main, one failover - and
> the container replicates state. If the component changes state, the
> container must somehow be able to tell the other VM that *this*
> component has changed in *that* way. In short, the containers must be
> able to identify *what* component they are talking about before they
> can start agreeing on what the state of that component is.
>
> (One way of doing this is to use the ROLE or some other component id.)
>
> The name need not be human-understandable, but as you can see, there
> must be some concept of naming in order for us to associate state with
> *something*.
>
> > I think generalized failover out of the box can be achieved.
>
> It is possible, but it makes the components difficult to code. A way
> out is to replicate only certain state. For example, if component A
> reaches a state where its rules say it should send an email to the
> administrator, you don't want the failover to send an email of its
> own (or have each node in the cluster send one).
>
> I have solved this with a message-passing style in which certain
> method calls get reliably replicated across the cluster (if a node is
> down it will play catch-up, and so on). So I have to keep track of
> whether something is executing on the local node or all over the
> cluster.
>
> Fortunately, it isn't that hard to keep track of such things.
>
> /LS
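
[Editor's note: a minimal Java sketch of Leo's naming point - a replication
message carries a component id so the receiving container can resolve the
peer before applying state. All names here (StateChange, Replicable,
ReplicationReceiver) are hypothetical and illustrative, not Merlin or
Avalon API.]

    import java.io.Serializable;

    // A replication message names *what* component changed and *how*.
    final class StateChange implements Serializable {
        final String componentId;     // e.g. the ROLE, or any unique id
        final Serializable newState;

        StateChange(String componentId, Serializable newState) {
            this.componentId = componentId;
            this.newState = newState;
        }
    }

    // Components that can accept replicated state from a peer JVM.
    interface Replicable {
        void applyReplicatedState(Serializable newState);
    }

    // On the receiving JVM: first resolve *which* component the message
    // is about, then agree on *what* its state now is.
    final class ReplicationReceiver {
        private final java.util.Map<String, Replicable> peers =
            new java.util.concurrent.ConcurrentHashMap<>();

        void register(String componentId, Replicable component) {
            peers.put(componentId, component);
        }

        void onMessage(StateChange change) {
            Replicable peer = peers.get(change.componentId);
            if (peer != null) {
                peer.applyReplicatedState(change.newState);
            }
        }
    }

The id need not be human-readable; it only has to let both containers
agree on which component they are talking about.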
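[Editor's note: a sketch of the second point - replicated calls update state
on every node, but external side effects are guarded so only one node
performs them. ClusterMembership and its isLocalNodePrimary() hook are
assumptions standing in for whatever the cluster layer actually provides.]

    // Hypothetical sketch: the state transition is replayed on every
    // node, but the email goes out from the primary node only.
    final class AlertingComponent {
        private final ClusterMembership cluster; // assumed cluster-layer hook

        AlertingComponent(ClusterMembership cluster) {
            this.cluster = cluster;
        }

        // This call is reliably replicated across the cluster, so every
        // node records the event; only the primary sends the email.
        void onThresholdExceeded(String detail) {
            recordInLocalState(detail);           // safe to repeat per node
            if (cluster.isLocalNodePrimary()) {
                sendEmailToAdministrator(detail); // side effect, once
            }
        }

        private void recordInLocalState(String detail) { /* ... */ }
        private void sendEmailToAdministrator(String detail) { /* ... */ }
    }

    interface ClusterMembership {
        boolean isLocalNodePrimary();
    }

This is the "keep track of whether something is executing on the local
node or all over the cluster" distinction Leo describes, reduced to a
single guard around the side effect.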
