Hey Sanjaya, It is indeed turning out to be a good conversation. Comments inline.
Regards,
Rajith

On 2/7/07, Sanjaya Karunasena <[EMAIL PROTECTED]> wrote:
[SK] So why not use Synapse? Of course there is the option of embedding a simple but fast load balancer as the default load balancer. It is always good to have different configurations available for different requirements when it comes to application development. The only thing required is some good documentation. [RA] Synapse is certainly an option.
[SK] Certainly, starting with small steps is always important and it works. But let's keep the discussion going so that we keep an eye on the final goal while doing that. [RA] Totally agree.
For message ordering provided by the communication framework to work, it should be notified of all the dependent events. However, there is a cost associated with this. The question is: where do you invest? Which approach handles concurrency with the least cost?
Let me explain how total ordering is going to work. In total ordering, message m is delivered to all the recipients before message m+1 is delivered. When event execution is synchronous, the event will be automatically blocked until the message is delivered.
This way, if a write happens at time t and a read starts concurrently at time t+1, the event will be automatically blocked until the write is delivered to all the recipients. Which event occurred first (happened before) can be determined using Lamport's algorithm. [RA] If we block for reads, to be sure that nobody is writing to it while we are reading it, then we need to wait till we have the "virtual token", since no node can write until it acquires the token. This will be very slow, isn't it? This may be acceptable for writes, but for obvious performance reasons we will have to live with dirty reads. Also, blocking the service from reading or writing cannot be done w/o modifications/impact to the kernel, which is going to be shot down for sure :) I am already getting a beating for performance :)
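To make the happened-before bookkeeping mentioned above a bit more concrete, here is a minimal Lamport clock sketch in Java. This is purely illustrative; the class and method names are made up and it is not code from Axis2/Tribes, Totem or Ricochet.

import java.util.concurrent.atomic.AtomicLong;

// Minimal Lamport clock sketch (illustrative only).
public class LamportClock {
    private final AtomicLong clock = new AtomicLong(0);

    // Stamp a local event, e.g. a state write, before multicasting it.
    public long tick() {
        return clock.incrementAndGet();
    }

    // Adjust the clock when a message stamped with remoteTime is delivered.
    public long onDeliver(long remoteTime) {
        return clock.updateAndGet(local -> Math.max(local, remoteTime) + 1);
    }
}

Note that Lamport timestamps only give a one-way guarantee: if event a happened before event b then ts(a) < ts(b), but the converse does not hold, so a smaller timestamp alone cannot distinguish a causally earlier event from a merely concurrent one.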
A relaxed approach is to use causal ordering of messages, if the causal order of events can be determined. There, events for which the order cannot be determined are treated as independent and no ordering is enforced for them. [RA] The paper on TOTEM claims the same performance as causal ordering or even FIFO delivery, but I am not sure how accurate that claim is.
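Just to pin down what "enforce causal order where it can be determined, treat the rest as independent" could look like, here is a rough vector clock comparison in Java. Again this is only a sketch; the names are invented and it is not taken from any of the frameworks mentioned here.

// Illustrative vector clock comparison: causal order is only enforced when
// one clock dominates the other; otherwise the events are independent.
public class VectorClock {
    private final long[] counters;   // one slot per node in the group

    public VectorClock(long[] counters) {
        this.counters = counters.clone();
    }

    // True if this event causally precedes the other: every entry is <= the
    // other's and at least one entry is strictly less.
    public boolean happenedBefore(VectorClock other) {
        boolean strictlyLess = false;
        for (int i = 0; i < counters.length; i++) {
            if (counters[i] > other.counters[i]) return false;
            if (counters[i] < other.counters[i]) strictlyLess = true;
        }
        return strictlyLess;
    }

    // Neither precedes the other: no ordering needs to be enforced between them.
    public boolean concurrentWith(VectorClock other) {
        return !happenedBefore(other) && !other.happenedBefore(this);
    }
}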
Sounds very expensive ha.... :-) But if you really look at it, locking techniques essentially do the same while giving you the additional overhead of tackling distributed deadlocks. [RA] Well, the research paper says so :) This approach is good if we replicate attributes as and when a change occurs. But if a service does too many writes during an invocation, it will be a big performance issue and increase network chatter considerably. If it updates the same variable several times during an operation, it would be a waste of resources. If we replicate at the end of an invocation, the chances of conflicts go up; in such a case, distributed locking may be a more viable solution.
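One way to cut down that chatter, sketched only as an idea: coalesce the writes made during an invocation and replicate just the final values once at the end. The ReplicationTransport interface below is made up for illustration and is not an existing API.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of end-of-invocation replication: repeated writes to the same key
// are coalesced locally and only the final values are multicast once.
public class InvocationReplicationBuffer {
    private final Map<String, Object> dirty = new LinkedHashMap<String, Object>();
    private final ReplicationTransport transport;   // made-up interface

    public InvocationReplicationBuffer(ReplicationTransport transport) {
        this.transport = transport;
    }

    // Called on every write during the invocation; only the last value per key survives.
    public void recordWrite(String key, Object value) {
        dirty.put(key, value);
    }

    // Called once when the invocation completes: one multicast instead of one per write.
    public void flush() {
        if (!dirty.isEmpty()) {
            transport.replicate(new LinkedHashMap<String, Object>(dirty));
            dirty.clear();
        }
    }

    public interface ReplicationTransport {
        void replicate(Map<String, Object> changes);
    }
}

The trade-off raised above still stands, of course: batching like this widens the window in which another node can touch the same attributes, so the conflict question does not go away, it just moves to the end of the invocation.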
[SK] OK, I think I got your point. But then it nullifies the ability to make use of the real power provided by the underlying messaging infrastructure. It will only be used as a multicasting channel and we will have to come up with techniques to tackle everything else. [RA] Not sure I understand you here (as to how it nullifies the ability to leverage ...). Can you explain this a bit more?
[SK] Have you checked Appia and the stuff developed at Cornell? As I told you, we may get away with causal ordering too. [RA] We talked with Prof. Ken Birman and looked at Ricochet; that's what they recommended to us. The problem with Ricochet is that it doesn't have membership, but it does have some interesting guarantees about performance, especially when the number of nodes goes up. But this was a year ago. I am thinking about restarting the discussion; they may have added membership to Ricochet. I am actually interested in doing another clustering impl with Ricochet (now that we have some groundwork in place).

Regards,
Rajith