gianny DAMOUR wrote:
Jules Gosnell wrote: [...]
As for the amount of state that each node carries, there should be no problem with each node specifying its own threshold (i.e. the number of active sessions it can carry) at which to migrate or passivate sessions. Although, as discussed in the previous posting, you need to be able to tell your lb about migration.
I see. I understand why you need to tell your lb about migrations and, if I understand it correctly, this is mainly an optimization. However, you are not compelled to keep the lb aware of migrations:
- If you don't, it will continue to deliver requests to the node that the session used to be at, not the node that it is currently at. This is bad news. You will either have to forward/redirect (clumsy), migrate the session back to this node (wasted cycles/bandwidth/time) or continue without the session (failure).
I don't understand your point...
I assume that the identifiers of the replicas will be embedded in the cookies or in the URL. If the primary node fails, the lb can try any available server. That server is part of the domain and, by using the cookies or the URL of the incoming request, it can request the session attached to that request from the replica.
If you had a loadbalancer that did this sort of thing, then there is no reason why you shouldn't encode a 'failover-list' into the session id, or an extra cookie/url-param, etc. I believe that this is a strategy used by WebLogic, but they have a custom Apache module... You might be able to extend e.g. mod_backhand to do this, or perhaps more major surgery on another module might achieve it...
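To make that concrete, here is a minimal sketch of what encoding a failover-list into the session id might look like. The '!'-separated format and all names are my own invention for illustration, loosely inspired by the WebLogic-style suffix mentioned above, not any real module's format:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: append an ordered failover-list of node names to the plain
    // session id, e.g. "ab34f7!node1!node2". A cooperating lb (or custom
    // module) peels off the primary and falls back down the list on failure.
    public final class FailoverSessionId {

        /** Encode the plain id plus the primary and its replicas, in order. */
        public static String encode(String id, List<String> nodes) {
            StringBuilder sb = new StringBuilder(id);
            for (String node : nodes) {
                sb.append('!').append(node);
            }
            return sb.toString();
        }

        /** Decode the node list back out of an incoming request's id. */
        public static List<String> nodes(String encodedId) {
            String[] parts = encodedId.split("!");
            List<String> nodes = new ArrayList<String>();
            for (int i = 1; i < parts.length; i++) {
                nodes.add(parts[i]);
            }
            return nodes;
        }
    }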
Many loadbalancers, however, do not have this functionality, and with these we adopt the strategies that I have outlined.
The important thing is that we simply provide an infrastructure, so that anyone out there with a weird loadbalancer can use an existing plugin, or cobble one together reasonably easily and plug that in.
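As a hypothetical sketch of that plugin point (none of these names come from an existing API - they are purely illustrative):

    // The cluster calls this whenever a session moves; the deployer supplies
    // an implementation that knows how to talk to their particular lb.
    public interface LoadBalancerPolicy {

        /** Called after a session has been migrated between nodes. */
        void sessionMigrated(String sessionId, String fromNode, String toNode);
    }

    // For loadbalancers that cannot be informed: do nothing, and fall back
    // to the forward/redirect/migrate-back strategies described above.
    class NullLoadBalancerPolicy implements LoadBalancerPolicy {
        public void sessionMigrated(String sessionId, String fromNode, String toNode) {
            // deliberately a no-op
        }
    }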
What is really interesting is that this server now becomes the primary, and the other replicas do not need to be updated.
True, the secondary server is effectively a hot-standby. However, if a node has been lost from a replication-group, it will need to be replaced, and a full copy of the current state issued to the new recruit (even if this is done on a low-priority thread as a background task).
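A rough sketch of that recruitment step, with made-up names and the transport left abstract - the point is only the low-priority background thread:

    import java.util.Map;

    // Sketch: when a replication-group loses a member, push a full copy of
    // the current session state to the replacement node in the background.
    public class RecruitStateTransfer {

        public void transferTo(final String newNode, final Map<String, byte[]> sessions) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    for (Map.Entry<String, byte[]> e : sessions.entrySet()) {
                        send(newNode, e.getKey(), e.getValue()); // one session at a time
                    }
                }
            }, "state-transfer-" + newNode);
            t.setPriority(Thread.MIN_PRIORITY); // don't compete with request handling
            t.setDaemon(true);
            t.start();
        }

        void send(String node, String sessionId, byte[] state) {
            // transport deliberately unspecified (sockets, RMI, JGroups, ...)
        }
    }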
Hope that helps,
Jules
My idea here is to provide a pluggable 'sort()' strategy, which the deployer can provide.
Cool.
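For illustration, such a sort() could be as small as a deployer-supplied Comparator that orders sessions for eviction once a node goes over its threshold (all names here are hypothetical; LRU is shown as one possible default):

    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Sketch: sessions are sorted by a pluggable Comparator and the ones at
    // the front of the list are migrated/passivated first.
    public class EvictionPolicy {

        /** Least-recently-used first: one sensible default strategy. */
        public static final Comparator<SessionInfo> LRU = new Comparator<SessionInfo>() {
            public int compare(SessionInfo a, SessionInfo b) {
                return Long.compare(a.lastAccessTime, b.lastAccessTime);
            }
        };

        private final Comparator<SessionInfo> order;

        public EvictionPolicy(Comparator<SessionInfo> order) {
            this.order = order; // the deployer plugs in their own ordering here
        }

        /** Sort candidates so that the first entries are evicted first. */
        public void sort(List<SessionInfo> candidates) {
            Collections.sort(candidates, order);
        }
    }

    class SessionInfo {
        String id;
        long lastAccessTime;
    }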
- Is it not an overhead to have b-1 replicas? AFAIK, a single secondary should be enough.
This will be a deployment-time decision. I simply chose b=3 as a common example - I am sure b=2 will also be widely used.
OK.
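One way to picture that knob (again a hypothetical sketch): with replication factor b, each session has a primary plus b-1 replicas chosen from the group, so b=2 gives exactly the single hot-standby secondary discussed above:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: pick b-1 replica nodes for a primary by walking the member
    // list; b is a deployment-time setting (b=2 -> one secondary).
    public class ReplicaSelector {

        private final int b; // replication factor

        public ReplicaSelector(int b) {
            this.b = b;
        }

        public List<String> replicasFor(String primary, List<String> members) {
            List<String> replicas = new ArrayList<String>();
            int start = members.indexOf(primary);
            for (int i = 1; i < b && i < members.size(); i++) {
                replicas.add(members.get((start + i) % members.size()));
            }
            return replicas;
        }
    }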
Thanks, Gianny
--
/*************************************
 * Jules Gosnell
 * Partner
 * Core Developers Network (Europe)
 * http://www.coredevelopers.net
 *************************************/
