Hi Dain,
let me address the location question, and show you how the location is completely transparent.

The way the LazyReplicatedMap works is as follows:
1. Backup node fails -> the primary node chooses a new backup node
2. Primary node fails -> since Tomcat doesn't know which node the user's next
  http request will arrive at, nothing is done.
When the user makes a request and the session manager calls LazyMap.getSession(id), and that session is not yet on this server, the lazy map will request the session from the backup server, load it up, and set this node as primary. That is why it is called lazy: it won't load the session until it is actually needed, and it doesn't know in advance which node will become primary, since that is decided by the load balancer. Remember, each node knows where the session with id=XXXX is located. They all carry the same map, but only two carry the data (primary and secondary).
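To make the lazy-load path concrete, here is a minimal sketch in Java. All class, field, and method names are illustrative assumptions, not the actual Tomcat implementation, and the network call to the backup node is stubbed out:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the lazy-load path in getSession(id).
public class LazyGetSketch {
    static class Entry {
        boolean isPrimary;      // does this node hold the live data?
        Object session;         // null unless this node is primary or backup
        String backupNode;      // which node a copy can be fetched from
    }

    private final Map<String, Entry> map = new HashMap<>();

    // Placeholder for the network call that pulls the session from the backup.
    private Object loadFromBackup(String id, String backupNode) {
        return "session-data-for-" + id;
    }

    public Object getSession(String id) {
        Entry e = map.get(id);
        if (e == null) return null;        // unknown session id
        if (!e.isPrimary) {
            // Lazy step: only now is the data pulled over, and this node
            // promotes itself to primary for the session.
            e.session = loadFromBackup(id, e.backupNode);
            e.isPrimary = true;
        }
        return e.session;
    }

    // Every node registers the location metadata, even without the data.
    public void register(String id, String backupNode) {
        Entry e = new Entry();
        e.backupNode = backupNode;
        map.put(id, e);
    }
}
```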

On a false positive, the new primary node will cancel out the old one, so you can have as many false positives as you want, but the more you have, the worse your performance gets :). That is why a sticky LB is important, but a false positive is handled the same way as a crash, except that the old primary gets cancelled out.

The rest is inlined below.

1. Requirements to be implemented by the Session.java API
  bool isDirty - has the session changed in this request
  bool isDiffable - is the session able to provide a diff
  byte[] getSessionData() - returns the whole session
  byte[] getSessionDiff() - optional, see isDiffable; resets the diff data
  void setSessionDiff(byte[] diff) - optional, see isDiffable; applies changes from another node
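As a sketch of that contract, the list above could look like the following in Java. The interface name and the toy byte-array-backed implementation are assumptions for illustration; only the method names come from the proposal:

```java
// Container-level session contract, per the list above.
interface ReplicableSession {
    boolean isDirty();                 // has the session changed in this request?
    boolean isDiffable();              // can this session produce a diff?
    byte[] getSessionData();           // serialize the whole session
    byte[] getSessionDiff();           // optional, see isDiffable; resets the diff data
    void setSessionDiff(byte[] diff);  // optional; apply changes from another node
}

// Minimal in-memory implementation: dirtiness is a flag, and the "diff"
// is simply the full payload, to keep the sketch short.
class ToySession implements ReplicableSession {
    private byte[] data = new byte[0];
    private boolean dirty = false;

    public void setAttributeBytes(byte[] bytes) { data = bytes; dirty = true; }

    public boolean isDirty()    { return dirty; }
    public boolean isDiffable() { return true; }
    public byte[] getSessionData() { return data.clone(); }
    public byte[] getSessionDiff() { dirty = false; return data.clone(); }
    public void setSessionDiff(byte[] diff) { data = diff.clone(); dirty = false; }
}
```

A real container implementation would serialize only the attributes that changed; the point is that all of this lives behind the Session API, invisible to the webapp.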

To throw your arguments back at you, why should my code be exposed to this level of detail? :) From my perspective, I get a session, and it is the Session API implementation's problem to figure out how to diff it, back it up, and migrate it.

Exactly. The methods above are what is required from the servlet container, not the webapp developer. So if you are a Jetty developer, you would implement the above methods. This way the Jetty developer can optimize the serialization algorithm and the locking (during diff creation), and your session will never be out of date. In Tomcat we are making getSessionDiff() a pluggable algorithm, but it is implemented in the container; otherwise, plain serialization is too slow.

2. Requirements to be implemented by the SessionManager.java API
void setSessionMap(HashMap map) - makes the map implementation pluggable

3. And the key to this, is that we will have an implementation of a LazyReplicatedHashMap
  The key object in this map is the session Id.
  The map entry object is an object that looks like this
  ReplicatedEntry {
     string id;//sessionid
     bool isPrimary; //does this node hold the data
     bool isBackup; //does this node hold backup data
     Session session; //not null values for primary and backup nodes
     Member primary; //information about the primary node
     Member backup; //information about the backup node
  }

  The LazyReplicatedHashMap overrides get(key) and put(id,session)
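Here is a sketch of the put() side of such a map: the session data goes to exactly one backup node, while every node learns the (primary, backup) location. Class and method names are illustrative, not Tomcat's real code, and the network sends are left as comments:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of LazyReplicatedHashMap.put(id, session).
public class PutSketch {
    static class ReplicatedEntry {
        String id;                   // session id
        boolean isPrimary, isBackup;
        Object session;              // non-null only on primary and backup nodes
        String primary, backup;      // location information
    }

    private final Map<String, ReplicatedEntry> map = new HashMap<>();
    private final String self;
    private final List<String> members;  // other nodes in the cluster

    PutSketch(String self, List<String> members) {
        this.self = self;
        this.members = members;
    }

    public void put(String id, Object session) {
        ReplicatedEntry e = new ReplicatedEntry();
        e.id = id;
        e.isPrimary = true;
        e.session = session;
        e.primary = self;
        e.backup = members.isEmpty() ? null : members.get(0);  // choose a backup
        map.put(id, e);
        // In the real system: ship the session data to e.backup only, and
        // broadcast just the (id, primary, backup) location to everyone else.
    }

    public ReplicatedEntry location(String id) { return map.get(id); }
}
```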

Why would anyone need to know this level of detail?
You don't, and you won't. I'm just giving you some architectural insight into how it works under the hood :)


So all the nodes will have the sessionId/ReplicatedEntry combinations in their session map, but only two nodes will have the actual data. This solution is for sticky LBs only, but when failover happens, the LB can pick any node, since each node knows where to get the data. The newly selected node will keep the backup node or select a new one, and publish the new locations to the entire cluster.

I don't see any way to deal with locking, or the fact that servlet sessions are multi-threaded (overlapping requests). How do you know when the session is not being used by anyone, so that you have a stable state for replication?
In Tomcat we have an access counter that gets incremented when the request comes in and decremented when the request leaves. If the counter is 0, lock the session and pull out the diff. Or just lock it at the end of each request on a periodic basis, regardless of what the counter is.
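A minimal sketch of that counter, with illustrative names (this is not the actual Tomcat code): requests increment on entry and decrement on exit, and the replicator only takes a snapshot when no request is active.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a per-session access counter guarding replication snapshots.
public class AccessCounterSketch {
    private final AtomicInteger active = new AtomicInteger();
    private final Object lock = new Object();
    private byte[] data = new byte[0];

    public void requestEnter() { active.incrementAndGet(); }
    public void requestExit()  { active.decrementAndGet(); }

    public void write(byte[] bytes) {
        synchronized (lock) { data = bytes; }
    }

    // Returns a stable copy only when no request holds the session,
    // otherwise null, meaning "try again later".
    public byte[] snapshotIfIdle() {
        if (active.get() != 0) return null;
        synchronized (lock) { return data.clone(); }
    }
}
```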


As you can see, all-to-all communication only happens when a session is (created|destroyed|failed over). Other than that it is primary-to-backup communication only, and this can be in terms of diffs or entire sessions, using isDirty or getSessionDiff. This is triggered either by an interceptor at the end of each request, or by a batch process with less network jitter but less accuracy (still adequate for failover).
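The end-of-request decision can be sketched like this; the interface and method names mirror the Session API proposal above, everything else is an assumption:

```java
// Sketch of the per-request replication decision: send a diff when the
// session supports it, the full session otherwise, nothing when clean.
public class ReplicationTrigger {
    interface Session {
        boolean isDirty();
        boolean isDiffable();
        byte[] getSessionData();
        byte[] getSessionDiff();
    }

    /** Returns the payload to ship to the backup node, or null if clean. */
    static byte[] payloadFor(Session s) {
        if (!s.isDirty()) return null;              // nothing changed, no traffic
        return s.isDiffable() ? s.getSessionDiff()  // small delta
                              : s.getSessionData(); // full serialized session
    }
}
```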

As you can see, access time is not relevant here, nor does the Session API even know about clustering.

How do you deal with access time? I agree that your API doesn't know about clustering, but you also can't do a client-side or server-side redirect to the correct node; you must always migrate the session to your request.
It doesn't; there is no reason to. Only the primary node can expire it, and when the primary manager, without even knowing it is primary, does a sessionMap.remove(), the LazyReplicatedMap removes it across the cluster. Remember, when the session manager does sessionMap.entrySet().iterator(), it only gets the sessions from this node, not the other nodes.
So the implementation is completely transparent to the Jetty programmer.


In Tomcat we have separated out group communication into a separate module, and we are implementing the LazyReplicatedHashMap right now just for this purpose.

Cool.  I'm interested to see what you come up with.
I will keep you posted, maybe we could share the code/experience.

Filip
