----- Original Message -----
From: "Pier Fumagalli" <[EMAIL PROTECTED]>
To: "Tomcat Developers List" <[EMAIL PROTECTED]>
Sent: Tuesday, November 13, 2001 8:56 AM
Subject: Re: Tomcat: Distributed Session Management revisited


| On 13/11/2001 04:38 pm, "Mika Goeckel" <[EMAIL PROTECTED]> wrote:
|
| > SNMP, ah ja. I've got no knowledge at all 'bout that, so fight with some
| > other lobbyists :-)
|
| Same here...

Didn't mean to take a left turn. Sorry I mentioned it.

|
| > SessionManager/ServletContainer dualism...:
| > If we don't create a separate SessionManager residing in its own JVM, but
| > make it an integral capability of TC, we have the following benefits:
| > - we save one copy:
| > When a new session is created and we have a separate network of SMs, it
| > needs to be copied to at least two SMs; if we have it in TC, it only needs
| > to be copied to one other TC. (If we aim for single redundancy.)
|
| Indeed it would save bandwidth...

I want a distributed session store, where all sessions are known
(or are knowable) by all members of the cluster, with a built-in
fail-over mechanism.

I want to be able to scale my web server by simply adding
more standalone Tomcats and possibly session managers.
I want to be able to use a brand-x HTTP load-balancer that
redirects web-traffic on a request-by-request basis to the tomcat
webserver that it thinks can best handle the request. I also want
to be able to bring down individual Tomcats without destroying
any user sessions.

Apache's 'smart' approach (which remembers which JSESSIONIDs
are hosted by which Tomcat servers) doesn't let me bring down
individual Tomcat servers without losing sessions. This means
that Tomcat servers are simply not 'hot-swappable' in this configuration.

AFAICT, minimal redundancy is all that is required. There's simply
no need to keep a gratuitous number of session copies around.
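To make the "single redundancy" idea concrete, here is a minimal sketch of a session store where every session lives on exactly two nodes - an owner and one backup - so losing either node loses nothing. The class and method names (SessionStore, setBackup, knows) are illustrative, not Tomcat API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: each session is held by its owner plus exactly one backup node,
// giving single redundancy without gratuitous extra copies.
class SessionStore {
    private final String nodeId;
    private final Map<String, String> sessions = new HashMap<>();
    private SessionStore backup; // the one peer holding the replica

    SessionStore(String nodeId) { this.nodeId = nodeId; }

    void setBackup(SessionStore backup) { this.backup = backup; }

    // Store locally and copy to exactly one backup node.
    void put(String sessionId, String data) {
        sessions.put(sessionId, data);
        if (backup != null) {
            backup.sessions.put(sessionId, data); // the single extra copy
        }
    }

    String get(String sessionId) { return sessions.get(sessionId); }

    // After the owner goes down, the backup already knows every session.
    boolean knows(String sessionId) { return sessions.containsKey(sessionId); }
}
```

In a real deployment the `backup.sessions.put(...)` call would of course be a network message to the peer Tomcat rather than a direct method call.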

|
| > - if one TC is the owner and the other the escrow, the owner never needs
| > to ask if the session is up-to-date or invalid, and it can't get stale.
| > The replication of changes can be done after the request, so no time
| > burden within the request itself.
| > If the escrow wants to use the session, it only needs to inform the owner
| > and they change roles (or, if possible, the escrow passes the request back
| > to the owner).
| > Frequently all servers ping their known escrows and owners to ensure
| > they're still present.
|
| The only problem I could see with that is synchronization of accesses from
| different points, but I believe that is a solvable problem...
|
| > - deserialisation should not be a problem, because in that ClassLoader
| > context, the user-session objects are known. (Correct me if I'm wrong
| > here.)
|
| Nope, you're right on that.

This is only an issue in the event that the Session Manager is a separate
entity.
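The reason it becomes an issue in a separate JVM is that session attributes may be instances of webapp classes, so the deserialising stream has to resolve classes against the webapp's class loader rather than the system class path. The resolveClass override below is the standard Java idiom for this; the class name is illustrative:

```java
import java.io.*;

// Sketch: an ObjectInputStream that resolves classes against a supplied
// class loader (e.g. the webapp's), instead of the default system loader.
class WebappObjectInputStream extends ObjectInputStream {
    private final ClassLoader loader;

    WebappObjectInputStream(InputStream in, ClassLoader loader) throws IOException {
        super(in);
        this.loader = loader;
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        // Resolve against the webapp's loader, not the class path of the
        // (possibly separate) Session Manager JVM.
        return Class.forName(desc.getName(), false, loader);
    }
}
```

Inside the container this is trivial, because the right loader is at hand; a standalone Session Manager would either need the webapp classes on its own class path or would have to treat session payloads as opaque bytes.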

|
| > AutoConf... what about JNDI to register cluster nodes? It is around
| > anyway. In that case an upcoming TC would just search the JNDI service
| > for registered nodes with its own ClusterName, and register itself.
| > Getting back a NamingEnumeration, it could decide itself which of the
| > others to link with.
|
| One thing that can be done with my approach of multicasting is automatic
| load balancing... To any request of "who can hold this session", each
| manager can return a load index, and the decision on where the session
| should be stored primarily and in replica should be based on that. Using
| JNDI that can be done, but I don't want to end up in a situation where a
| single host holds 80% of the sessions while the others are free... If the
| managers could update their JNDI registrations with a "load" factor every
| X seconds, that would be acceptable...
|

One thing to remember here is that the number of 'clients'
in our discussion is always fixed - it is the number of Tomcat
web servers in the 'cluster'. The load on the session managers
is a direct function of the load on their clients. Hopefully, the load
balancer on the front end (either Apache round-robin, or some
firmware solution) is doing a 'reasonable' job of spreading the
load across web servers / tomcats. Therefore, as long as the
number of Tomcats served by each Session Manager is
approximately the same, we can deduce that the load placed
on the session managers will ALSO be reasonably well balanced.
If my deduction is correct, then there is no need for
posting load factors, or for continual switching back and forth
between session managers.

Let's create some more examples:

1) 10 Tomcat webservers (1-10). Servers 1 and 2 happen to be
    identified as 'Session Managers' as well as web servers.
    Servers 3-10 are just plain web servers, not session managers.

    In this scenario, Tomcat servers 1 and 2 are burdened by satisfying
    session requests (queries & updates) from the other 9 servers, as well
    as handling their own web-traffic. They must also initiate communication
    to the other 9 servers whenever a session is invalidated (due to update,
    maxAge, or on demand). They must also communicate all session deltas
    to the 'other' session manager.

2) 10 Tomcat webservers, all 10 are identified as 'Session Managers'

    In this scenario each Tomcat must communicate session deltas to each
    of the other 9 servers. All servers must perform significant extra work
    in order to keep their Session store up-to-date.

3) 10 Tomcat webservers, 2 separate Session Managers.
     Tomcats 1-5 point to SM1, Tomcats 6-10 point to SM 2.

    In this scenario, each Tomcat only communicates with 1
    session manager. Each session manager communicates
    session deltas with the other SM, and with the Tomcat
    servers that connect to it (5 in this example)
    on an as-needed basis (e.g. when the Tomcat instance asks
    for the session data). Each SM must also tell all its clients
    when sessions are invalidated.
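The delta traffic in scenario 3 can be sketched as follows: each Tomcat reports changes to its one SM, and that SM forwards each delta exactly once to its peer. The names (SessionManager, applyDelta) are illustrative, not an actual Tomcat interface:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of scenario 3: two Session Managers, each serving half the
// Tomcats; a delta from a client makes exactly one SM-to-SM hop.
class SessionManager {
    final String name;
    SessionManager peer;                              // the other SM
    final Map<String, String> store = new HashMap<>(); // session data

    SessionManager(String name) { this.name = name; }

    // A client Tomcat reports a session delta; forward it once to the
    // peer SM, and never forward a delta that arrived FROM the peer.
    void applyDelta(String sessionId, String delta, boolean fromPeer) {
        store.put(sessionId, delta);
        if (!fromPeer && peer != null) {
            peer.applyDelta(sessionId, delta, true);
        }
    }
}
```

The `fromPeer` flag is what keeps the replication traffic bounded: with n Tomcats and 2 SMs, each delta crosses the network twice (Tomcat to SM, SM to SM), independent of n, whereas the all-to-all topology of scenario 2 costs n-1 messages per delta.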

Fail-over could be handled in a similar manner in all scenarios.
Addition of a new SessionManager (or SessionManager capable Tomcat)
could be handled in a similar manner in all scenarios.

Tom


--
To unsubscribe, e-mail:   <mailto:[EMAIL PROTECTED]>
For additional commands, e-mail: <mailto:[EMAIL PROTECTED]>
