> On Jan 22, 2020, at 9:51 AM, Alberto Bustamante Reyes 
> <alberto.bustamante.re...@est.tech> wrote:
> 
> Thanks Naba & Jacob for your comments!
> 
> 
> 
> @Naba: I have been implementing a solution as you suggested, and I think it 
> would be convenient if the client knew the memberId of the server it is 
> connected to.
> 
> (current code is here: https://github.com/apache/geode/pull/4616 )
> 
> For example, in:
> 
> LocatorLoadSnapshot::getReplacementServerForConnection(ServerLocation 
> currentServer, String group, Set<ServerLocation> excludedServers)
> 
> In this method, the client has sent the ServerLocation, but if that object does 
> not contain the memberId, I don't see how to guarantee that the replacement 
> that will be returned is not the same server the client is currently 
> connected to.
> Inside that method, this other method is called:


Given that your setup is masquerading multiple members behind the same host and 
port (ServerLocation), it doesn't matter. When the pool opens a new socket to 
the replacement server, it will be to the shared hostname and port, and the 
Kubernetes service at that host and port will just pick a backend host. In the 
solution we suggested we preserved that behavior, since the k8s service can't 
determine which backend member to route the connection to based on the member 
id.
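
To make that concrete, here is a minimal sketch (plain java.net, not Geode's 
actual pool code; the service hostname and port are made up) of why the member 
id is invisible on the client side: whichever "replacement" the locator names, 
the socket is opened against the shared host/port and the Kubernetes service 
picks the actual backend pod.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SharedEndpointExample {
  public static void main(String[] args) throws IOException {
    // Hypothetical shared k8s service endpoint fronting N cache servers.
    String sharedHost = "geode-servers.my-namespace.svc.cluster.local";
    int sharedPort = 40404;

    // Whatever member the locator nominated, the client only ever dials the
    // shared host/port; the service decides which backend member answers.
    try (Socket socket = new Socket()) {
      socket.connect(new InetSocketAddress(sharedHost, sharedPort), 5_000);
      System.out.println("Connected via shared endpoint: "
          + socket.getRemoteSocketAddress());
    }
  }
}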


> LocatorLoadSnapshot::isCurrentServerMostLoaded(currentServer, groupServers)
> 
> where groupServers is a "Map<ServerLocationAndMemberId, LoadHolder>" object. 
> If the keys of that map have the same host and port, they differ only in the 
> memberId. But as you don't know it (you just have currentServer, which 
> contains host and port), you cannot get the correct LoadHolder value, so you 
> cannot know if your server is the most loaded.

Again, given your use case, the behavior of this method is lost when a new 
connection is established by the pool through the shared hostname anyway. 
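
As a toy illustration of the ambiguity you describe (simplified stand-in 
records, not Geode's real LocatorLoadSnapshot types): with two members 
masquerading behind the same host/port, a lookup that only has the host/port 
matches both entries, so the locator can't tell which LoadHolder belongs to the 
server the caller is actually on.

import java.util.HashMap;
import java.util.Map;

public class LoadLookupAmbiguity {

  // Simplified stand-ins for illustration only.
  record ServerLocationAndMemberId(String host, int port, String memberId) {}
  record LoadHolder(float load) {}

  public static void main(String[] args) {
    Map<ServerLocationAndMemberId, LoadHolder> groupServers = new HashMap<>();
    groupServers.put(
        new ServerLocationAndMemberId("shared-host", 40404, "member-1"),
        new LoadHolder(0.2f));
    groupServers.put(
        new ServerLocationAndMemberId("shared-host", 40404, "member-2"),
        new LoadHolder(0.9f));

    // The client only reports host/port; both keys match, so there is no way
    // to pick the "correct" LoadHolder from that information alone.
    long matches = groupServers.keySet().stream()
        .filter(k -> k.host().equals("shared-host") && k.port() == 40404)
        .count();
    System.out.println("Keys matching host/port only: " + matches); // prints 2
  }
}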

> @Jacob: I think the solution ultimately implies that the client has to know the 
> memberId; I think we could simplify the maps.

The client isn’t keeping these load maps, the locator is, and the locator knows 
all the member ids. The client end only needs to know the host/port 
combination. In your example, the WAN replication (a client to the remote 
cluster) connects to the shared host/port service and gets randomly routed to 
one of the backend servers in that service.

All of this locator balancing code is unnecessary in this model, where 
something else is choosing the final destination. The goal of our proposed 
changes was to recognize that all we need is to make sure the locator keeps the 
shared ServerLocation alive in its responses to clients by tracking the 
associated members and reducing that set to the set of unique ServerLocations. 
In your case that will always reduce to 1 ServerLocation for N members, as 
long as at least 1 member is still up.
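
Roughly what I mean, as a minimal sketch (a hypothetical helper, not code from 
the PR): the locator tracks which member ids sit behind each ServerLocation and 
only advertises locations that still have at least one live member, so N 
members behind the shared host/port reduce to a single advertised 
ServerLocation until the last member goes down.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SharedLocationTracker {

  // Simplified stand-in for illustration only.
  record ServerLocation(String host, int port) {}

  private final Map<ServerLocation, Set<String>> membersByLocation = new HashMap<>();

  void memberJoined(ServerLocation location, String memberId) {
    membersByLocation.computeIfAbsent(location, l -> new HashSet<>()).add(memberId);
  }

  void memberLeft(ServerLocation location, String memberId) {
    Set<String> members = membersByLocation.get(location);
    if (members != null) {
      members.remove(memberId);
      if (members.isEmpty()) {
        // Last member behind this location is gone: stop advertising it.
        membersByLocation.remove(location);
      }
    }
  }

  // What the locator would hand back to clients: the distinct, still-backed
  // ServerLocations, not one entry per member.
  Set<ServerLocation> advertisedLocations() {
    return Set.copyOf(membersByLocation.keySet());
  }
}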

-Jake

