Hi,

Why not use a cluster singleton service? You could establish your connections in the service's init() method: if the node hosting the singleton leaves the cluster, Ignite redeploys the service on another node and calls init() again, so the connections are re-created automatically. Please see [1] for details.

Does that work for you?
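For illustration, a minimal sketch of such a service, assuming the standard org.apache.ignite.services API; ThirdPartyConnection and its endpoint are hypothetical placeholders for the application's own client:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class ConnectionHolderService implements Service {
    // Placeholder for the application's own third-party client API;
    // transient because the service instance itself is serialized on deployment.
    private transient ThirdPartyConnection conn;

    @Override public void init(ServiceContext ctx) throws Exception {
        // Runs on whichever node currently hosts the singleton; if that node
        // leaves, Ignite redeploys the service elsewhere and calls init() again.
        conn = ThirdPartyConnection.open("third-party-host:9999"); // hypothetical endpoint
    }

    @Override public void execute(ServiceContext ctx) throws Exception {
        // Monitor for third-party events until the service is cancelled.
        while (!ctx.isCancelled())
            conn.poll();
    }

    @Override public void cancel(ServiceContext ctx) {
        if (conn != null)
            conn.close();
    }
}

Deploying it as a cluster singleton then takes a single call:

Ignite ignite = Ignition.ignite(); // obtain the already-started node
ignite.services().deployClusterSingleton("connectionHolder", new ConnectionHolderService());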
[1] https://apacheignite.readme.io/docs/cluster-singletons#cluster-singleton

2016-04-26 14:44 GMT+03:00 Vladimir Ozerov <[email protected]>:

Hi Ralph,

Yes, this is how we normally respond to node failures: by listening for events. However, please note that you should not perform heavy or blocking operations in the callback, as that can have an adverse effect on node communication. Instead, it is better to move heavy operations into a separate thread or thread pool.

Vladimir.
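A minimal sketch of the pattern Vladimir describes, assuming the org.apache.ignite.events API; reestablishSessionsFor() is a hypothetical application hook:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class NodeFailureListener {
    // Separate pool so the discovery callback itself never blocks.
    private static final ExecutorService RECONNECT_POOL =
        Executors.newSingleThreadExecutor();

    public static void register(Ignite ignite) {
        IgnitePredicate<Event> lsnr = evt -> {
            ClusterNode departed = ((DiscoveryEvent)evt).eventNode();
            // Hand the heavy reconnect work off to the pool.
            RECONNECT_POOL.submit(() -> reestablishSessionsFor(departed));
            return true; // keep the listener registered
        };
        ignite.events().localListen(lsnr,
            EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }

    // Hypothetical application hook: take over the sessions the departed
    // node owned and re-open their third-party connections.
    private static void reestablishSessionsFor(ClusterNode departed) {
        // application-specific
    }
}

Note that most event types are disabled by default, so EVT_NODE_LEFT and EVT_NODE_FAILED must be enabled up front via IgniteConfiguration.setIncludeEventTypes() before the listener will fire.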
On Mon, Apr 25, 2016 at 2:54 PM, Ralph Goers <[email protected]> wrote:

Great, thanks!

Is listening for those events the way you would implement what I am trying to do?

Ralph

On Apr 25, 2016, at 4:22 AM, Vladimir Ozerov <[email protected]> wrote:

Ralph,

EVT_NODE_LEFT and EVT_NODE_FAILED occur on the local node. They essentially mean "I saw that a remote node went down".

Vladimir.

On Sat, Apr 23, 2016 at 5:48 PM, Ralph Goers <[email protected]> wrote:

Some more information that may be of help.

Each user of a client application creates a "session" that is represented in the distributed cache. Each session has its own connection to the third-party application. If a user uses multiple client applications, they reuse the same session and connection to the third-party application. So when a single node goes down, all of that user's sessions need to become "owned" by different nodes.

In the javadoc I do see IgniteEvents.localListen(), but the description says it listens for "local" events. I wouldn't expect EVT_NODE_LEFT or EVT_NODE_FAILED to be considered local events, so I am a bit confused about what the method does.

Ralph

On Apr 23, 2016, at 6:49 AM, Ralph Goers <[email protected]> wrote:

From what I understand of the documentation, client mode would mean losing high availability, which is the point of using a distributed cache.

The architecture is such that we have multiple client applications that need to communicate with the service that holds the clustered cache. The client applications expect callbacks when events occur in the third-party application the service communicates with. If one of the service nodes fails - for example, during a rolling deployment - we need one of the other nodes to re-establish the connection with the third party so it can continue to monitor for the events. Note that the service servers are load-balanced, so they may each hold an arbitrary number of connections to the third party.

So I either need a listener that tells me when one of the nodes in the cluster has left, or a way of creating the connection through something Ignite provides so that it is automatically re-created when a node leaves.

Ralph

On Apr 23, 2016, at 12:01 AM, Владислав Пятков <[email protected]> wrote:

Hello Ralph,

I think the correct way is to use a client node (with setClientMode set to true) to control the cluster. A client node is isolated from data processing and is not subject to failure under load. Why do you connect each node to the third-party application instead of doing that only from the client?

On Sat, Apr 23, 2016 at 4:10 AM, Ralph Goers <[email protected]> wrote:

I have an application that is using Ignite for a clustered cache. Each member of the cluster will have connections open to a third-party application. When a cluster member stops, its connections must be re-established on other cluster members.

I can do this manually if I have a way of detecting that a node has left the cluster, but I am hoping there is some recommended way of handling this.

Any suggestions?

Ralph

--
Vladislav Pyatkov

--
Best regards,
Alexei Scherbakov
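As a footnote to Vladislav's client-node suggestion above: starting a control-only client node is a one-line configuration change. A minimal sketch, assuming an otherwise default IgniteConfiguration:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientNodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Joins the topology but stores no cache data, so it can observe
        // and control the cluster without participating in rebalancing.
        cfg.setClientMode(true);
        Ignite client = Ignition.start(cfg);
    }
}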
