First of all, starting a node in the local JVM (in-process) is the fastest and
most reliable way to access the cluster.

As for the behavior you see, it can occur in two cases:
1. The node you start is a data node, so when you stop it, data can be lost
if no backups are configured for the cache (see
org.apache.ignite.configuration.CacheConfiguration#setBackups). If you start
the node only for a short period of time, there is no point in making it a
data node. You can start it as a client instead, so it will not carry data.
Just call:
Ignition.setClientMode(true);
Ignition.start(..);
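As a minimal sketch of both points (the cache name and values here are hypothetical, not from your setup): configure one backup for the cache so data survives a single data node leaving, and start the short-lived JVM as a client node:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ClientNodeSketch {
    public static void main(String[] args) {
        // Keep one extra copy of every partition on another data node,
        // so stopping a single data node does not lose entries.
        CacheConfiguration<Integer, String> cacheCfg =
            new CacheConfiguration<>("myCache"); // hypothetical cache name
        cacheCfg.setBackups(1);

        // Start this JVM as a client node: it joins the topology
        // but does not store any cache data itself.
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheCfg);
            cache.put(1, "value");
        } // the client leaves the cluster; data stays on the server nodes
    }
}
```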

2. The second possible reason is that the class definitions of the objects you
put into the cache are not available on the remote nodes, so when the master
node (the node initiating the put and holding the definitions) exits, all
other nodes have to clean up their caches. To fix this, make your classes
available on all nodes: build a JAR and put it on the classpath of every JVM
involved. Turning off peer class loading is also recommended for performance:
org.apache.ignite.configuration.IgniteConfiguration#setPeerClassLoadingEnabled
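A minimal configuration sketch for that setting, assuming the classes are already deployed via a shared JAR on each node's classpath as described above:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NoPeerLoadingSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Disable peer class loading: every node is expected to have all
        // required class definitions on its own classpath, which avoids
        // the runtime class deployment/undeployment overhead.
        cfg.setPeerClassLoadingEnabled(false);

        Ignite ignite = Ignition.start(cfg);
    }
}
```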




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Spring-issue-with-Connect-to-Ignite-Server-Runing-on-Diff-Server-tp366p369.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.