Hi,
With the default configuration in Ignite 1.9,

netstat shows about 5000 entries like the one below. All of these sockets close 
within about 1 minute, and the burst recurs every 5 minutes. This behavior started 
ONLY after a NODE_FAILED event was received and we restarted Ignite 
(Ignition.stop, Ignition.start) without killing the JVM.

tcp 0 0 1.2.3.4:47500 5.6.7.8:54968 TIME_WAIT

where 1.2.3.4 = node1 IP and 5.6.7.8 = node2 IP.
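For context, the restart is wired roughly as in the simplified sketch below. The 
class name, thread handling, and listener details are illustrative, not our exact 
code; the point is just that Ignition.stop/Ignition.start run inside the same JVM 
when EVT_NODE_FAILED fires.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.EventType;

public class RestartOnNodeFailed {
    public static void main(String[] args) {
        startNode();
    }

    private static void startNode() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Enable the event so a local listener can receive it.
        cfg.setIncludeEventTypes(EventType.EVT_NODE_FAILED);

        Ignite ignite = Ignition.start(cfg);

        ignite.events().localListen(evt -> {
            // Restart on a separate thread so Ignite is not stopped
            // from inside the event notification thread.
            new Thread(() -> {
                Ignition.stop(true); // cancel = true: don't wait for running jobs
                startNode();         // start a fresh node in the same JVM
            }).start();
            return false; // unregister this listener; the new node adds its own
        }, EventType.EVT_NODE_FAILED);
    }
}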

Just wanted to know whether there is any known process that can lead to this 
type of behavior. Could this be because we restart Ignite in the same JVM?
