Hi,
Thread dumps look healthy. Please share the full logs from the time when you
took those thread dumps, or take new ones (thread dumps + logs).
Thanks!
-Dmitry
Hi,
It is OK if you kill a client node. The grid will wait for
failureDetectionTimeout before dropping the failed node from the topology.
All topology operations will be stuck during that time, as Ignite nodes
wait for an answer from the failed node until they detect the failure.
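For reference, that timeout is set on IgniteConfiguration; a minimal sketch
(the class name and the 3-second value are only illustrations, not
recommendations — tune the value for your network and set it on all nodes):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class FailureTimeoutConfig {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // How long nodes wait before declaring an unresponsive node failed
        // and dropping it from the topology (default: 10_000 ms).
        cfg.setFailureDetectionTimeout(3_000);

        // There is also a separate, longer timeout applied to client nodes
        // (default: 30_000 ms), which is what your Spark clients fall under.
        cfg.setClientFailureDetectionTimeout(10_000);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Node runs until the try block exits.
        }
    }
}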
On Thu, Jun 7, 2018 at 8:22 AM, Sambhaji wrote:
An issue occurs when we abnormally stop a Spark Java application that has
an Ignite client running inside its Spark context. When we kill the Spark
application, it abnormally stops the Ignite client; then, when we restart
our application and the client tries to connect to the Ignite cluster, it
gets
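One way to avoid the abnormal stop is to close the Ignite client explicitly
before the Spark JVM exits, e.g. from a shutdown hook. A minimal sketch,
assuming the client runs in the driver JVM (class name is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SparkIgniteClient {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // join the cluster as a client, not a server

        Ignite client = Ignition.start(cfg);

        // Stop the client gracefully when the JVM shuts down, so the servers
        // see a normal NODE_LEFT instead of waiting out the failure timeout.
        Runtime.getRuntime().addShutdownHook(new Thread(client::close));

        // ... run the Spark job that uses the client ...
    }
}

Note that a shutdown hook only runs on a normal termination (e.g. SIGTERM);
after a kill -9 the servers still have to wait for failureDetectionTimeout.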
Hi,
It's hard to tell what's going wrong from your question.
Please attach full logs and thread dumps from all server nodes.
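If running jstack on the server hosts is inconvenient, a thread dump can
also be captured from inside the JVM with the standard management API; a
minimal sketch (class name is illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // Dump all live threads, including monitor and synchronizer info.
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            System.out.println(
                "\"" + info.getThreadName() + "\" state=" + info.getThreadState());
            for (StackTraceElement el : info.getStackTrace())
                System.out.println("    at " + el);
        }
    }
}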
Thanks!
-Dmitry
I have a 3-node cluster with 20+ clients, running in a Spark context.
Initially it works fine, but an issue appears randomly whenever a new node
(i.e. a client) tries to connect to the cluster: the cluster becomes
inoperative. I got the following logs when it was stuck. If I restart any
Ignite server explicitly, then