It's a very strange NPE. It looks like the cache is closed or destroyed. Is it possible that you start and close/destroy caches dynamically?
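
For reference, dynamically starting and destroying a cache usually looks roughly like the sketch below (this is only a minimal illustration; the cache name "exampleCache" and the configuration are not taken from your setup):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class DynamicCacheExample {
    public static void main(String[] args) {
        // Start a node with a default configuration (illustrative only).
        try (Ignite ignite = Ignition.start()) {
            // Dynamically start a cache: created cluster-wide if it does not exist yet.
            IgniteCache<Integer, String> cache =
                ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("exampleCache"));

            cache.put(1, "value");

            // Dynamically destroy the cache: removes it on all nodes.
            // Messages still in flight that reference this cache (for example,
            // deadlock-detection responses) may then fail to resolve its context.
            cache.destroy();
        }
    }
}

If something like this happens concurrently with running transactions, a message referencing the destroyed cache could be unmarshalled after the cache context is gone.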
On Fri, Mar 24, 2017 at 5:43 PM, bintisepaha <[email protected]> wrote:
> Hey Andrey, thanks a lot for getting back.
>
> Those earlier errors were the result of a bad client connected to the grid.
>
> We have been running clients that leave and join the cluster constantly in
> order to see if we can reproduce this issue. Last night we saw this issue
> again. Here is one of the errors that a sys thread has on a client node that
> initiates a transaction. The client node was not restarted or disconnected;
> it kept working fine.
> We do not restart these clients, but there are some other clients that leave
> and join the cluster.
>
> Do you think this is helpful in locating the cause?
>
> Exception in thread "sys-#41%DataGridServer-Production%"
> java.lang.NullPointerException
>     at org.apache.ignite.internal.processors.cache.transactions.IgniteTxKey.finishUnmarshal(IgniteTxKey.java:92)
>     at org.apache.ignite.internal.processors.cache.transactions.TxLocksResponse.finishUnmarshal(TxLocksResponse.java:190)
>     at org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$DeadlockDetectionListener.unmarshall(IgniteTxManager.java:2427)
>     at org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$DeadlockDetectionListener.onMessage(IgniteTxManager.java:2317)
>     at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
>     at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
>     at org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
>     at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>
> [20:02:18] Topology snapshot [ver=7551, servers=16, clients=53, CPUs=217, heap=740.0GB]
> [20:02:22] Topology snapshot [ver=7552, servers=16, clients=52, CPUs=213, heap=740.0GB]
> [20:02:28] Topology snapshot [ver=7553, servers=16, clients=53, CPUs=217, heap=740.0GB]
> [20:02:36] Topology snapshot [ver=7554, servers=16, clients=54, CPUs=217, heap=740.0GB]
> [20:02:40] Topology snapshot [ver=7555, servers=16, clients=53, CPUs=217, heap=740.0GB]
> [20:02:41] Topology snapshot [ver=7556, servers=16, clients=54, CPUs=217, heap=740.0GB]
> [20:02:48] Topology snapshot [ver=7557, servers=16, clients=53, CPUs=217, heap=740.0GB]
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11433.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
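
The "clients that leave and join the cluster" scenario quoted above typically corresponds to nodes that are started in client mode and later stopped. A minimal sketch of that lifecycle follows; the configuration is illustrative only and not taken from the reporter's environment:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientJoinLeaveExample {
    public static void main(String[] args) {
        // Mark this node as a client, so it joins the topology without owning data.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true);

        // Starting the node makes it join the cluster; a new "Topology snapshot"
        // line with an incremented client count appears on the servers.
        try (Ignite client = Ignition.start(cfg)) {
            // ... perform work against the grid ...
        } // Closing the node makes it leave the topology again.
    }
}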
