With P2P class loading disabled, it works properly.
Thank you. :)
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hello, Valentin.
I'll try to take a look at this bug.
On Thu, 01/02/2018 at 12:35 -0700, vkulichenko wrote:
Well, then you need IGNITE-3653 to be fixed I believe. Unfortunately, it's
not assigned to anyone currently, so apparently no one is working on it. Are
you willing to pick it up and contribute?
-Val
I tried that, and it does work with peer class loading disabled, but I need
peerClassLoading enabled. We have a microservice setup, so lots of
different things are interacting with our Ignite cluster to get data. We
have services making continuous queries and regular SQL queries, so multiple
different apps ar
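For context, a continuous query with a remote filter is exactly the kind of workload where this matters: the filter class has to be deployed to the server nodes, which is what IGNITE-3653 is about. Below is a minimal sketch; the cache name, key/value types, and the filter itself are illustrative assumptions, not taken from the poster's setup:

```java
import java.io.Serializable;

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQuerySketch {
    /**
     * Remote filter: runs on server nodes. With P2P class loading disabled,
     * this class must be on every server node's classpath (the usual
     * workaround while IGNITE-3653 is open).
     */
    public static class NonNullFilter
            implements CacheEntryEventFilter<Integer, String>, Serializable {
        @Override public boolean evaluate(
                CacheEntryEvent<? extends Integer, ? extends String> evt) {
            return evt.getValue() != null;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("demoCache");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // Local listener: invoked on this node for every matching update.
            qry.setLocalListener(evts ->
                evts.forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue())));

            // Remote filter factory: the filter class is instantiated on servers.
            qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(NonNullFilter.class));

            try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
                cache.put(1, "value-1"); // triggers the local listener
            }
        }
    }
}
```

This sketch needs an Ignite server node running (or starts one itself via `Ignition.start()`); the key point is that `NonNullFilter` is executed remotely, so its class must be deployable to the servers one way or another.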
This looks like this issue: https://issues.apache.org/jira/browse/IGNITE-3653
Do you have P2P class loading enabled? If yes, can you try to disable it?
-Val
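For reference, peer class loading is a flag on IgniteConfiguration. A minimal sketch of toggling it programmatically (the try-with-resources setup is illustrative; the same property is `peerClassLoadingEnabled` in Spring XML configs):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NoP2PExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Disable peer class loading. Classes used by remote filters,
        // compute tasks, etc. must then be present on every node's
        // classpath instead of being shipped over the wire.
        cfg.setPeerClassLoadingEnabled(false);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Node runs with P2P class loading off.
        }
    }
}
```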
I am also experiencing this issue. I'm running Ignite in a Kubernetes cluster
and I am trying to do a rolling update, so I have 2 Ignite nodes running and
I am using Kubernetes' rolling update API in a Deployment. E.g., I am running an
application that starts up the 2 nodes. The nodes form a cluster and I then bui
Looking at the source code of master, I couldn't see how this NPE can
happen. Please upgrade to 2.3.0 and let us know if you still observe the bug.
—
Denis
> On Nov 12, 2017, at 4:27 AM, dark wrote:
Another Ignite node logs here.
Nodes are currently under GC for less than a second.
[19:23:31,416][ERROR][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi]
TcpDiscoverySpi's message worker thread failed abnormally. Stopping the node
in order to prevent cluster wide instability.
java.lang.NullPointe
Hi team,
I have a problem with our Ignite cluster.
Nodes die roughly every 10 hours, leaving the logs below. Also, when the
cluster is configured, only one node is used at a high rate, which may
have some influence. The log from when the issue occurs is shown below.
[08:11:15,903][ERROR][tcp-dis