When connections are checked, many NIO sockets will be created (one socket per node).
So will direct memory grow with the node count?
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Not really. The amount of direct memory needed doesn’t grow with the node count
nor the amount of data you store.
Stan
From: wangsan
Sent: 12 January 2019 9:30
To: user@ignite.apache.org
Subject: RE: Failed to read data from remote connection
Yeah, we set a larger MaxDirectMemorySize.
But I am afraid that when the number of nodes grows larger, the direct memory
will grow with the node count.
“OOME: Direct buffer memory” means that MaxDirectMemorySize is too small.
Set a larger MaxDirectMemorySize value.
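For reference, the cap is set with a JVM option at node startup. A minimal sketch, assuming the node is launched via ignite.sh (which reads extra flags from the JVM_OPTS environment variable); the 2g/4g values are illustrative assumptions, not recommendations:

```shell
# Illustrative values only: size -XX:MaxDirectMemorySize for your workload.
# ignite.sh picks up extra JVM flags from the JVM_OPTS environment variable.
export JVM_OPTS="-Xmx4g -XX:MaxDirectMemorySize=2g"
./bin/ignite.sh config/default-config.xml
```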
Stan
From: wangsan
Sent: 18 December 2018 5:08
To: user@ignite.apache.org
Subject: Re: Failed to read data from remote connection
Now the cluster has 100+ nodes. When the 'Start check connection process'
happens,
some nodes will throw an OOM for direct buffer memory (Java NIO).
When connections are checked, many NIO sockets will be created; is that when the OOM happens?
How can the OOM be fixed, other than a larger Xmx?
Thanks.
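For context on why the error says "Direct buffer memory" rather than a heap OOM: NIO I/O goes through direct buffers, which live outside the Java heap and are capped by -XX:MaxDirectMemorySize, not by -Xmx. A minimal sketch (class name and buffer size are illustrative):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Direct buffers are allocated outside the heap; exhausting the cap
        // throws java.lang.OutOfMemoryError: Direct buffer memory,
        // no matter how large -Xmx is.
        ByteBuffer buf = ByteBuffer.allocateDirect(32 * 1024);
        System.out.println(buf.isDirect());  // prints "true"
        System.out.println(buf.capacity());  // prints "32768"
    }
}
```

This is why raising Xmx alone does not help here, while raising MaxDirectMemorySize does.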
Do you shut down the C++ node properly prior to killing the process?
Yeah, the C++ node was killed with kill -9, not SIGHUP. That was a wrong
operation, and I will use a plain kill instead.
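The reason kill -9 matters: SIGKILL cannot be caught, so the JVM's shutdown hooks (which a node can use to leave the topology cleanly) never run, and peers only notice the dead node through broken connections. A minimal sketch of the JVM mechanism, not Ignite's actual shutdown hook:

```java
public class ShutdownHookDemo {
    public static void main(String[] args) {
        // The hook runs on normal exit and on SIGTERM (plain `kill`),
        // but never on SIGKILL (`kill -9`).
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> System.out.println("cleanup: leaving cluster")));
        System.out.println("node running");
    }
}
```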
Do these exceptions impact the cluster's functionality anyhow?
I am not sure about the exceptions; my cluster will crash with the OOM.
Do you shut down the C++ node properly prior to killing the process?
Do these exceptions impact the cluster's functionality anyhow?
Best Regards,
Igor
On Wed, Nov 28, 2018 at 8:53 AM wangsan wrote:
As I restart the cpp client many times concurrently, maybe the zk cluster (Ignite)
has some node path that has been closed.
From the cpp client logs,
I can see zkdiscovery watching 44 first, but the node has been closed.
I have a cluster with ZooKeeper discovery, e.g. Java server node s1, Java client
node jc1, and cpp client node cpp1.
Sometimes when cpp1 restarts, s1 and jc1 will throw this exception many
times:
Failed to process selector key [ses=GridSelectorNioSessionImpl
And cpp1 will have many messages
Can you explain your case in more detail? I don't quite
understand what the problem is.
Best Regards,
Igor
On Tue, Nov 27, 2018 at 1:27 PM wangsan wrote:
> When the client (cpp node) restarts multiple times,
> the server and other clients will throw this exception
>
> ERROR