Hi,
We have a cluster of Ignite 2.8.1 server nodes and have recently started
looking at the individual cache metric for primary keys:
org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl.OffHeapPrimaryEntriesCount
In our configuration we have a replicated cache with 2 backups.
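For reference, a minimal sketch of that kind of setup and of reading the
counter programmatically; the cache name and key/value types are
placeholders, and note that cache statistics must be enabled for the metric
to be meaningful:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class PrimaryEntriesExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<Integer, String>("myCache") // placeholder name
                    .setCacheMode(CacheMode.REPLICATED)
                    .setStatisticsEnabled(true); // required for metrics

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);
            cache.put(1, "a");

            // Same counter that backs the OffHeapPrimaryEntriesCount MBean attribute.
            System.out.println(cache.localMetrics().getOffHeapPrimaryEntriesCount());
        }
    }
}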
Hello!
Indeed it seems like a bug. I have filed a ticket:
https://issues.apache.org/jira/browse/IGNITE-13961
Regards,
--
Ilya Kasnacheev
Wed, Jan 6, 2021 at 14:18, siva:
> Hi,
> please find below the cache config and model class details.
>
> cache config:
>
Hello!
Do you also have logs from other server nodes?
Here, I don't see anything particularly suspicious. Maybe there indeed were
some short-term network problems?
Regards,
--
Ilya Kasnacheev
Wed, Jan 6, 2021 at 15:04, BEELA GAYATRI:
> Dear Team,
>
>
>
> We are running 16 Ignite nodes,
Hello!
I think it's a sensible explanation.
Regards,
--
Ilya Kasnacheev
Wed, Jan 6, 2021 at 14:32, Raymond Wilson:
> I checked our code that creates the primary data region, and it does set
> the minimum and maximum to 4 GB, meaning there will be roughly 1,000,000
> pages in that region.
>
> The
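For reference: with Ignite's default 4 KB page size, a 4 GB region works out
to 4 * 1024^3 / 4096 = 1,048,576 pages, i.e. the roughly 1,000,000 mentioned
above. A minimal configuration sketch (the region name is illustrative):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

long fourGb = 4L * 1024 * 1024 * 1024;

DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("primary")     // placeholder region name
    .setInitialSize(fourGb) // minimum size
    .setMaxSize(fourGb);    // maximum size

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(
        new DataStorageConfiguration()
            .setDataRegionConfigurations(region));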
Hello!
This will happen when this file is deleted while the instance is running.
I'm not sure who deleted it. Maybe you tried to start another node with the
same consistent ID in the background?
You should avoid calling setActive() every time since it will lead to data
loss.
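For illustration, a minimal sketch of pinning a unique consistent ID per
node, so a stray second process cannot come up under the same identity (the
ID value is a placeholder):

import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration()
    .setConsistentId("node-1"); // must be unique per node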
Regards,
--
Ilya
This is the full set of logs, in case it helps:
[10:10:56,860][WARNING][main][G] Ignite work directory is not provided,
automatically resolved to: /home/dsudev/ignite-master/work
[10:10:56,873][WARNING][main][G] Consistent ID is not set, it is recommended
to set consistent ID for production clusters
I am also getting the below error in my Ignite logs:
[20:00:50,515][SEVERE][db-checkpoint-thread-#54][] Critical system error
detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler
Hello!
This seems like a valid change; however, there are issues with it: if you
don't have persistence settings files on some nodes, it will cause failure
on these nodes. It may also re-read these files too eagerly in its current
form.
I can totally see how this may be an option to have in the
It shouldn’t cause a crash, but since you don’t need to activate an already
active cluster, maybe it’s not well tested.
Sending the node a TERM signal (press ^C) is a good way to stop a node.
> On 7 Jan 2021, at 09:26, rakshita04 wrote:
>
> Can SetActive() cause the crash?
> Is it okay to
Can SetActive() cause the crash?
Is it okay to terminate the process with kill, or is there some better
way?
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Not that it excuses the crash, but why are you calling activate every time the
node starts? It should be called once, the first time all the nodes are
present. The cluster will auto-activate every time after that.
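As a sketch of that pattern (the configuration path is illustrative), guard
the call so activation only happens when the cluster is not yet active:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

Ignite ignite = Ignition.start("ignite-config.xml");

// First bring-up only; a persistent cluster that has been activated once
// will auto-activate on subsequent restarts.
if (!ignite.cluster().active())
    ignite.cluster().active(true);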
Regards,
Stephen
> On 7 Jan 2021, at 08:56, rakshita04 wrote:
>
> it works,
Issuing a command like "kill process_id" doesn't work?
Regards.
On Thu, Jan 7, 2021 at 4:14 PM rakshita04 wrote:
> Hi Team,
>
> We are using Apache Ignite for our applications running on 2 machines and
> connected over a network.
> We are facing an issue where, if kill is performed on the running
Hi Team,
We are using Apache Ignite for our applications running on 2 machines and
connected over a network.
We are facing an issue where, if kill is performed on the running
application, it somehow corrupts the node; the node then never comes up and
keeps rebooting.
Is there a way to handle this?