We have recently been seeing the following error:
SEVERE: Failed to serialize object
[typeName=com.bloomberg.aim.wingman.cachemgr.Ts3DataCache$Ts3CalcrtKey]
class org.apache.ignite.binary.BinaryObjectException: Failed to write field
[name=calcrtType]
at
We are wondering what could cause this error in the first
place, and how to handle it when it happens. Is there a way to detect and log
this error while avoiding crashing the process?
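One option for the last question, sketched below on the assumption that the node is
started programmatically: register a custom FailureHandler via
IgniteConfiguration.setFailureHandler so critical failures are logged rather than handled
by the default StopNodeOrHaltFailureHandler. Whether it is actually safe to keep a node
running after a corruption-type failure is a separate question; the class below is
illustrative only.

import org.apache.ignite.Ignite;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.FailureContext;
import org.apache.ignite.failure.FailureHandler;

public class LoggingFailureHandler implements FailureHandler {
    // Log the critical failure; returning false tells Ignite not to stop/halt the node.
    @Override public boolean onFailure(Ignite ignite, FailureContext failureCtx) {
        System.err.println("Critical failure: type=" + failureCtx.type()
            + ", error=" + failureCtx.error());
        return false;
    }

    static IgniteConfiguration configure(IgniteConfiguration cfg) {
        // Replaces the default StopNodeOrHaltFailureHandler.
        return cfg.setFailureHandler(new LoggingFailureHandler());
    }
}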
From: user@ignite.apache.org At: 02/19/21 14:18:44 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX ) , user@ignite.apache.org
Subject: Re: Corrupted B+ Tree
Actually, we were using 2.7.5 when the data was corrupted. We upgraded to 2.9.1
without clearing the corrupted data and got the error that was posted in the
first message.
From: user@ignite.apache.org At: 02/19/21 14:18:44 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX ) , user@ignite.apache.org
2.9.1
From: user@ignite.apache.org At: 02/19/21 14:18:44 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX ) , user@ignite.apache.org
Subject: Re: Corrupted B+ Tree Causing Repeated Crashes
Hello! What version of Apache Ignite are you using?
19.02.2021, 22:07, "Mitchell Rathbun (BLOOMBERG/ 731 LEX )":
We are encountering the following error repeatedly, which causes our node to
crash:
2021-02-19 13:30:38,175 ERROR STDIO [pool-32-thread-5] {} Feb 19, 2021 1:30:38
PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Critical system error detected. Will be handled accordingly to
configured
... an issue then file JIRA.
BR,
Andrei
9/23/2020 7:36 PM, Mitchell Rathbun (BLOOMBERG/ 731 LEX) wrote:
Here is the exception:
Sep 22, 2020 7:58:22 PM java.util.logging.LogManager$RootLogger log
SEVERE: Critical system error detected. Will be handled accordingly to
configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler
We currently have a cache that is structured with a key of record type and a
value that is a map from field id to field. So to update this cache, which has
persistence enabled, we need to atomically load the value map for a key, add to
that map, and write the map back to the cache. This can be
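A minimal sketch of that atomic read-modify-write using cache.invoke() with an entry
processor; the String/Map<Integer, String> types are stand-ins for our record key and
field types, and the class names are illustrative.

import java.util.HashMap;
import java.util.Map;
import javax.cache.processor.EntryProcessorException;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

public class FieldMapUpdater {
    // Adds one field to the value map of a key. invoke() runs the processor
    // atomically for that key, so the load-modify-store of the map cannot
    // interleave with other writers.
    static class AddFieldProcessor implements CacheEntryProcessor<String, Map<Integer, String>, Void> {
        @Override public Void process(MutableEntry<String, Map<Integer, String>> entry, Object... args)
            throws EntryProcessorException {
            int fieldId = (Integer) args[0];
            String fieldValue = (String) args[1];

            // Copy-on-write so the stored map is replaced in a single step.
            Map<Integer, String> fields = new HashMap<>();
            if (entry.getValue() != null)
                fields.putAll(entry.getValue());

            fields.put(fieldId, fieldValue);
            entry.setValue(fields); // written back (and persisted) under the entry lock

            return null;
        }
    }

    static void addField(IgniteCache<String, Map<Integer, String>> cache,
                         String key, int fieldId, String fieldValue) {
        cache.invoke(key, new AddFieldProcessor(), fieldId, fieldValue);
    }
}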
In the documentation, Ignite is described as a pure in-memory solution when
persistence is disabled. However, even with persistence disabled, I am noticing
that the Ignite work directory must be set to an existing directory, and that
this directory has the 'marshaller/' directory created with
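For reference, a minimal sketch of pinning the work directory explicitly for a purely
in-memory node; the path is illustrative.

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class InMemoryNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            // Even with persistence disabled, Ignite still keeps housekeeping
            // data such as marshaller/ mappings under the work directory, so
            // point it at a disposable location (path is illustrative).
            .setWorkDirectory("/tmp/ignite-work");

        Ignition.start(cfg);
    }
}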
We have a bunch of tests that are run for our caching layer. In between each
test, we want to clear all of the data in the cache. As part of some of the
tests, there are background writes that occur to the cache. We have some sleeps
to account for this, but that is error prone and doesn't
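For what it's worth, a sketch of the teardown we would like to converge on, assuming the
background writers can be paused or drained first instead of relying on sleeps:

import org.apache.ignite.Ignite;

public class TestCacheReset {
    // Clears every cache on the node between tests. This only works reliably
    // once background writes have been stopped or awaited, which is the part
    // the sleeps are currently standing in for.
    static void clearAllCaches(Ignite ignite) {
        for (String name : ignite.cacheNames())
            ignite.cache(name).clear();
    }
}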
We have seen the following happen a couple of times recently during periods of
high load/gc pauses in our system:
2020-03-02 11:38:56,803 ERROR STDIO
[tcp-disco-msg-worker-#2%ignite_wingman_2931%] {} Mar 02, 2020 11:38:56 AM
org.apache.ignite.logger.java.JavaLogger error
SEVERE: Blocked
I have gotten this to work, with the IGNITE appender being used for Ignite
logs. However, I am having a lot of issues getting ActLogAppender, a custom
appender of ours, to work with Ignite. If I set the gridLogger to use
Log4J2Logger at all, I continually see:
2020-02-14 13:25:19,199
I am hoping to use a separate appender for Ignite logs in my application. In my
configuration file, I have:
IGNITE and MAIN are both RollingRandomAccessFile appenders pointing to
different files. In my java code I have:
File logConfigFile = new
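The snippet above is cut off; the rest follows the usual pattern, roughly as below. The
config path is illustrative, and the ignite-log4j2 module must be on the classpath.

import java.io.File;
import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.logger.log4j2.Log4J2Logger;

public class IgniteLoggingSetup {
    public static void main(String[] args) throws IgniteCheckedException {
        // log4j2 config file defining the IGNITE and MAIN appenders (illustrative path).
        File logConfigFile = new File("config/log4j2-ignite.xml");

        IgniteConfiguration cfg = new IgniteConfiguration();
        // Route all Ignite output through log4j2 using that config file.
        cfg.setGridLogger(new Log4J2Logger(logConfigFile));

        Ignition.start(cfg);
    }
}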
I also have seen a similar error:
Caused by: org.apache.ignite.spi.IgniteSpiException: BaselineTopology of
joining node (b1a557be-4a89-42d8-9837-ece339088cc4) is not compatible with
BaselineTopology in the cluster.
Joining node BlT id (4) is greater than cluster BlT id (0). New
A couple more questions after reading the explanation:
-You mentioned each node in the BLT has a consistent id. How is this calculated?
-The branching point hash is a sum of hashcodes of consistent ids of nodes
currently in the BaselineTopology. It is also mentioned that there is a BLT id.
How
We have recently encountered the following:
Caused by: org.apache.ignite.spi.IgniteSpiException: BaselineTopology of
joining node (404d8988-6c2d-4612-ab17-fde635b9da8f) is not compatible with
BaselineTopology in the cluster.
Branching history of cluster BlT ([-205608975, 383765073, 1797002251,
I am hoping to periodically print some Ignite metrics to Grafana. I tried
printing various DataRegionMetrics and DataStorageMetrics method results and
they were not what I expected. For the default DataRegionMetrics (the only one
we are using), getName, getOffHeapUsedSize, and
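For reference, the sketch below is how I would expect region metrics to be read. Note
that the per-region values stay at zero unless DataRegionConfiguration.setMetricsEnabled(true)
is set for the region, which may explain unexpected numbers; the particular getters used
here are illustrative.

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

public class MetricsPrinter {
    // Prints a couple of per-region metrics; values remain zero unless
    // metrics are enabled on the DataRegionConfiguration.
    static void printRegionMetrics(Ignite ignite) {
        for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
            System.out.printf("region=%s allocatedPages=%d fillFactor=%.2f%n",
                m.getName(), m.getTotalAllocatedPages(), m.getPagesFillFactor());
        }
    }
}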
From: user@ignite.apache.org At: 12/30/19 17:58:04 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX )
Cc: user@ignite.apache.org
Subject: Re: Isolating IgniteCache instances across JVMs on same machine by id
Mitchell,
Can you share logs from all nodes?
Evgenii
Mon, Dec 30, 2019 at 14:42, Mitchell Rathbun (BLOOMBERG/ 731 LEX
I will apply the changes that you mentioned and see if this issue still occurs. If
not, I will attach logs.
From: user@ignite.apache.org At: 12/30/19 17:58:04 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX )
Cc: user@ignite.apache.org
Subject: Re: Isolating IgniteCache instances across JVMs on same
no shared
directory for the cluster?
From: e.zhuravlev...@gmail.com At: 12/30/19 17:33:55 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX )
Cc: user@ignite.apache.org
Subject: Re: Isolating IgniteCache instances across JVMs on same machine by id
Mitchell,
-For your first response, I just want to
t for each fine?
From: e.zhuravlev...@gmail.com At: 12/30/19 15:25:40 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX ) , user@ignite.apache.org
Subject: Re: Isolating IgniteCache instances across JVMs on same machine by id
-From your first response, it seems like the db id should be included in the
time.
From: user@ignite.apache.org At: 12/30/19 14:16:33 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX ) , user@ignite.apache.org
Subject: Re: Isolating IgniteCache instances across JVMs on same machine by id
Hi,
1. If the cache mode is local, why does IgniteCluster even come into play? All
that we
Any thoughts on this?
From: user@ignite.apache.org At: 12/18/19 19:55:47 To: user@ignite.apache.org
Cc: Anant Narayan (BLOOMBERG/ 731 LEX ) , Ranjith Lingamaneni (BLOOMBERG/ 731
LEX )
Subject: Isolating IgniteCache instances across JVMs on same machine by id
We have multiple different
Sorry for the delayed response; I have not been able to reproduce this since.
In general, how does key equality work with Ignite, given that the keys in the
cache must be serialized? Does equals/hashCode even come into play?
From: user@ignite.apache.org At: 12/19/19 04:27:04 To:
We have multiple different database instances for which we are looking into
using Ignite as a local caching layer. Each db has a unique id that we are
using as part of the IgniteInstanceName and for the WorkDirectory path. We are
running IgniteCache in Local mode with persistence enabled, as we
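A minimal sketch of what that per-database setup looks like in code, assuming one node per
database id; the instance name, paths, and cache name are illustrative.

import java.nio.file.Paths;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PerDbIgnite {
    // Starts an isolated node for one database id.
    static Ignite startForDb(String dbId) {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName("cache-" + dbId)
            .setWorkDirectory(Paths.get("/var/ignite", dbId).toString())
            .setDataStorageConfiguration(storage)
            .setCacheConfiguration(new CacheConfiguration<>("fields").setCacheMode(CacheMode.LOCAL));

        Ignite ignite = Ignition.start(cfg);
        // With persistence enabled the node starts inactive and has to be activated.
        ignite.cluster().active(true);
        return ignite;
    }
}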
I am attempting to use a custom Java class as a key in an IgniteCache instance
with persistence enabled. I ran into some issues, so I made this class very
simple for testing the problem. This class has an int member that is set to 1
in the constructor, and then has equals overridden to return
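For reference, a stripped-down version of the kind of key class being described (the exact
equals() behavior is cut off above, so this is just a plausible shape). As far as I
understand, Ignite compares keys by their serialized binary form rather than by calling
the class's equals()/hashCode().

import java.io.Serializable;

// Minimal test key: one int member set to 1 in the constructor,
// with equals()/hashCode() overridden (illustrative implementation).
public class SimpleKey implements Serializable {
    private final int id;

    public SimpleKey() {
        this.id = 1;
    }

    @Override public boolean equals(Object o) {
        return o instanceof SimpleKey && ((SimpleKey) o).id == id;
    }

    @Override public int hashCode() {
        return id;
    }
}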
We recently ran into an issue where we hang when attempting to iterate
through the results from IgniteCache.query. The basic code is as follows:
QueryCursor<Cache.Entry<K, V>> queryResults =
    ts3KeysMetadataCache.query(new ScanQuery<>());
queryResults.forEach(e -> {
    // Do stuff
});
We are running in LOCAL cache mode with persistence enabled. We were able to
trigger the crash with large putAll calls and low off-heap memory allocated
(persistence is enabled). It was an IgniteOutOfMemory exception that occurred. I
can try and get a thread dump and add it to this e-mail chain.
in the same
data region. If I use Ignite's DataStreamer API instead of putAll, I get much
better performance and no OOM exception. Any insight into why this might be
would be appreciated.
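For comparison, the DataStreamer pattern we switched to looks roughly like this; the
types and cache name are placeholders.

import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class BulkLoad {
    // Loads a batch through IgniteDataStreamer instead of one large putAll().
    static void streamBatch(Ignite ignite, String cacheName, Map<String, String> batch) {
        try (IgniteDataStreamer<String, String> streamer = ignite.dataStreamer(cacheName)) {
            // Overwrite existing keys to match putAll() semantics (off by default).
            streamer.allowOverwrite(true);

            // Entries are buffered and written in bounded batches, which keeps
            // page-memory pressure lower than one huge putAll() on a small region.
            for (Map.Entry<String, String> e : batch.entrySet())
                streamer.addData(e.getKey(), e.getValue());
        } // close() flushes any remaining buffered entries
    }
}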
From: user@ignite.apache.org At: 12/10/19 11:24:35 To: Mitchell Rathbun
(BLOOMBERG/ 731 LEX ) , user
I am a little confused about the exact difference between clear and removeAll in
the IgniteCache API. In the documentation for removeAll:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#removeAll--,
it mentions this being an expensive operation and that clear
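My current reading of the javadoc, expressed as a small sketch (corrections welcome):

import org.apache.ignite.IgniteCache;

public class ClearVsRemoveAll {
    static void wipe(IgniteCache<String, String> cache) {
        // clear(): empties the cache without notifying listeners or any
        // configured CacheWriter, so it is the cheaper way to just drop data.
        cache.clear();

        // removeAll(): removes entries individually, notifying listeners and
        // the CacheWriter, which is why the javadoc flags it as expensive.
        cache.removeAll();
    }
}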
For the requested full Ignite log, where would this be found if we are running
in LOCAL mode? We are not explicitly running a separate Ignite node, and our
WorkDirectory does not seem to have any logs
From: user@ignite.apache.org At: 12/03/19 19:00:18 To: user@ignite.apache.org
Subject: Re:
For our configuration properties, our DataRegion initialSize and maxSize were
set to 11 MB and persistence was enabled. For DataStorage, our pageSize was set
to 8192 instead of 4096. For the cache, write-behind is disabled, on-heap cache
is disabled, and the atomicity mode is ATOMIC
From:
We are running Ignite in LOCAL mode currently and have persistence enabled. For
this specific test, the off heap memory was set to as low as possible (~10 MB)
to test on-disk performance. We have multiple caches sharing the same
DataRegion. For the test, we basically write synchronously to
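Expressed in code, the configuration described in this thread is roughly the following;
sizes and names are illustrative rather than our exact values.

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TestConfig {
    static IgniteConfiguration lowMemoryPersistentConfig() {
        // Default data region: ~11 MB with persistence enabled.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setInitialSize(11L * 1024 * 1024)
            .setMaxSize(11L * 1024 * 1024)
            .setPersistenceEnabled(true);

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region)
            .setPageSize(8192); // 8 KiB pages instead of 4096

        CacheConfiguration<String, String> cache = new CacheConfiguration<String, String>("testCache")
            .setCacheMode(CacheMode.LOCAL)
            .setAtomicityMode(CacheAtomicityMode.ATOMIC)
            .setWriteBehindEnabled(false)
            .setOnheapCacheEnabled(false);

        return new IgniteConfiguration()
            .setDataStorageConfiguration(storage)
            .setCacheConfiguration(cache);
    }
}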