I use persistent storage, though not that many caches, and so far no problems.
I have 20 caches with a total of around 800,000 entries, each entry small
(<100 bytes). While testing I update around 5,000-10,000 entries per second.
Restarts do take some time, but never more than 2 minutes or so. So far I have
only tested with one node, and I have not run it for that long yet, around
5 days at a time so far.
Oh, and you are not the last one by the way, I do all my configuration of
Ignite in Java ;)
Mikael
On 2018-10-02 at 10:50, Hamed Zahedifar wrote:
Hi Gianluca
Maybe it's caused by the JVM -XX:+AlwaysPreTouch switch. Try the Red Hat
workaround for this problem; I hope it will start faster:
Java process takes a long time with -XX:+AlwaysPreTouch - Red Hat
Customer Portal <https://access.redhat.com/solutions/2685771>
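As a quick test you could also just drop the flag and see if startup time
improves; a sketch based on the command line you posted (the trailing "..."
stands for your application entry point, which I do not know):

    # Same JVM options as before, minus -XX:+AlwaysPreTouch,
    # to compare startup time with and without the pre-touch.
    java -server -Xms24g -Xmx24g -XX:MaxMetaspaceSize=1g \
         -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC \
         -XX:+AggressiveOpts ...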
------------
Hamed
On Tuesday, October 2, 2018, 12:01:38 PM GMT+3:30, Gianluca Bonetti
<gianluca.bone...@gmail.com> wrote:
Hello everyone
This is my first question to the mailing list, which I have been following
for some time to pick up hints about using Ignite.
Until now I have used Ignite in the development of other software, and it
always rocked and made the difference, hence I literally love it :)
Now I am facing trouble restarting an Apache Ignite instance in a new
product we are developing and testing.
Previously I developed with Apache Ignite using a custom loader from a
database, but this time we wanted to go with a "cache centric" approach and
use only Ignite Persistence, as there is no need to integrate with databases
or JDBC tools.
So the Ignite instance is the main and only storage.
The software is a monitoring platform which receives small chunks of data
(more or less 500 bytes each) and stores them in different caches, depending
on the source address.
The number of incoming data packets is really low as we are only testing,
let's say around 100 packets per minute.
The software runs in a testing environment, so only one server is deployed
at the moment.
The software can run for weeks with no problem; the caches get bigger and
bigger and everything runs fine and fast.
But if we restart the software, it takes ages to come back up, and most of
the time it never completes the initial restart of Ignite.
So we have to delete the persistence storage files to be able to start again.
As we are only testing, we can still live with it.
We get just this message in the logs: "Ignite node stopped in the middle
of checkpoint. Will restore memory state and finish checkpoint on node
start."
The client instances connecting to Ignite get this log message:
"org.apache.ignite.logger.java.JavaLogger.info Join cluster while
cluster state transition is in progress, waiting when transition finish."
But it never finishes.
Speaking of sizes: when running tests with no interruption, the cache grew
to 50 GB with no degradation in performance and no data loss.
The restart issues appear as soon as the cache grows to ~4 GB.
The other software I developed using Ignite, with the custom database
loader, never had problems with large caches in memory.
The testing server is a dedicated Linux machine with an 8-core Xeon
processor, 64 GB RAM, and SATA disks on software mdraid.
The JVM is OpenJDK 8, started with "-server -Xms24g -Xmx24g
-XX:MaxMetaspaceSize=1g -XX:+AlwaysPreTouch -XX:+UseG1GC
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AggressiveOpts"
For starting the Ignite instance, I am one (the last?) who prefers Java
code to XML files.
I recently switched off peer class loading and added the
BinaryTypeConfiguration, which I previously hadn't specified, but it
didn't help.
public static final Ignite newInstance(List<String> remotes) {
    DataStorageConfiguration storage = new DataStorageConfiguration();
    DataRegionConfiguration region = storage.getDefaultDataRegionConfiguration();
    BinaryConfiguration binary = new BinaryConfiguration();
    TcpDiscoveryVmIpFinder finder = new TcpDiscoveryVmIpFinder();
    TcpDiscoverySpi discovery = new TcpDiscoverySpi();
    IgniteConfiguration config = new IgniteConfiguration();

    // Native persistence: data, WAL and WAL archive on separate paths.
    storage.setStoragePath("/home/ignite/data");
    storage.setWalPath("/home/ignite/wal");
    storage.setWalArchivePath("/home/ignite/archive");

    // Default data region: persistent, fixed at 16 GB.
    region.setPersistenceEnabled(true);
    region.setInitialSize(16L * 1024 * 1024 * 1024);
    region.setMaxSize(16L * 1024 * 1024 * 1024);

    // Binary marshaller settings for the Datum value type.
    binary.setCompactFooter(false);
    binary.setTypeConfigurations(Arrays.asList(
        new BinaryTypeConfiguration(Datum.class.getCanonicalName())));

    // Static IP discovery over the supplied remote addresses.
    finder.setAddresses(remotes);
    discovery.setIpFinder(finder);

    config.setDataStorageConfiguration(storage);
    config.setBinaryConfiguration(binary);
    config.setPeerClassLoadingEnabled(false);
    config.setDiscoverySpi(discovery);
    config.setClientMode(false);

    Ignite ignite = Ignition.start(config);
    // Persistence-enabled clusters start inactive, so activate explicitly.
    ignite.cluster().active(true);
    return ignite;
}
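In case it matters, these are the checkpoint/WAL knobs I was thinking of
experimenting with next, applied to the same "storage" object as above; just
a sketch, the values are untested guesses (WALMode is
org.apache.ignite.configuration.WALMode):

    // Untested guesses: checkpoint more often, so less data has to be
    // restored from the WAL after an unclean shutdown.
    storage.setCheckpointFrequency(60_000L);     // default is 180000 ms
    storage.setCheckpointThreads(4);             // parallel checkpoint writer threads
    storage.setWalMode(WALMode.LOG_ONLY);        // default WAL mode in recent Ignite 2.x
    storage.setWalSegmentSize(64 * 1024 * 1024); // 64 MB WAL segments (the default)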
Datum is a small POJO class with nearly 100 fields; each instance should
amount to less than 500 bytes of data.
Then there are nearly 200 caches in use, all containing Datum objects
(at least for now).
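For reference, each cache is created more or less like this (simplified;
the naming scheme here is only illustrative, not our actual code):

    // Simplified sketch: one cache per source address, holding Datum values.
    CacheConfiguration<String, Datum> cacheCfg =
        new CacheConfiguration<>("datum-" + sourceAddress);
    cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
    IgniteCache<String, Datum> cache = ignite.getOrCreateCache(cacheCfg);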
I am quite sure I am missing something when starting the instance, but
cannot understand what.
Is there a way to inspect the progress of the checkpoint at startup?
I cannot do anything through Ignite Visor, as it will not connect until the
cluster activation finishes.
If you have any suggestions, let me know.
Thank you very much!
Best regards
Gianluca