I have also seen the same issue in an Ignite 2.9.0 cluster with JDK 11.
We tried disabling the WAL archive, but the same issue still occurs.
We did not see this issue with the older 2.7.6 release.
Is this a known issue in 2.9 with respect to the WAL (FSYNC mode)? Does any
configuration change need to be made with respect to the WAL?
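For reference, this is roughly how we configured it. A minimal sketch with
placeholder paths, assuming FSYNC WAL mode and the WAL archive disabled by
pointing walArchivePath at walPath (which is how the Ignite docs describe
turning archiving off):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.WALMode;

    public class WalConfigSketch {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();

            // FSYNC WAL mode, as in our setup.
            storageCfg.setWalMode(WALMode.FSYNC);

            // Disabling WAL archiving: pointing the archive path at the
            // WAL path turns archiving off. The path is a placeholder,
            // not our actual mount.
            storageCfg.setWalPath("/ignite/wal");
            storageCfg.setWalArchivePath("/ignite/wal");

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);

            Ignition.start(cfg);
        }
    }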
With 100 GB of data in the persistence store this is easily reproducible; I
am not sure what is causing it.
Could anyone please let me know how Visor gets the number of records from a
cache, and what could be blocking WAL cleanup? I am attaching a WAL usage
graph; without connecting to Visor, the WAL is cleaned up as expected.
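For comparison, this is how I count entries through the public API. A minimal
sketch; the cache name is a placeholder, and I am only assuming Visor does
something equivalent internally:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CachePeekMode;

    public class CacheCountSketch {
        public static void main(String[] args) {
            // Connect as a client node; "example-cache" is a placeholder.
            Ignition.setClientMode(true);
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Object, Object> cache = ignite.cache("example-cache");

                // Number of primary entries across the cluster.
                int primary = cache.size(CachePeekMode.PRIMARY);

                // Primary plus backup copies.
                int all = cache.size(CachePeekMode.ALL);

                System.out.printf("primary=%d, all=%d%n", primary, all);
            }
        }
    }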
I have noticed strange behaviour when I connect to the Visor shell after
ingesting a large amount of data into the Ignite cluster.
Below is the scenario:
I have deployed a 5-node Ignite cluster on Kubernetes with persistence
enabled (version 2.9.0 on Java 11).
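For context, the persistence side of the configuration looks roughly like
this. A sketch with a placeholder storage path, not our full production
config:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cluster.ClusterState;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class PersistenceSketch {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();

            // Enable native persistence on the default data region.
            storageCfg.setDefaultDataRegionConfiguration(
                new DataRegionConfiguration().setPersistenceEnabled(true));

            // Placeholder path; on K8S this would be a mounted volume.
            storageCfg.setStoragePath("/ignite/persistence");

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);

            Ignite ignite = Ignition.start(cfg);

            // With persistence enabled the cluster starts inactive and
            // must be activated before use.
            ignite.cluster().state(ClusterState.ACTIVE);
        }
    }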
I started ingesting data into 3 tables, and after ingesting