Hi Val,

Thank you for your response.
You can find the Maven project here: https://github.com/Soroka21/ign-loader-spark

The app loads any parquet file into the cache (let me know if you need a sample file). I've tried running it on 1.2 million records with the same symptoms: it looks like my app is working with only a portion of the cache instead of the whole cache. Maybe something is wrong with the way I'm putting data into the cache. You can see the converter from a Spark Row to an Ignite BinaryObject in RowToIgniteBinaryObjectConverter.java, which in turn is consumed in my SparkLoader.load(). Sorry for the not very clean code - it was done as part of an early R&D POC in my project.

My Ignite environment is based on Ignite 2.2.0, running on 5 nodes with 1GB of memory per instance. The config is almost unchanged - you can find it in the conf directory.
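For reference, here is a minimal sketch of the Row-to-BinaryObject conversion and streaming path I'm describing. It is not the exact code from the repo: the cache name "parquetCache", the key column "id", the binary type name "ParquetRecord", and the client config path are placeholders.

import java.util.Iterator;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructField;

public class ParquetToIgniteSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("ignite-parquet-loader")
            .master("local[*]")
            .getOrCreate();

        // Placeholder input path - adjust to your setup.
        Dataset<Row> df = spark.read().parquet("/path/to/data.parquet");

        // Assumes conf/ignite-client.xml is a client-mode config pointing at the cluster.
        try (Ignite ignite = Ignition.start("conf/ignite-client.xml");
             IgniteDataStreamer<Object, BinaryObject> streamer =
                 ignite.dataStreamer("parquetCache")) {

            streamer.keepBinary(true);

            // Stream rows into the cache; "id" is assumed to be the key column.
            Iterator<Row> it = df.toLocalIterator();
            while (it.hasNext()) {
                Row row = it.next();
                BinaryObject val = toBinary(ignite, "ParquetRecord", row);
                streamer.addData(row.getAs("id"), val);
            }
        }

        spark.stop();
    }

    /** Copies every Spark column into a field of an Ignite binary object. */
    private static BinaryObject toBinary(Ignite ignite, String typeName, Row row) {
        BinaryObjectBuilder builder = ignite.binary().builder(typeName);
        StructField[] fields = row.schema().fields();
        for (int i = 0; i < fields.length; i++)
            builder.setField(fields[i].name(), row.get(i));
        return builder.build();
    }
}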