Hi,

Thanks for the answer. Unfortunately, we cannot remove Cassandra, as it is used elsewhere as well. We will have to write directly to Ignite and sync with Cassandra.
We had a few other issues while getting data from Spark:

1) cacherdd.sql("select * from table") gives me heap memory (GC) issues, while getting the data using spark.read.format()... works fine. Why is that?

2) In my persistence configuration, I have IndexedTypes with key and value POJO classes. The key class corresponds to the key in Cassandra, with partition and clustering keys defined. When querying with SQL (select * from value_class), I get all the columns of the table. However, when querying using spark.read.format(...).option(OPTION_TABLE, value_class).load(), I only get the columns stored in the value class. How do I fetch all the columns using the DataFrame API?

Thanks,
Shrey

On Fri, 28 Sep 2018, 08:43 Alexey Kuznetsov, <akuznet...@apache.org> wrote:

> Hi, Shrey!
>
> Just as an idea - Ignite now has persistence (see
> https://apacheignite.readme.io/docs/distributed-persistent-store);
> maybe you can completely replace Cassandra with Ignite?
>
> In this case all data will always be up to date, with no need to sync with an external DB.
>
> --
> Alexey Kuznetsov
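[Editor's note: for reference, a minimal sketch of the DataFrame read path described in the question, using the standard Ignite Spark integration constants. The config-file path and the table name "value_class" are placeholders, and this requires a running Ignite cluster, so it is an illustrative sketch rather than a runnable reproduction.]

```scala
import org.apache.spark.sql.SparkSession
import org.apache.ignite.spark.IgniteDataFrameSettings.{FORMAT_IGNITE, OPTION_CONFIG_FILE, OPTION_TABLE}

val spark = SparkSession.builder()
  .appName("ignite-dataframe-read")
  .getOrCreate()

// DataFrame read path from question 2: per the email, this returns only the
// columns of the value POJO class, not the key-class (partition/clustering) columns.
val df = spark.read
  .format(FORMAT_IGNITE)                       // Ignite data source
  .option(OPTION_CONFIG_FILE, "ignite-config.xml") // placeholder path to the Ignite config
  .option(OPTION_TABLE, "value_class")             // placeholder table name
  .load()

df.printSchema() // compare against the columns returned by "select * from value_class"
```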