Hello everyone! I'm new to Ignite and currently assessing Ignite's ability to share Spark RDDs. What I'm trying to test is whether Ignite can cache a Spark RDD and make it available to multiple applications until the cache is invalidated.
Below is a very limited test I used to see what happens when I save an IgniteRDD:

    import java.util.Collections;

    import org.apache.ignite.spark.IgniteContext;
    import org.apache.ignite.spark.IgniteRDD;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SharedRddExample {
        public static void main(String[] args) {
            final SparkConf sparkConf = new SparkConf()
                .setAppName("shared-rdd-example")
                .setMaster("local");

            final JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);

            // Start an Ignite client node embedded in the Spark driver.
            final IgniteContext igniteContext =
                new IgniteContext(sparkContext.sc(), "ignite/client-default-config.xml", true);

            // Get (or create) the shared cache as an IgniteRDD.
            final IgniteRDD igniteRDD = igniteContext.fromCache("hello-world-cache");
            final JavaRDD javaRDD = igniteRDD.toJavaRDD();

            if (javaRDD.isEmpty()) {
                // First run: populate the cache.
                final JavaRDD<String> rdd =
                    sparkContext.parallelize(Collections.singletonList("Hello World"));
                igniteRDD.saveValues(rdd.rdd());
            }
            else {
                // Subsequent runs: read back what was cached earlier.
                javaRDD.collect().forEach(System.out::println);
            }
        }
    }

Ignite configuration for the client node:

    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="clientMode" value="true"/>
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- In distributed environment, replace with actual host IP address. -->
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>

The server node uses the same configuration, but with clientMode set to false.

The code above creates an IgniteRDD with 1024 partitions, which results in Spark creating 1024 tasks just to execute *javaRDD.isEmpty()*. My questions are: How do I make this faster? Why does Ignite default to 1024 partitions for an IgniteRDD? Do I need a special Ignite configuration?

I'm using Ignite 1.5.0-final.

Thanks,
Dima
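
P.S. If I read the defaults right, the 1024 comes from the default affinity function (RendezvousAffinityFunction), since IgniteRDD partitions map to cache partitions. As an experiment I was going to pre-declare the cache with fewer partitions in the server configuration — an untested sketch, where 128 is just an arbitrary count I picked:

    <property name="cacheConfiguration">
        <list>
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="name" value="hello-world-cache"/>
                <!-- Fewer cache partitions should mean fewer Spark tasks. -->
                <property name="affinity">
                    <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
                        <property name="partitions" value="128"/>
                    </bean>
                </property>
            </bean>
        </list>
    </property>

Is that the right knob, or is there a recommended partition count for IgniteRDD?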
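
I also wondered whether I could avoid the Spark job entirely and ask Ignite for the cache size instead of calling javaRDD.isEmpty() — again untested, and it assumes the client node started by IgniteContext is the default local Ignite instance:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CachePeekMode;

    // Untested idea: check emptiness via the cache API instead of a Spark job.
    Ignite ignite = Ignition.ignite();
    // Note: cache() returns null if the cache has not been created yet.
    IgniteCache<Object, String> cache = ignite.cache("hello-world-cache");
    boolean empty = cache == null || cache.size(CachePeekMode.PRIMARY) == 0;

Would that be equivalent, or does isEmpty() give me something I'd be losing?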