Thanks, Val. Yes, the executors were missing the driver. Adding it eliminated 
the missing-driver warning, but now I see java.lang.NoSuchMethodError at 
org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues. It is logged only as a 
WARN, though; the only ERROR appears when executors are lost for exceeding 
thresholds (see below). Any inputs here will be of great help. 


scala> sharedRDD.saveValues(df2.rdd)
[15:28:36] Topology snapshot [ver=5, servers=2, clients=1, CPUs=16, heap=3.0GB]
16/11/23 15:28:37 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 5, finact-poc-001): java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
        at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1$$anonfun$apply$1.apply(IgniteRDD.scala:151)
        at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1$$anonfun$apply$1.apply(IgniteRDD.scala:150)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:150)
        at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:138)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

[15:28:37] Topology snapshot [ver=6, servers=1, clients=1, CPUs=16, heap=2.0GB]
16/11/23 15:28:37 ERROR TaskSchedulerImpl: Lost executor 7 on finact-poc-001: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/11/23 15:28:46 ERROR TaskSchedulerImpl: Lost executor 11 on finact-poc-004.cisco.com: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/11/23 15:28:54 WARN TaskSetManager: Lost task 0.2 in stage 2.0 (TID 7, finact-poc-002.cisco.com): java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
        at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1$$anonfun$apply$1.apply(IgniteRDD.scala:151)
        at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1$$anonfun$apply$1.apply(IgniteRDD.scala:150)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:150)
        at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:138)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
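
If it helps the diagnosis: scala.Predef$.$conforms() exists only in Scala 2.11,
while the Iterator.scala:727 frame in the trace matches the Scala 2.10 standard
library that Spark 1.6 ships by default. The usual cause is an ignite-spark jar
built against one Scala binary version running on executors with the other. A
minimal build.sbt sketch (artifact names and versions are assumptions, not
taken from this thread) that keeps everything on Scala 2.10:

// build.sbt sketch -- versions are illustrative, not from this thread.
// Ignite publishes ignite-spark_2.10 for Scala 2.10 clusters; the plain
// ignite-spark artifact is the Scala 2.11 build. Mixing the two produces
// exactly this NoSuchMethodError on Predef$.$conforms().
scalaVersion := "2.10.6"  // must match the Scala version of the Spark cluster

libraryDependencies ++= Seq(
  "org.apache.spark"  % "spark-core_2.10"   % "1.6.2" % "provided",
  "org.apache.ignite" % "ignite-spark_2.10" % "1.7.0"
)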



Thanks,
Vidhya
________________________________________
From: vkulichenko <valentin.kuliche...@gmail.com>
Sent: Wednesday, November 23, 2016 12:18 PM
To: user@ignite.apache.org
Subject: Re: SparkRDD with Ignite

Where is this exception failing? Is it on an executor node?

Does it work if you execute something like foreachPartition on the original
RDD? For now it looks like the executors are simply missing the Oracle driver
and therefore can't load rows from the database. Ignite is not even touched
yet at this point.

-Val
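
A sketch of the isolation test Val suggests above: call foreachPartition on the
source RDD with no Ignite involved, to confirm the executors can read from
Oracle at all (df2 is the DataFrame from earlier in the thread; the row
counting is purely illustrative):

// Runs on the executors without touching Ignite; if this fails, the
// problem is in reading from Oracle or the executor classpath, not in
// IgniteRDD.saveValues.
df2.rdd.foreachPartition { partition =>
  var n = 0
  partition.foreach(_ => n += 1)  // force every row to be materialized
  println(s"read $n rows in this partition")
}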


