Hi All,

I have one master and 2 workers running on my local machine. I wrote the
following Java program, which reads the README.md file and sums the lengths
of its lines (I am using a Maven project for this):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class App
{
    public static void main( String[] args )
    {
        // Connect to the standalone master running on this machine
        SparkConf conf = new SparkConf()
            .setAppName("Spark Java App")
            .setMaster("spark://10.2.12.59:7077");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Map each line to its length, then sum the lengths
        JavaRDD<String> lines = sc.textFile("README.md");
        JavaRDD<Integer> lineLengths = lines.map(s -> s.length());
        int totalLength = lineLengths.reduce((a, b) -> a + b);
        System.out.println("Total length: " + totalLength);
    }
}


When I run the above program, I get the following error:


16/03/10 11:39:00 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 10.2.12.59): java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.fun$1 of type org.apache.spark.api.java.function.Function in instance of org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1
    at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
    at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2006)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)



It works fine if I use .setMaster("local[2]") instead of
.setMaster("spark://10.2.12.59:7077").


It also works fine if I build a jar out of this project and submit it to the
cluster with spark-submit.
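
For reference, the submission that works looks roughly like this (the jar
name is just whatever my Maven build produces, so treat it as a placeholder):

spark-submit \
  --class App \
  --master spark://10.2.12.59:7077 \
  target/my-spark-app-1.0.jar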


The problem with .setMaster("local[2]") is that it does not run against the
existing cluster.


Any idea how I can run this program on the existing cluster from Eclipse?
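
One thing I suspect is that the executors cannot find my compiled
classes/lambdas when the job is launched from Eclipse. Would something like
the following be the right direction, i.e. building the jar first with
"mvn package" and pointing the SparkConf at it? (The jar path below is just a
guess based on my Maven layout.)

SparkConf conf = new SparkConf()
    .setAppName("Spark Java App")
    .setMaster("spark://10.2.12.59:7077")
    // Ship the application jar (built beforehand with "mvn package")
    // to the executors so they can deserialize the lambdas.
    .setJars(new String[] { "target/my-spark-app-1.0.jar" });
JavaSparkContext sc = new JavaSparkContext(conf);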


Thanks,

Rajeshwar Gaini.
