Hi,
I recently upgraded my Spark 1.0.0 cluster to 1.0.1, but now I get
"ERROR remote.EndpointWriter: AssociationError" when I run a simple
Spark SQL job in spark-shell.

Here is the code:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._

// Schema for the CSV rows
case class Person(name: String, age: Int, gender: String, birth: String)

val peopleRDD = sc.textFile("/sample/sample.csv")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).toInt, p(2), p(3)))

peopleRDD.collect().foreach(println)
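
The Spark SQL query I eventually want to run is something like the
following (the table name "people" and the query itself are only an
example, not part of the failing snippet):

// Illustrative only: register the RDD and query it (Spark 1.0 API)
peopleRDD.registerAsTable("people")
val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
teenagers.collect().foreach(println)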

Here is the error message on the worker node:
14/07/16 10:58:04 ERROR remote.EndpointWriter: AssociationError [akka.tcp://sparkwor...@my.test6.com:38030] -> [akka.tcp://sparkexecu...@my.test6.com:35534]: Error [Association failed with [akka.tcp://sparkexecu...@my.tes$
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkexecu...@my.test6.com:35534]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: my.test6.com/

And here is the error message in the shell:
14/07/16 11:33:15 WARN scheduler.TaskSetManager: Loss was due to java.lang.NoClassDefFoundError
java.lang.NoClassDefFoundError: Could not initialize class $line10.$read$
        at $line14.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
        at $line14.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
        at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1083)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1083)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        at org.apache.spark.scheduler.Task.run(Task.scala:51)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

I tested the same code on Spark 1.0.0 (same machine) and it ran fine.
It looks like the worker cannot reach the executor's Akka endpoint.
Do you have any ideas?
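
For what it's worth, a quick way to check whether the executor port is
even reachable from the worker is a plain socket test in spark-shell.
The host and port below are copied from the log above; executor ports
are ephemeral, so this is only illustrative:

// Illustrative reachability check; host/port copied from the log above
import java.net.{InetSocketAddress, Socket}
val socket = new Socket()
try {
  socket.connect(new InetSocketAddress("my.test6.com", 35534), 2000) // 2-second timeout
  println("port reachable")
} catch {
  case e: java.io.IOException => println("connection failed: " + e.getMessage)
} finally {
  socket.close()
}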

Best regards
Kevin


