I've hit a roadblock trying to understand why Spark doesn't work for a
colleague of mine on his Windows 7 laptop.
I have pretty much the same setup and everything works fine.


I googled the error message and didn't find anything that resolved it.

Here is the exception message (after running a vanilla Spark 1.3.1
installation, prebuilt for Hadoop 2.4).

The JDK is 1.7, 64-bit.

akka.actor.ActorInitializationException: exception during creation
        at akka.actor.ActorInitializationException$.apply(Actor.scala:164)
        at akka.actor.ActorCell.create(ActorCell.scala:596)
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka://sparkDriver/deadLetters), Path(/)]
        at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
        at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
        at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
        at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
        at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
        at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
        at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
        at scala.concurrent.impl.Promise$DefaultPromise.scala$concurrent$impl$Promise$DefaultPromise$$dispatchOrAddCallback(Promise.scala:280)
        at scala.concurrent.impl.Promise$DefaultPromise.onComplete(Promise.scala:270)
        at akka.actor.ActorSelection.resolveOne(ActorSelection.scala:63)
        at akka.actor.ActorSelection.resolveOne(ActorSelection.scala:80)
        at org.apache.spark.util.AkkaUtils$.makeDriverRef(AkkaUtils.scala:221)
        at org.apache.spark.executor.Executor.startDriverHeartbeater(Executor.scala:393)
        at org.apache.spark.executor.Executor.<init>(Executor.scala:119)
        at org.apache.spark.scheduler.local.LocalActor.<init>(LocalBackend.scala:58)
        at org.apache.spark.scheduler.local.LocalBackend$$anonfun$start$1.apply(LocalBackend.scala:107)
        at org.apache.spark.scheduler.local.LocalBackend$$anonfun$start$1.apply(LocalBackend.scala:107)
        at akka.actor.TypedCreatorFunctionConsumer.produce(Props.scala:343)
        at akka.actor.Props.newActor(Props.scala:252)
        at akka.actor.ActorCell.newActor(ActorCell.scala:552)
        at akka.actor.ActorCell.create(ActorCell.scala:578)
        ... 9 more



I have seen this error mentioned before, but for Linux, not Windows:
http://apache-spark-user-list.1001560.n3.nabble.com/Actor-not-found-td22265.html


This one also doesn't seem to offer any resolution:
https://groups.google.com/a/lists.datastax.com/forum/#!topic/spark-connector-user/UqCYeUpgGCU



My assumption is that this is related to hostname resolution or some kind
of IP conflict, but I'm not sure.


One difference that I did notice between my system and my colleague's:


when I run ping localhost, I get 127.0.0.1

when he runs it, he gets ::1


I saw an issue about Spark having problems with IPv6, and saw that it was
only resolved in 1.4. Is that related?

https://issues.apache.org/jira/browse/SPARK-6440
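
For reference, here is a small diagnostic snippet I put together myself (it
is not from Spark) to see what the JVM itself resolves "localhost" to,
independent of ping. Running it with and without
-Djava.net.preferIPv4Stack=true should show whether forcing IPv4 changes
the resolution order:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LocalhostCheck {
    public static void main(String[] args) throws UnknownHostException {
        // Print every address the JVM resolves "localhost" to, in
        // preference order. An IPv6 answer first (0:0:0:0:0:0:0:1)
        // would match what my colleague sees from ping (::1).
        for (InetAddress addr : InetAddress.getAllByName("localhost")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```

Compile and run it twice, e.g. `java LocalhostCheck` and then
`java -Djava.net.preferIPv4Stack=true LocalhostCheck`, and compare the
first address printed in each case.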
