Hello,

       I am trying to run a job on two workers. I have a cluster of three
computers, where one is the master and the other two are workers. I am able
to successfully register the two separate physical machines as workers in
the cluster. When I run a job with a single worker connected, it runs
successfully and calculates the value of Pi (I am running the Scala example
SparkPi). But when I connect two workers and run the same program, it fails
with 'Master removed our application: FAILED'.

I have the same user account on all computers, and Spark is installed in
the same location on each.
Spark version: 1.0.0 (prebuilt for Hadoop 1) on all computers
Operating system: Ubuntu 12.04 LTS
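For completeness, this is roughly how the cluster is brought up (the worker command is the one shown in the worker logs below; the master start script path is assumed from the standard Spark 1.0.0 layout, and conf/spark-env.sh is otherwise left at defaults):

```shell
# On the master (10.0.0.2) -- standalone-mode start script from the Spark 1.0.0 distribution:
./sbin/start-master.sh

# On each worker machine, pointing at the master's Spark URL:
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://10.0.0.2:7077
```

conf/spark-defaults.conf on the master contains only the single line `spark.master spark://10.0.0.2:7077`, as echoed in the submit log below.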

*---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------*
*Following is the master log:*

mpiuser@ashwini-pc:~/spark-1.0.0-bin-hadoop1$ sudo ./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://10.0.0.2:7077 \
  --deploy-mode client \
  file:///home/mpiuser/spark-1.0.0-bin-hadoop1/lib/spark-examples-1.0.0-hadoop1.0.4.jar \
  --verbose 10000 
Spark assembly has been built with Hive, including Datanucleus jars on
classpath 
Using properties file:
/home/mpiuser/spark-1.0.0-bin-hadoop1/conf/spark-defaults.conf 
Adding default property: spark.master=spark://10.0.0.2:7077 
Using properties file:
/home/mpiuser/spark-1.0.0-bin-hadoop1/conf/spark-defaults.conf 
Adding default property: spark.master=spark://10.0.0.2:7077 
Parsed arguments: 
  master                  spark://10.0.0.2:7077 
  deployMode              client 
  executorMemory          null 
  executorCores           null 
  totalExecutorCores      null 
  propertiesFile         
/home/mpiuser/spark-1.0.0-bin-hadoop1/conf/spark-defaults.conf 
  driverMemory            null 
  driverCores             null 
  driverExtraClassPath    null 
  driverExtraLibraryPath  null 
  driverExtraJavaOptions  null 
  supervise               false 
  queue                   null 
  numExecutors            null 
  files                   null 
  pyFiles                 null 
  archives                null 
  mainClass               org.apache.spark.examples.SparkPi 
  primaryResource        
file:///home/mpiuser/spark-1.0.0-bin-hadoop1/lib/spark-examples-1.0.0-hadoop1.0.4.jar
 
  name                    org.apache.spark.examples.SparkPi 
  childArgs               [10000] 
  jars                    null 
  verbose                 true 

Default properties from
/home/mpiuser/spark-1.0.0-bin-hadoop1/conf/spark-defaults.conf: 
  spark.master -> spark://10.0.0.2:7077 

    
Using properties file:
/home/mpiuser/spark-1.0.0-bin-hadoop1/conf/spark-defaults.conf 
Adding default property: spark.master=spark://10.0.0.2:7077 
Main class: 
org.apache.spark.examples.SparkPi 
Arguments: 
10000 
System properties: 
SPARK_SUBMIT -> true 
spark.app.name -> org.apache.spark.examples.SparkPi 
spark.jars ->
file:///home/mpiuser/spark-1.0.0-bin-hadoop1/lib/spark-examples-1.0.0-hadoop1.0.4.jar
 
spark.master -> spark://10.0.0.2:7077 
Classpath elements: 
file:///home/mpiuser/spark-1.0.0-bin-hadoop1/lib/spark-examples-1.0.0-hadoop1.0.4.jar
 
14/08/19 10:59:41 INFO SecurityManager: Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties 
14/08/19 10:59:41 INFO SecurityManager: Changing view acls to: root 
14/08/19 10:59:41 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(root) 
14/08/19 10:59:42 INFO Slf4jLogger: Slf4jLogger started 
14/08/19 10:59:42 INFO Remoting: Starting remoting 
14/08/19 10:59:42 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://spark@ashwini-pc:34438] 
14/08/19 10:59:42 INFO Remoting: Remoting now listens on addresses:
[akka.tcp://spark@ashwini-pc:34438] 
14/08/19 10:59:42 INFO SparkEnv: Registering MapOutputTracker 
14/08/19 10:59:42 INFO SparkEnv: Registering BlockManagerMaster 
14/08/19 10:59:42 INFO DiskBlockManager: Created local directory at
/tmp/spark-local-20140819105942-1287 
14/08/19 10:59:42 INFO MemoryStore: MemoryStore started with capacity 294.6
MB. 
14/08/19 10:59:42 INFO ConnectionManager: Bound socket to port 32777 with id
= ConnectionManagerId(ashwini-pc,32777) 
14/08/19 10:59:42 INFO BlockManagerMaster: Trying to register BlockManager 
14/08/19 10:59:42 INFO BlockManagerInfo: Registering block manager
ashwini-pc:32777 with 294.6 MB RAM 
14/08/19 10:59:42 INFO BlockManagerMaster: Registered BlockManager 
14/08/19 10:59:42 INFO HttpServer: Starting HTTP Server 
14/08/19 10:59:42 INFO HttpBroadcast: Broadcast server started at
http://10.0.0.2:49024 
14/08/19 10:59:42 INFO HttpFileServer: HTTP File server directory is
/tmp/spark-041bb9af-0703-4505-8c9e-cf355f857692 
14/08/19 10:59:42 INFO HttpServer: Starting HTTP Server 
14/08/19 10:59:48 INFO SparkUI: Started SparkUI at http://ashwini-pc:4040 
14/08/19 10:59:48 INFO SparkContext: Added JAR
file:///home/mpiuser/spark-1.0.0-bin-hadoop1/lib/spark-examples-1.0.0-hadoop1.0.4.jar
at http://10.0.0.2:58185/jars/spark-examples-1.0.0-hadoop1.0.4.jar with
timestamp 1408460388536 
14/08/19 10:59:48 INFO AppClient$ClientActor: Connecting to master
spark://10.0.0.2:7077... 
14/08/19 10:59:48 INFO SparkContext: Starting job: reduce at
SparkPi.scala:35 
14/08/19 10:59:48 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:35)
with 10000 output partitions (allowLocal=false) 
14/08/19 10:59:48 INFO DAGScheduler: Final stage: Stage 0(reduce at
SparkPi.scala:35) 
14/08/19 10:59:48 INFO DAGScheduler: Parents of final stage: List() 
14/08/19 10:59:48 INFO SparkDeploySchedulerBackend: Connected to Spark
cluster with app ID app-20140819105948-0003 
14/08/19 10:59:48 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/0 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 10:59:48 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/0 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 10:59:48 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/1 on worker-20140819105655-rasikap-46200
(rasikap:46200) with 4 cores 
14/08/19 10:59:48 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/1 on hostPort rasikap:46200 with 4 cores, 512.0 MB
RAM 
14/08/19 10:59:49 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/0 is now RUNNING 
14/08/19 10:59:49 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/1 is now RUNNING 
14/08/19 10:59:49 INFO DAGScheduler: Missing parents: List() 
14/08/19 10:59:49 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map
at SparkPi.scala:31), which has no missing parents 
14/08/19 10:59:49 INFO DAGScheduler: Submitting 10000 missing tasks from
Stage 0 (MappedRDD[1] at map at SparkPi.scala:31) 
14/08/19 10:59:49 INFO TaskSchedulerImpl: Adding task set 0.0 with 10000
tasks 
14/08/19 10:59:51 INFO SparkDeploySchedulerBackend: Registered executor:
Actor[akka.tcp://sparkExecutor@rasikap:56534/user/Executor#-1887220734] with
ID 1 
14/08/19 10:59:51 INFO TaskSetManager: Starting task 0.0:0 as TID 0 on
executor 1: rasikap (PROCESS_LOCAL) 
14/08/19 10:59:51 INFO TaskSetManager: Serialized task 0.0:0 as 1407 bytes
in 4 ms 
14/08/19 10:59:51 INFO TaskSetManager: Starting task 0.0:1 as TID 1 on
executor 1: rasikap (PROCESS_LOCAL) 
14/08/19 10:59:51 INFO TaskSetManager: Serialized task 0.0:1 as 1407 bytes
in 1 ms 
14/08/19 10:59:51 INFO TaskSetManager: Starting task 0.0:2 as TID 2 on
executor 1: rasikap (PROCESS_LOCAL) 
14/08/19 10:59:51 INFO TaskSetManager: Serialized task 0.0:2 as 1407 bytes
in 1 ms 
14/08/19 10:59:51 INFO TaskSetManager: Starting task 0.0:3 as TID 3 on
executor 1: rasikap (PROCESS_LOCAL) 
14/08/19 10:59:51 INFO TaskSetManager: Serialized task 0.0:3 as 1407 bytes
in 1 ms 
14/08/19 10:59:51 INFO BlockManagerInfo: Registering block manager
rasikap:49268 with 294.6 MB RAM 
14/08/19 10:59:51 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/0 is now FAILED (Command exited with code 1) 
14/08/19 10:59:51 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/0 removed: Command exited with code 1 
14/08/19 10:59:51 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/2 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 10:59:51 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/2 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 10:59:51 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/2 is now RUNNING 
14/08/19 10:59:53 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/2 is now FAILED (Command exited with code 1) 
14/08/19 10:59:53 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/2 removed: Command exited with code 1 
14/08/19 10:59:54 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/3 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 10:59:54 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/3 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 10:59:54 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/3 is now RUNNING 
14/08/19 10:59:56 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/3 is now FAILED (Command exited with code 1) 
14/08/19 10:59:56 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/3 removed: Command exited with code 1 
14/08/19 10:59:56 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/4 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 10:59:56 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/4 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 10:59:57 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/4 is now RUNNING 
14/08/19 10:59:59 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/4 is now FAILED (Command exited with code 1) 
14/08/19 10:59:59 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/4 removed: Command exited with code 1 
14/08/19 10:59:59 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/5 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 10:59:59 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/5 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 10:59:59 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/5 is now RUNNING 
14/08/19 11:00:01 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/5 is now FAILED (Command exited with code 1) 
14/08/19 11:00:01 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/5 removed: Command exited with code 1 
14/08/19 11:00:01 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/6 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 11:00:01 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/6 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 11:00:01 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/6 is now RUNNING 
14/08/19 11:00:03 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/6 is now FAILED (Command exited with code 1) 
14/08/19 11:00:03 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/6 removed: Command exited with code 1 
14/08/19 11:00:03 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/7 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 11:00:03 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/7 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 11:00:04 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/7 is now RUNNING 
14/08/19 11:00:06 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/7 is now FAILED (Command exited with code 1) 
14/08/19 11:00:06 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/7 removed: Command exited with code 1 
14/08/19 11:00:06 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/8 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 11:00:06 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/8 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 11:00:06 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/8 is now RUNNING 
14/08/19 11:00:08 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/8 is now FAILED (Command exited with code 1) 
14/08/19 11:00:08 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/8 removed: Command exited with code 1 
14/08/19 11:00:08 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/9 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 11:00:08 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/9 on hostPort gauri-Inspiron-5520.local:40727 with 4
cores, 512.0 MB RAM 
14/08/19 11:00:08 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/9 is now RUNNING 
14/08/19 11:00:10 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/9 is now FAILED (Command exited with code 1) 
14/08/19 11:00:10 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/9 removed: Command exited with code 1 
14/08/19 11:00:10 INFO AppClient$ClientActor: Executor added:
app-20140819105948-0003/10 on
worker-20140819105716-gauri-Inspiron-5520.local-40727
(gauri-Inspiron-5520.local:40727) with 4 cores 
14/08/19 11:00:10 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140819105948-0003/10 on hostPort gauri-Inspiron-5520.local:40727 with
4 cores, 512.0 MB RAM 
14/08/19 11:00:11 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/10 is now RUNNING 
14/08/19 11:00:13 INFO AppClient$ClientActor: Executor updated:
app-20140819105948-0003/10 is now FAILED (Command exited with code 1) 
14/08/19 11:00:13 INFO SparkDeploySchedulerBackend: Executor
app-20140819105948-0003/10 removed: Command exited with code 1 
14/08/19 11:00:13 ERROR SparkDeploySchedulerBackend: Application has been
killed. Reason: Master removed our application: FAILED 
14/08/19 11:00:13 INFO DAGScheduler: Failed to run reduce at
SparkPi.scala:35 
14/08/19 11:00:13 INFO TaskSchedulerImpl: Cancelling stage 0 
Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Master removed our application: FAILED 
        at
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
 
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
 
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
 
        at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
        at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015) 
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
 
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
 
        at scala.Option.foreach(Option.scala:236) 
        at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
 
        at
org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
 
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498) 
        at akka.actor.ActorCell.invoke(ActorCell.scala:456) 
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237) 
        at akka.dispatch.Mailbox.run(Mailbox.scala:219) 
        at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
 
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
        at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 
        at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
        at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 
14/08/19 11:00:13 INFO TaskSchedulerImpl: Stage 0 was cancelled 
mpiuser@ashwini-pc:~/spark-1.0.0-bin-hadoop1$ 
*---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------*

*Log on first worker:*

root@rasikap:/home/mpiuser/spark-1.0.0-bin-hadoop1# ./bin/spark-class
org.apache.spark.deploy.worker.Worker spark://10.0.0.2:7077
Spark assembly has been built with Hive, including Datanucleus jars on
classpath
14/08/19 10:56:54 INFO SecurityManager: Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
14/08/19 10:56:54 INFO SecurityManager: Changing view acls to: root
14/08/19 10:56:54 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(root)
14/08/19 10:56:54 INFO Slf4jLogger: Slf4jLogger started
14/08/19 10:56:54 INFO Remoting: Starting remoting
14/08/19 10:56:55 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkWorker@rasikap:46200]
14/08/19 10:56:55 INFO Worker: Starting Spark worker rasikap:46200 with 4
cores, 2.8 GB RAM
14/08/19 10:56:55 INFO Worker: Spark home:
/home/mpiuser/spark-1.0.0-bin-hadoop1
14/08/19 10:57:00 WARN AbstractLifeCycle: FAILED
SelectChannelConnector@0.0.0.0:8081: java.net.BindException: Address already
in use
java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
        at
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
        at
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.eclipse.jetty.server.Server.doStart(Server.java:293)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at
org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
        at 
org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
        at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
        at org.apache.spark.deploy.worker.Worker.preStart(Worker.scala:134)
        at akka.actor.ActorCell.create(ActorCell.scala:562)
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
        at akka.dispatch.Mailbox.run(Mailbox.scala:218)
        at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/08/19 10:57:00 WARN AbstractLifeCycle: FAILED
org.eclipse.jetty.server.Server@52c029: java.net.BindException: Address
already in use
java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
        at
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
        at
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.eclipse.jetty.server.Server.doStart(Server.java:293)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at
org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
        at 
org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
        at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
        at org.apache.spark.deploy.worker.Worker.preStart(Worker.scala:134)
        at akka.actor.ActorCell.create(ActorCell.scala:562)
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
        at akka.dispatch.Mailbox.run(Mailbox.scala:218)
        at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/08/19 10:57:00 INFO JettyUtils: Failed to create UI at port, 8081. Trying
again.
14/08/19 10:57:00 INFO JettyUtils: Error was:
Failure(java.net.BindException: Address already in use)
14/08/19 10:57:05 INFO WorkerWebUI: Started WorkerWebUI at
http://rasikap:8082
14/08/19 10:57:05 INFO Worker: Connecting to master spark://10.0.0.2:7077...
14/08/19 10:57:05 INFO Worker: Successfully registered with master
spark://10.0.0.2:7077

14/08/19 10:59:43 INFO Worker: Asked to launch executor
app-20140819105948-0003/1 for Spark Pi
Spark assembly has been built with Hive, including Datanucleus jars on
classpath
14/08/19 10:59:44 INFO ExecutorRunner: Launch command: "java" "-cp"
"::/home/mpiuser/spark-1.0.0-bin-hadoop1/conf:/home/mpiuser/spark-1.0.0-bin-hadoop1/lib/spark-assembly-1.0.0-hadoop1.0.4.jar:/home/mpiuser/spark-1.0.0-bin-hadoop1/lib/datanucleus-rdbms-3.2.1.jar:/home/mpiuser/spark-1.0.0-bin-hadoop1/lib/datanucleus-api-jdo-3.2.1.jar:/home/mpiuser/spark-1.0.0-bin-hadoop1/lib/datanucleus-core-3.2.2.jar"
"-XX:MaxPermSize=128m" "-Xms512M" "-Xmx512M"
"org.apache.spark.executor.CoarseGrainedExecutorBackend"
"akka.tcp://spark@ashwini-pc:34438/user/CoarseGrainedScheduler" "1"
"rasikap" "4" "akka.tcp://sparkWorker@rasikap:46200/user/Worker"
"app-20140819105948-0003"
14/08/19 11:00:08 INFO Worker: Asked to kill executor
app-20140819105948-0003/1
14/08/19 11:00:08 INFO ExecutorRunner: Runner thread for executor
app-20140819105948-0003/1 interrupted
14/08/19 11:00:08 INFO ExecutorRunner: Killing process!
14/08/19 11:00:08 INFO Worker: Executor app-20140819105948-0003/1 finished
with state KILLED
14/08/19 11:00:08 INFO LocalActorRef: Message
[akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from
Actor[akka://sparkWorker/deadLetters] to
Actor[akka://sparkWorker/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkWorker%4010.0.0.1%3A52580-2#-100145973]
was not delivered. [1] dead letters encountered. This logging can be turned
off or adjusted with configuration settings 'akka.log-dead-letters' and
'akka.log-dead-letters-during-shutdown'.
14/08/19 11:00:08 ERROR EndpointWriter: AssociationError
[akka.tcp://sparkWorker@rasikap:46200] ->
[akka.tcp://sparkExecutor@rasikap:56534]: Error [Association failed with
[akka.tcp://sparkExecutor@rasikap:56534]] [
akka.remote.EndpointAssociationException: Association failed with
[akka.tcp://sparkExecutor@rasikap:56534]
Caused by:
akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2:
Connection refused: rasikap/10.0.0.1:56534
]
14/08/19 11:00:08 ERROR EndpointWriter: AssociationError
[akka.tcp://sparkWorker@rasikap:46200] ->
[akka.tcp://sparkExecutor@rasikap:56534]: Error [Association failed with
[akka.tcp://sparkExecutor@rasikap:56534]] [
akka.remote.EndpointAssociationException: Association failed with
[akka.tcp://sparkExecutor@rasikap:56534]
Caused by:
akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2:
Connection refused: rasikap/10.0.0.1:56534
]
14/08/19 11:00:08 ERROR EndpointWriter: AssociationError
[akka.tcp://sparkWorker@rasikap:46200] ->
[akka.tcp://sparkExecutor@rasikap:56534]: Error [Association failed with
[akka.tcp://sparkExecutor@rasikap:56534]] [
akka.remote.EndpointAssociationException: Association failed with
[akka.tcp://sparkExecutor@rasikap:56534]
Caused by:
akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2:
Connection refused: rasikap/10.0.0.1:56534

*---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------*

*Log on second worker:*

mpiuser@gauri-Inspiron-5520:~/spark-1.0.0-bin-hadoop1$ ./bin/spark-class
org.apache.spark.deploy.worker.Worker spark://10.0.0.2:7077 
Spark assembly has been built with Hive, including Datanucleus jars on
classpath 
14/08/19 10:57:15 WARN util.Utils: Your hostname, gauri-Inspiron-5520
resolves to a loopback address: 127.0.1.1; using 10.0.0.4 instead (on
interface eth1) 
14/08/19 10:57:15 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to
another address 
14/08/19 10:57:15 INFO spark.SecurityManager: Changing view acls to: mpiuser 
14/08/19 10:57:15 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(mpiuser) 
14/08/19 10:57:15 INFO slf4j.Slf4jLogger: Slf4jLogger started 
14/08/19 10:57:15 INFO Remoting: Starting remoting 
14/08/19 10:57:15 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkWorker@gauri-Inspiron-5520.local:40727] 
14/08/19 10:57:16 INFO worker.Worker: Starting Spark worker
gauri-Inspiron-5520.local:40727 with 4 cores, 2.8 GB RAM 
14/08/19 10:57:16 INFO worker.Worker: Spark home:
/home/mpiuser/spark-1.0.0-bin-hadoop1 
14/08/19 10:57:21 INFO server.Server: jetty-8.y.z-SNAPSHOT 
14/08/19 10:57:21 WARN component.AbstractLifeCycle: FAILED
SelectChannelConnector@0.0.0.0:8081: java.net.BindException: Address already
in use 
java.net.BindException: Address already in use 
        at sun.nio.ch.Net.bind0(Native Method) 
        at sun.nio.ch.Net.bind(Net.java:444) 
        at sun.nio.ch.Net.bind(Net.java:436) 
        at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) 
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) 
        at
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
 
        at
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316) 
        at
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
 
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
 
        at org.eclipse.jetty.server.Server.doStart(Server.java:293) 
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
 
        at
org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192) 
        at 
org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192) 
        at 
org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192) 
        at scala.util.Try$.apply(Try.scala:161) 
        at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191) 
        at 
org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205) 
        at org.apache.spark.ui.WebUI.bind(WebUI.scala:99) 
        at org.apache.spark.deploy.worker.Worker.preStart(Worker.scala:134) 
        at akka.actor.ActorCell.create(ActorCell.scala:562) 
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425) 
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447) 
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262) 
        at akka.dispatch.Mailbox.run(Mailbox.scala:218) 
        at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
 
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
        at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 
        at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
        at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 
14/08/19 10:57:21 WARN component.AbstractLifeCycle: FAILED org.eclipse.jetty.server.Server@268da6: java.net.BindException: Address already in use
java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
        at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
        at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.eclipse.jetty.server.Server.doStart(Server.java:293)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
        at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
        at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
        at org.apache.spark.deploy.worker.Worker.preStart(Worker.scala:134)
        at akka.actor.ActorCell.create(ActorCell.scala:562)
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
        at akka.dispatch.Mailbox.run(Mailbox.scala:218)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/08/19 10:57:21 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/metrics/json,null}
14/08/19 10:57:21 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/log,null}
14/08/19 10:57:21 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/static,null}
14/08/19 10:57:21 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/json,null}
14/08/19 10:57:21 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/,null}
14/08/19 10:57:21 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/logPage/json,null}
14/08/19 10:57:21 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/logPage,null}
14/08/19 10:57:21 INFO ui.JettyUtils: Failed to create UI at port, 8081. Trying again.
14/08/19 10:57:21 INFO ui.JettyUtils: Error was: Failure(java.net.BindException: Address already in use)
14/08/19 10:57:26 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/08/19 10:57:26 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:8082
14/08/19 10:57:26 INFO ui.WorkerWebUI: Started WorkerWebUI at http://gauri-Inspiron-5520.local:8082
14/08/19 10:57:26 INFO worker.Worker: Connecting to master spark://10.0.0.2:7077...
14/08/19 10:57:26 INFO worker.Worker: Successfully registered with master spark://10.0.0.2:7077

14/08/19 10:59:47 INFO worker.Worker: Asked to launch executor app-20140819105948-0003/0 for Spark Pi
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/08/19 10:59:48 INFO worker.ExecutorRunner: Launch command: "java" "-cp" "::/home/mpiuser/spark-1.0.0-bin-hadoop1/conf:/home/mpiuser/spark-1.0.0-bin-hadoop1/lib/spark-assembly-1.0.0-hadoop1.0.4.jar:/home/mpiuser/spark-1.0.0-bin-hadoop1/lib/datanucleus-rdbms-3.2.1.jar:/home/mpiuser/spark-1.0.0-bin-hadoop1/lib/datanucleus-api-jdo-3.2.1.jar:/home/mpiuser/spark-1.0.0-bin-hadoop1/lib/datanucleus-core-3.2.2.jar:/etc/hadoop" "-XX:MaxPermSize=128m" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://spark@ashwini-pc:34438/user/CoarseGrainedScheduler" "0" "gauri-Inspiron-5520.local" "4" "akka.tcp://sparkWorker@gauri-Inspiron-5520.local:40727/user/Worker" "app-20140819105948-0003"
14/08/19 10:59:50 INFO worker.Worker: Executor app-20140819105948-0003/0 finished with state FAILED message Command exited with code 1 exitStatus 1
[... trimmed for brevity: between 10:59:50 and 11:00:12 the worker repeats the same "Asked to launch executor" / "Launch command" sequence for executor attempts app-20140819105948-0003/2 through /10, with an identical launch command (only the executor ID changes), and every attempt finishes with state FAILED message "Command exited with code 1 exitStatus 1" ...]
*---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------*
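A side note on the BindException near the top of the worker log: the worker UI port 8081 was already taken (possibly by a stale worker process), but Spark retried on 8082 and still registered with the master, so I suspect that warning is separate from the executor failures. To check whether a given port is actually free on a machine, I used this small sketch (plain Python, nothing Spark-specific; 8081 is just the worker UI default that the log shows as busy):

```python
import socket

def port_free(port, host="0.0.0.0"):
    """Return True if we can bind the port, i.e. no other process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# 8081 is the default Spark worker web UI port from the log above.
print(port_free(8081))
```

If this prints False, something else (e.g. a leftover worker) still holds the port and should be stopped before restarting the worker.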

I am new to Spark. Am I doing something wrong? The job does not seem to be able to run in parallel across the two workers. Could somebody please tell me how to solve this problem?
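In case it helps, here is the small script I have been using to dump each failed executor attempt's stderr from the worker's work directory (a sketch assuming the default layout of my install; adjust SPARK_HOME for your machine):

```shell
# Paths assume my install location; override SPARK_HOME if yours differs.
SPARK_HOME="${SPARK_HOME:-/home/mpiuser/spark-1.0.0-bin-hadoop1}"
APP_ID="app-20140819105948-0003"

# Each executor attempt (0, 2, 3, ...) gets its own directory under work/,
# and its stderr file records why the executor JVM exited with code 1.
for d in "$SPARK_HOME/work/$APP_ID"/*/; do
  [ -f "${d}stderr" ] || continue
  echo "=== ${d}stderr ==="
  cat "${d}stderr"
done
```

Run this on the worker machine (gauri-Inspiron-5520 in the log above), since that is where the failing executors were launched.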

Thank you.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Problem-in-running-a-job-on-more-than-one-workers-tp12361.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
