To rule out a problem specific to the Python API, I ran one of the Scala applications shipped with the examples. It fails with the same error:

./bin/run-example org.apache.spark.examples.SparkPi spark://[Master-URL]:7077
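As an aside: the WARN lines further down show the driver falling back from a loopback hostname to 192.168.122.1, which looks like a local libvirt bridge and may not be reachable from the EC2 workers. A sketch of a re-run with the bind address pinned (the address below is my placeholder, not from the log):

```shell
# Sketch only: pin the driver's bind address so it does not pick the
# virbr0 bridge.  192.0.2.10 is a placeholder (TEST-NET); substitute an
# address the workers can actually reach.
export SPARK_LOCAL_IP=192.0.2.10
./bin/run-example org.apache.spark.examples.SparkPi spark://[Master-URL]:7077
```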


SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/mnt/work/spark-0.9.1/examples/target/scala-2.10/spark-examples-assembly-0.9.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/mnt/work/spark-0.9.1/assembly/target/scala-2.10/spark-assembly-0.9.1-hadoop1.0.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/04/25 17:07:10 INFO Utils: Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
14/04/25 17:07:10 WARN Utils: Your hostname, rd-hu resolves to a loopback
address: 127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
14/04/25 17:07:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to
another address
14/04/25 17:07:11 INFO Slf4jLogger: Slf4jLogger started
14/04/25 17:07:11 INFO Remoting: Starting remoting
14/04/25 17:07:11 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://spark@192.168.122.1:26278]
14/04/25 17:07:11 INFO Remoting: Remoting now listens on addresses:
[akka.tcp://spark@192.168.122.1:26278]
14/04/25 17:07:11 INFO SparkEnv: Registering BlockManagerMaster
14/04/25 17:07:11 INFO DiskBlockManager: Created local directory at
/tmp/spark-local-20140425170711-d1da
14/04/25 17:07:11 INFO MemoryStore: MemoryStore started with capacity 16.0
GB.
14/04/25 17:07:11 INFO ConnectionManager: Bound socket to port 9788 with id
= ConnectionManagerId(192.168.122.1,9788)
14/04/25 17:07:11 INFO BlockManagerMaster: Trying to register BlockManager
14/04/25 17:07:11 INFO BlockManagerMasterActor$BlockManagerInfo: Registering
block manager 192.168.122.1:9788 with 16.0 GB RAM
14/04/25 17:07:11 INFO BlockManagerMaster: Registered BlockManager
14/04/25 17:07:11 INFO HttpServer: Starting HTTP Server
14/04/25 17:07:11 INFO HttpBroadcast: Broadcast server started at
http://192.168.122.1:58091
14/04/25 17:07:11 INFO SparkEnv: Registering MapOutputTracker
14/04/25 17:07:11 INFO HttpFileServer: HTTP File server directory is
/tmp/spark-599577a4-5732-4949-a2e8-f59eb679e843
14/04/25 17:07:11 INFO HttpServer: Starting HTTP Server
14/04/25 17:07:12 WARN AbstractLifeCycle: FAILED
SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already
in use
java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
        at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
        at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.eclipse.jetty.server.Server.doStart(Server.java:286)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:118)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:118)
        at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:118)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:118)
        at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:129)
        at org.apache.spark.ui.SparkUI.bind(SparkUI.scala:57)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:159)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:100)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
14/04/25 17:07:12 WARN AbstractLifeCycle: FAILED
org.eclipse.jetty.server.Server@74f4b96: java.net.BindException: Address
already in use
java.net.BindException: Address already in use
        [stack trace identical to the one above]
14/04/25 17:07:12 INFO JettyUtils: Failed to create UI at port, 4040. Trying
again.
14/04/25 17:07:12 INFO JettyUtils: Error was:
Failure(java.net.BindException: Address already in use)
14/04/25 17:07:12 INFO SparkUI: Started Spark Web UI at
http://192.168.122.1:4041
14/04/25 17:07:12 INFO SparkContext: Added JAR
/mnt/work/spark-0.9.1/examples/target/scala-2.10/spark-examples-assembly-0.9.1.jar
at http://192.168.122.1:49137/jars/spark-examples-assembly-0.9.1.jar with
timestamp 1398442032736
14/04/25 17:07:12 INFO AppClient$ClientActor: Connecting to master
spark://ec2-54-220-220-133.eu-west-1.compute.amazonaws.com:7077...
14/04/25 17:07:13 INFO SparkContext: Starting job: reduce at
SparkPi.scala:39
14/04/25 17:07:13 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:39)
with 2 output partitions (allowLocal=false)
14/04/25 17:07:13 INFO DAGScheduler: Final stage: Stage 0 (reduce at
SparkPi.scala:39)
14/04/25 17:07:13 INFO DAGScheduler: Parents of final stage: List()
14/04/25 17:07:13 INFO DAGScheduler: Missing parents: List()
14/04/25 17:07:13 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map
at SparkPi.scala:35), which has no missing parents
14/04/25 17:07:13 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0
(MappedRDD[1] at map at SparkPi.scala:35)
14/04/25 17:07:13 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/04/25 17:07:13 INFO SparkDeploySchedulerBackend: Connected to Spark
cluster with app ID app-20140425160713-0002
14/04/25 17:07:13 INFO AppClient$ClientActor: Executor added:
app-20140425160713-0002/0 on
worker-20140425133348-ip-10-84-7-178.eu-west-1.compute.internal-57839
(ip-10-84-7-178.eu-west-1.compute.internal:57839) with 1 cores
14/04/25 17:07:13 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140425160713-0002/0 on hostPort
ip-10-84-7-178.eu-west-1.compute.internal:57839 with 1 cores, 512.0 MB RAM
14/04/25 17:07:13 INFO AppClient$ClientActor: Executor updated:
app-20140425160713-0002/0 is now RUNNING
14/04/25 17:07:13 INFO AppClient$ClientActor: Executor updated:
app-20140425160713-0002/0 is now FAILED (class java.io.IOException: Cannot
run program "/mnt/work/spark/bin/compute-classpath.sh" (in directory "."):
error=2, No such file or directory)
14/04/25 17:07:13 INFO SparkDeploySchedulerBackend: Executor
app-20140425160713-0002/0 removed: class java.io.IOException: Cannot run
program "/mnt/work/spark/bin/compute-classpath.sh" (in directory "."):
error=2, No such file or directory
[the same Executor added -> RUNNING -> FAILED (java.io.IOException: Cannot run program "/mnt/work/spark/bin/compute-classpath.sh" (in directory "."): error=2, No such file or directory) -> removed cycle repeats identically for executors app-20140425160713-0002/1 through /9]
14/04/25 17:07:13 ERROR AppClient$ClientActor: Master removed our
application: FAILED; stopping client
14/04/25 17:07:13 WARN SparkDeploySchedulerBackend: Disconnected from Spark
cluster! Waiting for reconnection...
14/04/25 17:07:28 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
[this warning then repeats every 15 seconds]
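For anyone comparing notes: the repeated executor failure above looks like a Spark-home mismatch rather than anything PySpark-specific. The driver is running out of /mnt/work/spark-0.9.1, but the worker tries to exec /mnt/work/spark/bin/compute-classpath.sh, which does not exist there. A minimal sanity check (the two paths are taken from this log; the check itself is my sketch, not an official tool):

```shell
# Paths as they appear in the log above.
DRIVER_SPARK_HOME=/mnt/work/spark-0.9.1   # directory the driver/examples were launched from
WORKER_SPARK_HOME=/mnt/work/spark         # directory the worker tried to exec compute-classpath.sh under

# Executors are spawned using the driver's notion of SPARK_HOME, so the
# two must agree, or compute-classpath.sh must exist at the worker-side path.
if [ "$DRIVER_SPARK_HOME" != "$WORKER_SPARK_HOME" ]; then
  echo "SPARK_HOME mismatch: driver=$DRIVER_SPARK_HOME, worker=$WORKER_SPARK_HOME"
fi
```

As far as I can tell, the usual fix for this class of error is to install (or rsync) the same Spark build at the same path on the driver and every worker, or to launch the driver from the path the workers already use. The earlier BindException on port 4040 looks harmless by comparison, since the web UI simply fell back to 4041.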



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Deploying-a-python-code-on-a-spark-EC2-cluster-tp4758p4833.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.