Failed to launch Worker

2014-07-01 Thread MEETHU MATHEW


Hi,

I am using Spark Standalone mode with one master and 2 slaves. I am not able to
start the workers and connect them to the master using


./bin/spark-class org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077

The log says

Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to 
bind to: master/x.x.x.174:0
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
...
Caused by: java.net.BindException: Cannot assign requested address

When I try to start the worker from the slaves using the following java
command, it runs without any exception:

java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
  -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
  org.apache.spark.deploy.worker.Worker spark://:master:7077
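
The "Cannot assign requested address" bind error generally means the worker
tried to bind its listening socket to an address that is not local to the
machine it runs on (here the master's address, x.x.x.174). A minimal
diagnostic sketch, assuming the cause is hostname resolution or a copied
config on the slaves; the x.x.x.175 address is a placeholder, not taken from
this thread:

# On each slave, check which address the local hostname resolves to;
# if it prints the master's IP, /etc/hosts is likely misconfigured
hostname
hostname -i

# One possible fix: pin the worker's bind address in conf/spark-env.sh
# on each slave to that slave's own IP (placeholder address shown)
echo 'export SPARK_LOCAL_IP=x.x.x.175' >> /usr/local/spark-1.0.0/conf/spark-env.sh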

Thanks & Regards,
Meethu M

Re: Failed to launch Worker

2014-07-01 Thread Akhil Das
Is this command working??

java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
  -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
  org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077

Thanks
Best Regards


On Tue, Jul 1, 2014 at 6:08 PM, MEETHU MATHEW meethu2...@yahoo.co.in
wrote:


 Hi,

 I am using Spark Standalone mode with one master and 2 slaves. I am not
 able to start the workers and connect them to the master using

 ./bin/spark-class org.apache.spark.deploy.worker.Worker
 spark://x.x.x.174:7077

 The log says

 Exception in thread "main" org.jboss.netty.channel.ChannelException:
 Failed to bind to: master/x.x.x.174:0
  at
 org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
  ...
 Caused by: java.net.BindException: Cannot assign requested address

 When I try to start the worker from the slaves using the following java
 command, it runs without any exception:

 java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
   -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
   org.apache.spark.deploy.worker.Worker spark://:master:7077

 Thanks & Regards,
 Meethu M



Re: Failed to launch Worker

2014-07-01 Thread MEETHU MATHEW
Yes.
 
Thanks & Regards,
Meethu M


On Tuesday, 1 July 2014 6:14 PM, Akhil Das ak...@sigmoidanalytics.com wrote:
 


Is this command working??

java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
  -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
  org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077



Thanks
Best Regards


On Tue, Jul 1, 2014 at 6:08 PM, MEETHU MATHEW meethu2...@yahoo.co.in wrote:



 Hi,

I am using Spark Standalone mode with one master and 2 slaves. I am not able
to start the workers and connect them to the master using


./bin/spark-class org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077


The log says


Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to 
bind to: master/x.x.x.174:0
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
...
Caused by: java.net.BindException: Cannot assign requested address


When I try to start the worker from the slaves using the following java
command, it runs without any exception:


java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
  -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
  org.apache.spark.deploy.worker.Worker spark://:master:7077

Thanks & Regards,
Meethu M

Re: Failed to launch Worker

2014-07-01 Thread Aaron Davidson
Where are you running the spark-class version? Hopefully also on the
workers.

If you're trying to centrally start/stop all workers, you can add a
slaves file to the Spark conf/ directory, which is just a list of your worker
hosts, one per line. Then you can use ./sbin/start-slaves.sh to start a
worker on each of those machines (see the sketch below).
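
For the two-slave setup described in this thread, a minimal sketch (the
x.x.x.175 and x.x.x.176 addresses are placeholders, not taken from the
thread):

# contents of conf/slaves on the master, one worker host per line
x.x.x.175
x.x.x.176

# then, from the master (start-slaves.sh connects over SSH, so
# passwordless SSH from the master to each slave is assumed):
./sbin/start-slaves.sh

The matching ./sbin/stop-slaves.sh stops those workers again.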

Note that this is already set up correctly if you're using the spark-ec2
scripts.


On Tue, Jul 1, 2014 at 5:53 AM, MEETHU MATHEW meethu2...@yahoo.co.in
wrote:

 Yes.

 Thanks & Regards,
 Meethu M


   On Tuesday, 1 July 2014 6:14 PM, Akhil Das ak...@sigmoidanalytics.com
 wrote:


  Is this command working??

 java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
   -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
   org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077

 Thanks
 Best Regards


 On Tue, Jul 1, 2014 at 6:08 PM, MEETHU MATHEW meethu2...@yahoo.co.in
 wrote:


 Hi,

 I am using Spark Standalone mode with one master and 2 slaves. I am not
 able to start the workers and connect them to the master using

 ./bin/spark-class org.apache.spark.deploy.worker.Worker
 spark://x.x.x.174:7077

 The log says

  Exception in thread "main" org.jboss.netty.channel.ChannelException:
 Failed to bind to: master/x.x.x.174:0
  at
 org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
  ...
  Caused by: java.net.BindException: Cannot assign requested address

 When I try to start the worker from the slaves using the following java
 command, it runs without any exception:

 java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
   -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
   org.apache.spark.deploy.worker.Worker spark://:master:7077

 Thanks & Regards,
 Meethu M