Yes.
 
Thanks & Regards, 
Meethu M

On Tuesday, 1 July 2014 6:14 PM, Akhil Das <ak...@sigmoidanalytics.com> wrote:
 


Is this command working??

java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
  -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
  org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077
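
If it does, the bind failure from spark-class most likely comes from the address the worker picks up through conf/spark-env.sh or hostname resolution, since spark-class loads conf/spark-env.sh before launching the JVM while the bare java command does not. "Cannot assign requested address" on master/x.x.x.174:0 usually means the worker process is trying to bind to an address that does not belong to the slave. A minimal sketch of what you could try on the slave, assuming its bind address currently resolves to the master's IP (the <slave-ip> placeholder is illustrative; put the slave's own IP there, not x.x.x.174):

# conf/spark-env.sh on the slave: bind the worker to the slave's own address
export SPARK_LOCAL_IP=<slave-ip>

# or pass the bind address explicitly when starting the worker
./bin/spark-class org.apache.spark.deploy.worker.Worker \
  --host <slave-ip> spark://x.x.x.174:7077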



Thanks
Best Regards


On Tue, Jul 1, 2014 at 6:08 PM, MEETHU MATHEW <meethu2...@yahoo.co.in> wrote:


>
> Hi,
>
>
>I am using Spark Standalone mode with one master and 2 slaves. I am not able
>to start the workers and connect them to the master using
>
>
>./bin/spark-class org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077
>
>
>The log says
>
>
>Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to bind to: master/x.x.x.174:0
>    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
>    ...
>Caused by: java.net.BindException: Cannot assign requested address
>
>
>When I try to start the worker from the slaves using the following java
>command, it runs without any exception:
>
>
>java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar \
>  -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m \
>  org.apache.spark.deploy.worker.Worker spark://:master:7077
>
>Thanks & Regards, 
>Meethu M
