You can do ./sbin/start-slave.sh --master spark://IP:PORT. I believe you're
missing --master. In addition, it's a good idea to pass --master exactly the
Spark master's endpoint as shown in the UI at http://localhost:8080. That
should do it. If it's still not working, look at the Worker log to see where
it's trying to connect and whether it's getting any errors.
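
As a rough sketch (the hostname and port 7077 are placeholders; copy the
exact spark://... URL shown at http://localhost:8080 on the master, and note
that depending on the Spark version start-slave.sh may expect the master URL
as a positional argument rather than a --master flag):

    # On the master node
    ./sbin/start-master.sh

    # On each worker node, pointing at the URL the master UI reports
    ./sbin/start-slave.sh spark://master-host:7077

    # If the worker still doesn't show up, check its log for connection errors
    # (log file name and location may vary; this is the default logs/ directory)
    tail -n 50 logs/spark-*-org.apache.spark.deploy.worker.Worker-*.out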

On Thu, Jan 22, 2015 at 12:06 PM, riginos <samarasrigi...@gmail.com> wrote:

> I have downloaded spark-1.2.0.tgz on each of my nodes and executed ./sbt/sbt
> assembly on each of them. Then I executed ./sbin/start-master.sh on my master
> and "./bin/spark-class org.apache.spark.deploy.worker.Worker spark://IP:PORT".
> However, when I go to http://localhost:8080 I cannot see any worker. Why is
> that? Am I doing something wrong with the installation or deployment of Spark?
>
>
>
