I see this when I start a worker and then try to start it again, forgetting
it's already running (I don't use start-slaves.sh; I start the slaves
individually with start-slave.sh). All this error is telling you is that a
worker process is already running on that machine. You can see it if you do
a ps -aef | grep worker
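
For example, a quick check might look like this (a minimal sketch; the grep
pattern is just an assumption about how the worker shows up in the process
list on your machines):

    # look for a running standalone worker; the [W] bracket trick keeps
    # grep from matching its own command line in the output
    ps -aef | grep '[W]orker'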

You can also look at the Spark master UI and see if it already shows this
machine as a connected worker. If it doesn't, you might want to kill the
stale worker process and restart it.
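
Something along these lines should work (a sketch, not exact commands: the
PID and master URL below are placeholders, the start-slave.sh argument form
varies across Spark versions, and the master UI is typically at
http://<master-host>:8080 by default):

    # stop the stale worker using the PID from the ps output above
    kill 12345                                # hypothetical PID
    # then start a fresh worker pointing at your master
    sbin/start-slave.sh spark://master-host:7077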

On Tue, Oct 28, 2014 at 4:32 PM, Pagliari, Roberto <rpagli...@appcomsci.com>
wrote:

> I ran sbin/start-master.sh followed by sbin/start-slaves.sh (I built with
> the -Phive option to be able to interface with Hive).
>
> I'm getting this:
>
> ip_address: org.apache.spark.deploy.worker.Worker running as process xxxx.
> Stop it first.
>
> Am I doing something wrong? In my specific case, Shark + Hive is running on
> the nodes. Does that interfere with Spark?
>
> Thank you,
