Re: problem with start-slaves.sh

2014-10-30 Thread Yana Kadiyska
To: Pagliari, Roberto Cc: user@spark.apache.org Subject: Re: problem with start-slaves.sh I see this when I start a worker and then try to start it again forgetting it's already running (I don't use start-slaves, I start the slaves individually with start-slave.sh). All this is telling

RE: problem with start-slaves.sh

2014-10-30 Thread Pagliari, Roberto
...@gmail.com] Sent: Wednesday, October 29, 2014 9:45 AM To: Pagliari, Roberto Cc: user@spark.apache.org Subject: Re: problem with start-slaves.sh I see this when I start a worker and then try to start it again forgetting it's already running (I don't use start-slaves, I

Re: problem with start-slaves.sh

2014-10-29 Thread Yana Kadiyska
I see this when I start a worker and then try to start it again forgetting it's already running (I don't use start-slaves, I start the slaves individually with start-slave.sh). All this is telling you is that there is already a running process on that machine. You can see it if you do a ps
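The advice above — check for an already-running worker with `ps`, then stop it before starting another — can be sketched as the following shell commands. This is a sketch, not the list author's exact steps: it assumes you are in `$SPARK_HOME`, and note that the arguments `start-slave.sh` expects vary between Spark versions (older releases take a worker number before the master URL). The master host/port shown is a placeholder.

```shell
# Check whether a Worker JVM is already running on this machine.
# The [W] bracket trick keeps grep from matching its own process line.
ps aux | grep '[W]orker'

# If one is running, stop it via Spark's daemon script before restarting.
# (Worker instance number 1 assumed; adjust if you run multiple workers per host.)
sbin/spark-daemon.sh stop org.apache.spark.deploy.worker.Worker 1

# Then start it cleanly, pointing at your master (URL is a placeholder):
sbin/start-slave.sh spark://master-host:7077
```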

RE: problem with start-slaves.sh

2014-10-29 Thread Pagliari, Roberto
, Roberto Cc: user@spark.apache.org Subject: Re: problem with start-slaves.sh I see this when I start a worker and then try to start it again forgetting it's already running (I don't use start-slaves, I start the slaves individually with start-slave.sh). All this is telling you

problem with start-slaves.sh

2014-10-28 Thread Pagliari, Roberto
I ran sbin/start-master.sh followed by sbin/start-slaves.sh (I built with the -Phive option to be able to interface with Hive). I'm getting this: ip_address: org.apache.spark.deploy.worker.Worker running as process . Stop it first. Am I doing something wrong? In my specific case, shark+hive is
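Since the error says a Worker is already running from an earlier launch, one way to recover is to stop everything and bring the cluster back up in order. A minimal sketch, assuming a standard standalone setup: `$SPARK_HOME` is set, `conf/slaves` lists the worker hosts, and passwordless SSH to them is configured.

```shell
cd "$SPARK_HOME"

# Stop any workers and master left over from earlier runs...
sbin/stop-slaves.sh
sbin/stop-master.sh

# ...then restart in order: master first, then the workers.
sbin/start-master.sh
sbin/start-slaves.sh
```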