On Wed, Jul 16, 2014 at 12:36 PM, Matt Work Coarr
<mattcoarr.w...@gmail.com> wrote:
> Thanks Marcelo, I'm not seeing anything in the logs that clearly explains
> what's causing this to break.
>
> One interesting point that we just discovered is that if we run the driver
> and the slave (worker) on the same host it runs, but if we run the driver on
> a separate host it does not run.

By "executor log" I meant the log of the process launched
by the worker, not the worker's own log. In my CDH-based Spark install,
those end up in /var/run/spark/work.
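
For reference, a rough sketch of where to look: /var/run/spark/work is the CDH layout mentioned above, while a vanilla standalone install defaults to $SPARK_HOME/work on each worker host (SPARK_WORKER_DIR is an optional override; the directory names below are the usual standalone layout, not something specific to this cluster):

```shell
# Sketch: list executor logs under the worker's work directory.
# Each application gets app-<timestamp>-<id>/<executor-id>/ containing
# the executor's stdout and stderr.
WORK_DIR=${SPARK_WORKER_DIR:-/var/run/spark/work}
find "$WORK_DIR" -maxdepth 3 \( -name stderr -o -name stdout \) 2>/dev/null || true
```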

If you look at your worker log, you'll see it launching the executor
process, so there should be something useful in there.

Since you say it works when both run on the same node, that
probably points to a communication issue: the executor needs
to connect back to the driver. Check that no firewall is blocking
the ports Spark tries to use. (That's one of the
non-resource-related cases that will cause that message.)
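
One way to rule the firewall in or out is to pin the driver's port instead of letting Spark pick a random ephemeral one, then allow that port explicitly between the hosts. A minimal spark-defaults.conf sketch (the port number is an arbitrary example; which *.port properties exist varies by Spark version, so check the configuration docs for your release):

```
# spark-defaults.conf (sketch): fix the port executors use to connect
# back to the driver, so a firewall rule can allow it explicitly.
# By default Spark picks a random port, which is hard to open ahead of time.
spark.driver.port    51000
# Later releases expose further *.port properties (e.g. spark.blockManager.port).
```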

-- 
Marcelo
