Re: Why are executors on slave never used?

2015-09-22 Thread Joshua Fox
Thank you Hemant and Andrew, I got it working.

On Mon, Sep 21, 2015 at 11:48 PM, Andrew Or wrote:
> Hi Joshua,
>
> What cluster manager are you using, standalone or YARN? (Note that
> standalone here does not mean local mode).
>
> If standalone, you need to do

Re: Why are executors on slave never used?

2015-09-21 Thread Hemant Bhanawat
When you specify the master as local[2], it starts the Spark components in a single JVM. You need to specify the master correctly.

> I have a default AWS EMR cluster (1 master, 1 slave) with Spark. When I
> run a Spark process, it works fine -- but only on the master, as if it
> were standalone. The web-UI
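A minimal PySpark sketch of the single-JVM mode Hemant describes (the app name is an assumption): with a `local[N]` master, the driver and all executor threads run inside one JVM on the machine launching the script, so slave nodes are never contacted.

```python
from pyspark import SparkConf, SparkContext

# "local[2]" = run Spark entirely inside this one JVM, using 2 local
# threads as executors. No slave node is ever contacted, which is why
# the web-UI shows only a single localhost executor.
conf = SparkConf().setMaster("local[2]").setAppName("single-jvm-demo")
sc = SparkContext(conf=conf)

print(sc.master)  # confirms the single-JVM local mode
sc.stop()
```

This is a configuration sketch, not a runnable test case on its own: it assumes a local Spark installation with `pyspark` on the Python path.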

Why are executors on slave never used?

2015-09-21 Thread Joshua Fox
I have a default AWS EMR cluster (1 master, 1 slave) with Spark. When I run a Spark process, it works fine -- but only on the master, as if it were standalone. The web-UI and logging code show only 1 executor, the localhost. How can I diagnose this? (I create *SparkConf*, in Python, with

Re: Why are executors on slave never used?

2015-09-21 Thread Andrew Or
Hi Joshua,

What cluster manager are you using, standalone or YARN? (Note that standalone here does not mean local mode).

If standalone, you need to do `setMaster("spark://[CLUSTER_URL]:7077")`, where CLUSTER_URL is the machine that started the standalone Master. If YARN, you need to do
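A sketch of the standalone fix Andrew describes, in Python since that is what the original poster uses. `CLUSTER_URL` is Andrew's placeholder for the host that started the standalone Master; the app name and the `defaultParallelism` sanity check are my assumptions, not part of the thread.

```python
from pyspark import SparkConf, SparkContext

# Point the driver at the standalone Master rather than a local[N]
# URL. CLUSTER_URL is a placeholder: substitute the hostname of the
# machine running the standalone Master (its web UI is on port 8080).
conf = (SparkConf()
        .setMaster("spark://CLUSTER_URL:7077")
        .setAppName("use-slave-executors"))
sc = SparkContext(conf=conf)

# Rough sanity check: once executors on the slave register,
# defaultParallelism reflects the cluster's total cores rather than
# only the driver machine's.
print(sc.defaultParallelism)
sc.stop()
```

This is a configuration sketch: it only runs against a live standalone cluster, and the YARN case (truncated in the reply above) takes a different master setting.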