This is the command I am running:

spark-submit --deploy-mode cluster --master yarn \
  --class com.myorg.myApp s3://my-bucket/myapp-0.1.jar
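
Note that with --deploy-mode cluster on YARN, the driver runs inside the
ApplicationMaster container on a cluster node rather than on the machine
that runs spark-submit. If driver resources matter, they can be set
explicitly; a variant of the same command with illustrative values, not
the actual job settings:

spark-submit --deploy-mode cluster --master yarn \
  --driver-memory 2g --executor-memory 4g --num-executors 4 \
  --class com.myorg.myApp s3://my-bucket/myapp-0.1.jar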

On Wed, Mar 1, 2017 at 12:22 AM, Jonathan Kelly <jonathaka...@gmail.com>
wrote:

> Prithish,
>
> It would be helpful for you to share the spark-submit command you are
> running.
>
> ~ Jonathan
>
> On Sun, Feb 26, 2017 at 8:29 AM Prithish <prith...@gmail.com> wrote:
>
>> Thanks for the responses, I am running this on Amazon EMR which runs the
>> Yarn cluster manager.
>>
>> On Sat, Feb 25, 2017 at 4:45 PM, liangyhg...@gmail.com <
>> liangyhg...@gmail.com> wrote:
>>
>> Hi,
>> I think you are using the local mode of Spark. There are mainly four
>> modes: local, standalone, YARN, and Mesos. Also, "blocks" is an HDFS
>> concept, while "partitions" is a Spark concept.
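>>
>> For instance, in the spark-shell (where sc is the predefined
>> SparkContext), loading a file from HDFS yields roughly one partition per
>> HDFS block by default; the path here is hypothetical:
>>
>> // Each HDFS block of the file typically becomes one Spark partition.
>> val rdd = sc.textFile("hdfs:///data/events.log")
>> println(rdd.getNumPartitions)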
>>
>> liangyihuai
>>
>> ---Original---
>> *From:* "Jacek Laskowski "<ja...@japila.pl>
>> *Date:* 2017/2/25 02:45:20
>> *To:* "prithish"<prith...@gmail.com>;
>> *Cc:* "user"<user@spark.apache.org>;
>> *Subject:* Re: RDD blocks on Spark Driver
>>
>> Hi,
>>
>> I guess you're using local mode, which has only one executor, called the
>> driver. Is my guess correct?
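>>
>> For comparison, a minimal sketch of explicitly creating a local-mode
>> session (the app name is illustrative):
>>
>> import org.apache.spark.sql.SparkSession
>>
>> // In local mode the driver and the single executor share one JVM, so
>> // RDD blocks show up against the driver in the Executors tab.
>> val spark = SparkSession.builder()
>>   .master("local[*]")
>>   .appName("local-test")
>>   .getOrCreate()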
>>
>> Jacek
>>
>> On 23 Feb 2017 2:03 a.m., <prith...@gmail.com> wrote:
>>
>> Hello,
>>
>> I had a question: when I look at the Executors tab in the Spark UI, I
>> notice that some RDD blocks are assigned to the driver as well. Can
>> someone please tell me why?
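>>
>> For reference, a simplified sketch of the kind of code that produces RDD
>> blocks in the UI (illustrative only, not my actual job):
>>
>> val rdd = sc.parallelize(1 to 1000000)
>> rdd.cache()   // mark the RDD for in-memory storage
>> rdd.count()   // materializes the blocks shown in the Executors tab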
>>
>> Thanks for the help.
>>
