Hi,

Good question.  The extra memory comes from
spark.yarn.executor.memoryOverhead, the space used for the application
master, and the way YARN rounds requests up.  This explains it in a
little more detail:
http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
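For concreteness, here's a back-of-the-envelope sketch in Python of how each
container request gets sized.  It assumes Spark 1.x defaults (overhead =
max(384 MB, 7% of executor memory)) and a 1 GB
yarn.scheduler.minimum-allocation-mb; your cluster's values may differ:

    import math

    def container_size_mb(requested_memory_mb,
                          overhead_mb=None,
                          min_allocation_mb=1024):
        # spark.yarn.executor.memoryOverhead defaults to
        # max(384, 0.07 * executor memory) in Spark 1.x
        if overhead_mb is None:
            overhead_mb = max(384, int(0.07 * requested_memory_mb))
        total = requested_memory_mb + overhead_mb
        # YARN rounds every container request up to a multiple of
        # yarn.scheduler.minimum-allocation-mb
        return math.ceil(total / min_allocation_mb) * min_allocation_mb

    executors = 3
    per_executor = container_size_mb(3 * 1024)   # --executor-memory 3g -> 4096 MB
    am_container = container_size_mb(512)        # AM gets its own small container -> 1024 MB
    print(executors * per_executor + am_container)  # 13312 MB with these defaults

The exact totals depend on your cluster's minimum allocation and AM settings,
but the overhead, the application master's container (which also takes a
vcore), and the round-up are what push the numbers above
executor-memory * numExecutors and executor-cores * numExecutors.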

-Sandy

On Tue, Apr 28, 2015 at 7:12 PM, bit1...@163.com <bit1...@163.com> wrote:

> Hi, guys,
> I have the following computation with 3 workers:
> spark-sql --master yarn --executor-memory 3g --executor-cores 2
> --driver-memory 1g -e 'select count(*) from table'
>
> The resources used are shown below on the UI:
> I don't understand why the memory used is 15GB and vcores used is 5. I
> think the memory used should be executor-memory*numOfWorkers=3G*3=9G, and
> the vcores used should be executor-cores*numOfWorkers=6
>
> Can you please explain the result? Thanks.
>
>
>
> ------------------------------
> bit1...@163.com
>
