Hi Krishna,
Thanks for your reply. I will definitely take a look at it to understand
the configuration details.
Best Regards,
Tim
On Tue, Sep 1, 2015 at 6:17 PM, Krishna Sangeeth KS <
kskrishnasange...@gmail.com> wrote:
Hi Timothy,
I think the driver memory in all your examples is more than is necessary in
typical cases, and the executor memory is on the low side.
I found this devops talk [1] from Spark Summit to be super useful in
understanding some of these configuration details.
[1]
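For instance, something along these lines is the shape I usually end up with
on YARN. The numbers, class name, and jar below are only illustrative and
would need tuning for your cluster:

    # The driver mostly coordinates; it rarely needs a large heap unless
    # you collect() big results back to it. The executors do the actual
    # work, so they should get the bulk of the memory.
    spark-submit \
      --master yarn-cluster \
      --driver-memory 2g \
      --executor-memory 8g \
      --num-executors 10 \
      --executor-cores 4 \
      --class com.example.MyApp \
      my-app.jar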
Dear Sandy,
Many thanks for your reply.
I am going to respond to your points in reverse order, if you don't mind, as
my second question is the more pressing issue for now.
> In the situation where you give more memory, but less memory overhead, and
> the job completes less quickly, have you
Added log files and diagnostics to the first and second cases and removed
the images.
Hi Timothy,
For your first question, you would need to look in the logs and provide
additional information about why your job is failing. The SparkContext
shutting down could happen for a variety of reasons.
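For a Spark-on-YARN job, the aggregated container logs are usually the place
to start, something like the following (the application id below is made up;
yours is printed by spark-submit and shown in the ResourceManager UI):

    # Fetch the stdout/stderr of every container, including the one that
    # hosted the failed executor or driver.
    yarn logs -applicationId application_1441234567890_0042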
In the situation where you give more memory, but less memory overhead, and
the job completes less quickly, have you
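The trade-off in that question comes down to how YARN sizes containers; here
is a rough sketch with made-up numbers (the max(384 MB, ~10%) default is what
the Spark 1.x YARN docs describe; the jar and class are placeholders):

    # YARN sizes each executor container as JVM heap plus overhead:
    #   container = spark.executor.memory + spark.yarn.executor.memoryOverhead
    # where the overhead defaults to roughly max(384 MB, 10% of executor
    # memory).
    #
    # Case A: 8g heap + 1g overhead -> 9g container, slack for off-heap use.
    spark-submit --master yarn-cluster \
      --executor-memory 8g \
      --conf spark.yarn.executor.memoryOverhead=1024 \
      --class com.example.MyApp my-app.jar

    # Case B: bigger heap, minimal overhead -> off-heap allocations (netty
    # buffers, thread stacks, etc.) can exceed the slack, and YARN kills the
    # container for running over its memory limit.
    spark-submit --master yarn-cluster \
      --executor-memory 10g \
      --conf spark.yarn.executor.memoryOverhead=384 \
      --class com.example.MyApp my-app.jar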
I am doing some memory tuning on my Spark job on YARN, and I notice that
different settings give different results and affect the outcome of the Spark
job run. However, I am confused and do not completely understand why this
happens, and I would appreciate it if someone could provide me with some
guidance.
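Concretely, these are the four settings I am varying; the values, class, and
jar below are just placeholders, not my actual cases:

    spark-submit \
      --master yarn-cluster \
      --driver-memory 4g \
      --executor-memory 4g \
      --conf spark.yarn.driver.memoryOverhead=512 \
      --conf spark.yarn.executor.memoryOverhead=512 \
      --class com.example.MyApp \
      my-app.jar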