There are two cases here. If the container is killed by YARN, you can increase 
the JVM overhead. Otherwise, assuming there is no memory leak, you have to 
increase the executor memory.
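
For example, a minimal sketch of both settings, assuming Spark 1.x on YARN 
(the overhead value is only an illustration; it is in MB):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("WholeTextFilesCount")
  .set("spark.executor.memory", "5g")                // executor JVM heap
  .set("spark.yarn.executor.memoryOverhead", "1024") // off-heap headroom YARN adds on top of the heap
val sc = new SparkContext(conf)

The same can be passed on the command line via --executor-memory and 
--conf spark.yarn.executor.memoryOverhead=1024.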

Thanks.

Zhan Zhang

On Dec 15, 2015, at 9:58 PM, Eran Witkon <eranwit...@gmail.com> wrote:

If the problem is containers trying to use more memory than they are allowed, 
how do I limit them? I already have executor-memory 5G.
Eran
On Tue, 15 Dec 2015 at 23:10 Zhan Zhang <zzh...@hortonworks.com> wrote:
You should be able to get the logs from YARN with “yarn logs -applicationId xxx”, 
where you can possibly find the cause.
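
For instance (the application ID is a placeholder; you can read the real one 
off the YARN ResourceManager UI or the spark-submit output):

yarn logs -applicationId <your_application_id> | grep -i -B2 -A5 "killed"

Grepping for "killed" will often surface the line where YARN reports a 
container exceeding its memory limit, if that is indeed what happened.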

Thanks.

Zhan Zhang

On Dec 15, 2015, at 11:50 AM, Eran Witkon <eranwit...@gmail.com> wrote:

> When running
> val data = sc.wholeTextFiles("someDir/*")
> data.count()
>
> I get numerous warnings from YARN until I get an Akka association exception.
> Can someone explain what happens when Spark loads this RDD and can't fit it 
> all in memory?
> Based on the exception it looks like the server is disconnecting from YARN 
> and failing... Any idea why? The code is simple but still failing...
> Eran

