Seems like the reserved memory was just not enough. Glad that it is working
now.

Terence

Sent from my iPhone

> On Jun 3, 2016, at 2:13 PM, Keith Turner <[email protected]> wrote:
> 
> I just completed a 3-day test[1] of Fluo on EC2 using the Twill reserved
> memory setting, and YARN did not kill a Fluo process once[2].  I wish I had
> kept Grafana plots from previous runs where YARN was killing Fluo processes
> all of the time.
> 
> [1]: http://fluo.io/blog/2016/05/17/webindex-long-run-2/
> [2]:
> http://fluo.io/blog/2016/05/17/webindex-long-run-2/#preventing-yarn-from-killing-workers
> 
>> On Fri, Apr 22, 2016 at 5:41 PM, Terence Yim <[email protected]> wrote:
>> 
>> Hi Keith,
>> 
>> Yes. It can be controlled by the "twill.java.reserved.memory.mb" setting in
>> the YarnConfiguration passed to the "YarnTwillRunnerService". The actual
>> "-Xmx" set for the java process is the container resource memory minus the
>> value set in the "twill.java.reserved.memory.mb". By default the reserved
>> memory is 200MB. It also has a minimum heap ratio constant (0.7), meaning
>> the heap size cannot be less than 0.7 of the container size.
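A minimal sketch of wiring this setting up, based on the class and key named above. The 512MB value and the "zkConnectStr" ZooKeeper connect string are assumptions for illustration; this is a fragment, not a complete program:

```java
// Sketch only: raising the reserved memory before creating the runner.
// "zkConnectStr" is an assumed ZooKeeper connection string.
YarnConfiguration conf = new YarnConfiguration();
conf.setInt("twill.java.reserved.memory.mb", 512);
TwillRunnerService runner = new YarnTwillRunnerService(conf, zkConnectStr);
runner.start();
```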
>> 
>> E.g.
>> If container size = 1GB, reserved = 200MB, then -Xmx = 800MB
>> If container size = 1GB, reserved = 500MB, then -Xmx = 700MB (because of
>> the min heap ratio).
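The arithmetic above can be sketched as a small self-contained helper. The class and method names here are illustrative, not Twill's actual API; the constants (200MB default reserved, 0.7 minimum heap ratio) are the ones stated in the mail:

```java
// Sketch of how the -Xmx value is derived from the container size:
// heap = container minus reserved, but never below 70% of the container.
public class HeapSizeSketch {
    static final int DEFAULT_RESERVED_MB = 200;
    static final double HEAP_MIN_RATIO = 0.7;

    static int maxHeapMb(int containerMb, int reservedMb) {
        int heap = containerMb - reservedMb;
        int minHeap = (int) Math.ceil(containerMb * HEAP_MIN_RATIO);
        return Math.max(heap, minHeap);
    }

    public static void main(String[] args) {
        // Container 1GB (1000MB), reserved 200MB -> -Xmx 800MB
        System.out.println(maxHeapMb(1000, 200));
        // Container 1GB, reserved 500MB -> -Xmx 700MB (min heap ratio kicks in)
        System.out.println(maxHeapMb(1000, 500));
    }
}
```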
>> 
>> Terence
>> 
>>> On Fri, Apr 22, 2016 at 9:32 AM, Keith Turner <[email protected]> wrote:
>>> 
>>> I am running into a problem where YARN is killing my containers started
>>> by Twill because it's using too much memory.  I would like to increase
>>> the gap between the java -Xmx setting and the YARN memory limit.  For
>>> example, make the -Xmx setting 75% of the YARN memory limit.  Is there a
>>> way I can do this in Twill?
>>> 
>>> Thanks,
>>> 
>>> Keith
>> 
