[ https://issues.apache.org/jira/browse/MAPREDUCE-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ray Chiang updated MAPREDUCE-6622:
----------------------------------
    Release Note: 
Two recommendations for the mapreduce.jobhistory.loadedtasks.cache.size 
property:
1) For every 100,000 entries of cache size, allow about 1.2GB of Job History 
Server heap.  For example, mapreduce.jobhistory.loadedtasks.cache.size=500000 
pairs with a heap size of 6GB (see the example configuration below).
2) Make sure that the cache size is larger than the number of tasks in the 
largest job run on the cluster.  It may be a good idea to set the value 
somewhat higher (say, by 20%) to allow for job size growth.
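
For reference, a minimal sketch of how the 500000/6GB pairing from 
recommendation 1 might be applied in a deployment.  The property name comes 
from this issue; the heap setting assumes the JHS heap is controlled through 
HADOOP_JOB_HISTORYSERVER_HEAPSIZE (in MB) in mapred-env.sh, and the exact 
values should be tuned per cluster:

    <!-- mapred-site.xml -->
    <property>
      <name>mapreduce.jobhistory.loadedtasks.cache.size</name>
      <value>500000</value>
    </property>

    # mapred-env.sh (heap in MB)
    export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=6000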

  was:
Two recommendations for the mapreduce.jobhistory.loadedtasks.cache.size 
property:
1) For every 100k of cache size, set the heap size of the Job History Server to 
1.2GB.  For example, mapreduce.jobhistory.loadedtasks.cache.size=500, heap 
size=6GB.
2) Make sure that the cache size is larger than the number of tasks required 
for the largest job run on the cluster.  It might be a good idea to set the 
value slightly higher (say, 20%) in order to allow for job size growth.


> Add capability to set JHS job cache to a task-based limit
> ---------------------------------------------------------
>
>                 Key: MAPREDUCE-6622
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6622
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: jobhistoryserver
>    Affects Versions: 2.7.2
>            Reporter: Ray Chiang
>            Assignee: Ray Chiang
>              Labels: supportability
>         Attachments: MAPREDUCE-6622.001.patch, MAPREDUCE-6622.002.patch, 
> MAPREDUCE-6622.003.patch, MAPREDUCE-6622.004.patch, MAPREDUCE-6622.005.patch, 
> MAPREDUCE-6622.006.patch, MAPREDUCE-6622.007.patch
>
>
> When using the property mapreduce.jobhistory.loadedjobs.cache.size, the cached 
> jobs can vary widely in size.  This is generally not a problem when job sizes 
> are uniform or small, but when jobs are very large (say, greater than 250k 
> tasks), the JHS heap can grow tremendously.
> When multiple very large jobs are cached, the JHS can lock up and spend all of 
> its time in GC.  However, since the cache is holding on to all of the jobs, not 
> much heap space can be freed.
> Because the amount of heap used is directly proportional to the total number of 
> tasks loaded, adding a property that caps the number of tasks allowed in the 
> cache should help prevent the JHS from locking up.
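
The sketch below illustrates the idea described above (it is not the attached 
patches themselves): a cache whose eviction limit is expressed in total tasks 
rather than in number of jobs, here using Guava's CacheBuilder with a weigher. 
The LoadedJob class, job IDs, and values are hypothetical stand-ins.

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.Weigher;

    public class TaskWeightedCacheSketch {

      // Hypothetical stand-in for a fully loaded job in the JHS cache.
      static class LoadedJob {
        final String jobId;
        final int totalTasks;
        LoadedJob(String jobId, int totalTasks) {
          this.jobId = jobId;
          this.totalTasks = totalTasks;
        }
      }

      public static void main(String[] args) {
        // Cap the cache by total tasks, mirroring the intent of
        // mapreduce.jobhistory.loadedtasks.cache.size (value illustrative).
        long loadedTasksCacheSize = 500000L;

        Cache<String, LoadedJob> jobCache = CacheBuilder.newBuilder()
            .maximumWeight(loadedTasksCacheSize)
            .weigher(new Weigher<String, LoadedJob>() {
              @Override
              public int weigh(String jobId, LoadedJob job) {
                // Each cached job "weighs" as many units as it has tasks,
                // so the cap tracks tasks loaded, not jobs loaded.
                return job.totalTasks;
              }
            })
            .build();

        // Two 250k-task jobs already reach the 500k weight budget; caching a
        // third large job forces eviction instead of unbounded heap growth.
        jobCache.put("job_1456000000000_0001",
            new LoadedJob("job_1456000000000_0001", 250000));
        jobCache.put("job_1456000000000_0002",
            new LoadedJob("job_1456000000000_0002", 250000));
        jobCache.put("job_1456000000000_0003",
            new LoadedJob("job_1456000000000_0003", 100000));

        System.out.println("Jobs currently cached: " + jobCache.size());
      }
    }

Since Hadoop already bundles Guava, expressing the limit as a cache weight 
keeps the existing eviction behavior while bounding the quantity that actually 
drives heap usage.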



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
