[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421840#comment-13421840
 ] 

Alejandro Abdelnur commented on MAPREDUCE-4334:
-----------------------------------------------

The patch has TAB characters; it should not. Indentation should be 2 spaces.

* ContainerExecutor.java

Instead of having 2 different ConcurrentMaps, why not have one that holds a data 
structure with both pidFiles and cgroupFiles? (See the sketch below.)

Why do we need read/write locks when accessing a ConcurrentMap?
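
Something along these lines would do (just a sketch; the ContainerFiles and 
ContainerFileRegistry names, fields and methods are made up here, not taken from 
the patch). Since ConcurrentHashMap's putIfAbsent/get/remove are already atomic, 
the extra locks should not be needed for simple lookups and updates:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ContainerId;

/** Hypothetical holder keeping both per-container files together. */
class ContainerFiles {
  final Path pidFile;
  final Path cgroupFile;

  ContainerFiles(Path pidFile, Path cgroupFile) {
    this.pidFile = pidFile;
    this.cgroupFile = cgroupFile;
  }
}

/** Hypothetical registry replacing the two separate ConcurrentMaps. */
class ContainerFileRegistry {
  // One map instead of two; the map's own atomic operations make an
  // extra ReadWriteLock unnecessary.
  private final ConcurrentMap<ContainerId, ContainerFiles> files =
      new ConcurrentHashMap<ContainerId, ContainerFiles>();

  void register(ContainerId id, Path pidFile, Path cgroupFile) {
    files.putIfAbsent(id, new ContainerFiles(pidFile, cgroupFile));
  }

  ContainerFiles lookup(ContainerId id) {
    return files.get(id);
  }

  void unregister(ContainerId id) {
    files.remove(id);
  }
}
{code}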

* DefaultContainerExecutor.java

The body of the for loop that adds the process ID to the cgroup should be 
enclosed in { }, even if it is a single line.
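
For example (the class, method and variable names below are illustrative only, 
not from the patch):

{code}
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.List;

class CgroupTaskWriter {
  void addToCgroups(List<String> cgroupTasksFiles, String pid)
      throws IOException {
    for (String tasksFile : cgroupTasksFiles) {
      appendPid(tasksFile, pid);  // single statement, but still braced
    }
  }

  private void appendPid(String tasksFile, String pid) throws IOException {
    Writer w = new FileWriter(tasksFile, true);
    try {
      w.write(pid + "\n");
    } finally {
      w.close();
    }
  }
}
{code}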

* CgroupsCreator.java

Shouldn't it, at initialization, enable or disable itself based on a config 
property that indicates whether cgroups are enabled? And if disabled, shouldn't 
all methods be NOPs?
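
Something like the following guard (the config property name and the methods are 
hypothetical, just to illustrate the idea):

{code}
import org.apache.hadoop.conf.Configuration;

public class CgroupsCreator {
  // Hypothetical property name, used here only to illustrate the guard.
  private static final String CGROUPS_ENABLED =
      "yarn.nodemanager.cgroups.enabled";

  private boolean enabled;

  public void init(Configuration conf) {
    enabled = conf.getBoolean(CGROUPS_ENABLED, false);
  }

  public void createCgroup(String containerId) {
    if (!enabled) {
      return;  // every method short-circuits to a NOP when cgroups are off
    }
    // ... create the cgroup and apply the CPU limits ...
  }
}
{code}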

> Add support for CPU isolation/monitoring of containers
> ------------------------------------------------------
>
>                 Key: MAPREDUCE-4334
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4334
>             Project: Hadoop Map/Reduce
>          Issue Type: Sub-task
>            Reporter: Arun C Murthy
>            Assignee: Andrew Ferguson
>         Attachments: MAPREDUCE-4334-executor-v1.patch, 
> MAPREDUCE-4334-pre1.patch, MAPREDUCE-4334-pre2-with_cpu.patch, 
> MAPREDUCE-4334-pre2.patch, MAPREDUCE-4334-pre3-with_cpu.patch, 
> MAPREDUCE-4334-pre3.patch, MAPREDUCE-4334-v1.patch, MAPREDUCE-4334-v2.patch
>
>
> Once MAPREDUCE-4327 gets in, it will be important to actually enforce 
> limits on CPU consumption of containers. 
> Several options spring to mind:
> # taskset (RHEL5+)
> # cgroups (RHEL6+)
