[ https://issues.apache.org/jira/browse/YARN-4697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150523#comment-15150523 ]

Naganarasimha G R commented on YARN-4697:
-----------------------------------------

Hi [~haibochen],
Thanks for working on this patch, and yes, it would be better to limit the 
number of threads in the executor service (a sketch of what I mean follows 
the list below). A few nits/queries on the patch:
* If *YarnConfiguration* is modified, then *yarn-default.xml* should be 
modified as well. But do we need to keep this configurable in the first 
place? IMO just having a fixed value like 50 should be safe.
* *threadPool* can have default (package-private) access instead of public, 
so that it is accessible only to test cases.
* I didn't understand the need for the *Semaphore*: in the *Runnable*, 
immediately after *semaphore.acquire()*, we release it in the finally block. 
And even without it, I thought we could submit multiple runnables (say 5/10) 
with a short sleep and check whether the number of live threads with 
LogAggregationService in their name is only 1, right? (A rough test sketch 
follows the bounded-pool sketch below.)
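
Something along these lines for bounding the pool; the cap of 50 and the 
thread-name format here are only illustrative, not taken from the patch:
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import com.google.common.util.concurrent.ThreadFactoryBuilder;

// Bound the pool instead of using Executors.newCachedThreadPool().
ThreadFactory tf = new ThreadFactoryBuilder()
    .setNameFormat("LogAggregationService #%d")
    .build();
ExecutorService threadPool = Executors.newFixedThreadPool(50, tf);
{code}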

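And the test idea from the last bullet could look roughly like this (pool 
size, task count and sleep durations are illustrative; assumes the same 
imports as above plus org.junit.Assert, inside a test method declared with 
throws Exception):
{code:java}
final int POOL_SIZE = 5;
ExecutorService threadPool = Executors.newFixedThreadPool(POOL_SIZE,
    new ThreadFactoryBuilder()
        .setNameFormat("LogAggregationService #%d").build());
// Submit more tasks than the pool allows threads for.
for (int i = 0; i < 10; i++) {
  threadPool.submit(new Runnable() {
    @Override
    public void run() {
      try {
        Thread.sleep(200); // keep each worker thread busy briefly
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  });
}
Thread.sleep(50); // give the pool time to start its workers
int live = 0;
for (Thread t : Thread.getAllStackTraces().keySet()) {
  if (t.getName().startsWith("LogAggregationService")) {
    live++;
  }
}
Assert.assertTrue("expected at most " + POOL_SIZE + " threads, saw " + live,
    live <= POOL_SIZE);
threadPool.shutdownNow();
{code}
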
> NM aggregation thread pool is not bound by limits
> -------------------------------------------------
>
>                 Key: YARN-4697
>                 URL: https://issues.apache.org/jira/browse/YARN-4697
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>            Reporter: Haibo Chen
>            Assignee: Haibo Chen
>         Attachments: yarn4697.001.patch
>
>
> In LogAggregationService.java we create a thread pool to upload logs from 
> the NodeManager to HDFS if log aggregation is turned on. This is a cached 
> thread pool, which, based on the javadoc, is an unlimited pool of threads.
> If we have had a problem with log aggregation, this could cause a problem 
> on restart: the number of threads created at that point could be huge and 
> will put a large load on the NameNode, and in the worst case could even 
> bring it down due to file descriptor issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
