[ https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675281#comment-13675281 ]
Alejandro Abdelnur commented on YARN-689:
-----------------------------------------

[~acmurthy], can you explain why, if minimum and increment are the same (as they are today) and their value is set to a low number (e.g. 128m), the 'fragmentation' problem does not arise? Also, why do you see 'fragmentation' as an issue? We are not talking about fragmented memory that cannot be allocated because it sits in multiple chunks; if there is capacity it can be used, so there is no memory fragmentation.

> Add multiplier unit to resourcecapabilities
> -------------------------------------------
>
>                 Key: YARN-689
>                 URL: https://issues.apache.org/jira/browse/YARN-689
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: api, scheduler
>    Affects Versions: 2.0.4-alpha
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>         Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, YARN-689.patch, YARN-689.patch
>
> Currently we are overloading the minimum resource value as the actual multiplier used by the scheduler.
> Today, with the minimum memory set to 1GB, a request for 1.5GB is always translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the RegisterApplicationMasterResponse.
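The rounding behavior described above can be sketched as follows. This is a minimal illustration of the arithmetic, not YARN's actual scheduler code; the class and method names (`ResourceNormalizer`, `normalize`) are hypothetical, and the point is only how a request is clamped to the minimum and then rounded up to a multiple of the increment:

```java
// Hypothetical sketch of request normalization: clamp to the minimum,
// then round up to the next multiple of the increment. When minimum and
// increment are the same value (today's behavior), a 1.5GB request with a
// 1GB minimum is rounded up to 2GB; decoupling them avoids that.
public class ResourceNormalizer {

    /** Returns the granted memory for a request, in MB. */
    static int normalize(int requestMb, int minimumMb, int incrementMb) {
        // Never grant less than the configured minimum.
        int clamped = Math.max(requestMb, minimumMb);
        // Round up to a multiple of the increment.
        return ((clamped + incrementMb - 1) / incrementMb) * incrementMb;
    }

    public static void main(String[] args) {
        // Minimum == increment == 1024 MB: a 1536 MB request becomes 2048 MB.
        System.out.println(normalize(1536, 1024, 1024)); // 2048
        // Decoupled: minimum 1024 MB, increment 128 MB: 1536 MB is granted as-is.
        System.out.println(normalize(1536, 1024, 128));  // 1536
    }
}
```

With a small increment (e.g. 128m) the rounding overhead per request is bounded by the increment size, which is why decoupling it from the minimum reduces waste without lowering the minimum allocation itself.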