[jira] [Resolved] (HDFS-12638) Delete copy-on-truncate block along with the original block, when deleting a file being truncated

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-12638.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.4
   3.0.1
   2.9.1
   2.10.0
   3.1.0
   2.7.5

Just committed this into the following branches:
{code}
   3c57def..7998077  branch-2 -> branch-2
   7252e18..85eb32b  branch-2.7 -> branch-2.7
   eacccf1..19c18f7  branch-2.8 -> branch-2.8
   5a8a1e6..0f5ec01  branch-2.9 -> branch-2.9
   58d849b..def87db  branch-3.0 -> branch-3.0
   a63d19d..60fd0d7  trunk -> trunk
{code}
Thank you everybody for contributing.

> Delete copy-on-truncate block along with the original block, when deleting a 
> file being truncated
> -
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Fix For: 2.7.5, 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, HDFS-12638.004.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> Active NameNode exited due to an NPE. I can confirm that the BlockCollection 
> passed in when creating ReplicationWork is null, but I do not know why 
> BlockCollection is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}
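To illustrate the failure mode, here is a minimal, hypothetical Java sketch of the kind of defensive check being discussed: ReplicationWork skips target selection for a block whose BlockCollection is gone, instead of throwing an NPE. The class and method names echo the stack trace above, but the bodies are simplified stand-ins, not the committed Hadoop code.

```java
// Simplified stand-ins for the classes named in the stack trace.
class BlockCollection { }

class ReplicationWork {
    private final BlockCollection bc; // null when the file was already deleted

    ReplicationWork(BlockCollection bc) {
        this.bc = bc;
    }

    // Instead of dereferencing a null BlockCollection (the NPE above),
    // a defensive check would skip scheduling work for orphaned blocks.
    boolean chooseTargets() {
        if (bc == null) {
            return false; // block is orphaned; nothing to replicate
        }
        return true; // real code would invoke the placement policy here
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new ReplicationWork(null).chooseTargets());                 // prints false
        System.out.println(new ReplicationWork(new BlockCollection()).chooseTargets()); // prints true
    }
}
```

The actual fix in HDFS-12638 takes a different route (deleting the copy-on-truncate block together with the original), but the sketch shows why a null BlockCollection reaching chooseTargets is fatal to the ReplicationMonitor thread.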



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-11-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/

[Nov 29, 2017 4:54:49 PM] (weichiu) HADOOP-15054. upgrade hadoop dependency on 
commons-codec to 1.11.
[Nov 29, 2017 5:43:03 PM] (weiy) HDFS-12835. Fix the javadoc errors in 
Router-based federation.
[Nov 29, 2017 7:11:36 PM] (templedf) YARN-7541. Node updates don't update the 
maximum cluster capability for
[Nov 29, 2017 9:11:14 PM] (kihwal) HDFS-11754. Make FsServerDefaults cache 
configurable. Contributed by
[Nov 29, 2017 10:38:07 PM] (weiy) YARN-6851. Capacity Scheduler: document 
configs for controlling #
[Nov 30, 2017 1:46:16 AM] (wangda) YARN-7573. Gpu Information page could be 
empty for nodes without GPU.
[Nov 30, 2017 4:28:06 AM] (cdouglas) HDFS-12681. Make HdfsLocatedFileStatus a 
subtype of LocatedFileStatus
[Nov 30, 2017 3:39:15 PM] (rkanter) HADOOP-13493. Compatibility Docs should 
clarify the policy for what




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 231] 

Failed junit tests :

   hadoop.net.TestDNS 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.fs.TestUnbuffer 
   hadoop.yarn.api.TestPBImplRecords 
   hadoop.yarn.server.nodemanager.webapp.TestNMWebServices 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/whitespace-eol.txt
  [8.8M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/whitespace-tabs.txt
  [288K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [156K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [376K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [44K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [80K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/609/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org


[jira] [Resolved] (HDFS-9754) Avoid unnecessary getBlockCollection calls in BlockManager

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-9754.
---
Resolution: Fixed

Resolving this, based on the discussion in HDFS-12638.
Filed HDFS-12880 instead.

> Avoid unnecessary getBlockCollection calls in BlockManager
> --
>
> Key: HDFS-9754
> URL: https://issues.apache.org/jira/browse/HDFS-9754
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.8.2, 3.0.0-alpha1, 2.9.0
>
> Attachments: HDFS-9754.000.patch, HDFS-9754.001.patch, 
> HDFS-9754.002.patch
>
>
> Currently BlockManager calls {{Namesystem#getBlockCollection}} in order to:
> 1. check if the block has already been abandoned
> 2. identify the storage policy of the block
> 3. meta save
> For #1 we can use BlockInfo's internal state instead of checking if the 
> corresponding file still exists.
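A hedged sketch of point #1 above: the block answers "was I abandoned?" from its own state, with no {{Namesystem#getBlockCollection}} lookup. Field and method names here are illustrative, not the committed Hadoop code.

```java
// Illustrative only: a BlockInfo-like object records the id of its owning
// collection and clears it on deletion, so checking whether the block was
// abandoned needs no namesystem call.
class BlockInfo {
    private long bcId; // owning BlockCollection id; -1 once abandoned

    BlockInfo(long bcId) {
        this.bcId = bcId;
    }

    // Called when the owning file is removed.
    void delete() {
        bcId = -1;
    }

    // Check #1 from the list above, answered from the block's internal state.
    boolean isDeleted() {
        return bcId == -1;
    }
}

public class Main {
    public static void main(String[] args) {
        BlockInfo b = new BlockInfo(42);
        System.out.println(b.isDeleted()); // prints false: still owned by a file
        b.delete();
        System.out.println(b.isDeleted()); // prints true: abandoned
    }
}
```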






[jira] [Created] (HDFS-12880) Disallow abandoned blocks in the BlocksMap

2017-11-30 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-12880:
--

 Summary: Disallow abandoned blocks in the BlocksMap
 Key: HDFS-12880
 URL: https://issues.apache.org/jira/browse/HDFS-12880
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.4
Reporter: Konstantin Shvachko


BlocksMap used to contain only valid blocks, that is, blocks belonging to a 
file. This issue is intended to restore that invariant. This was discussed in 
detail while fixing HDFS-12638.






[jira] [Created] (HDFS-12879) Ozone : add scm init command to document.

2017-11-30 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12879:
-

 Summary: Ozone : add scm init command to document.
 Key: HDFS-12879
 URL: https://issues.apache.org/jira/browse/HDFS-12879
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ozone
Reporter: Chen Liang
Priority: Minor


When an Ozone cluster is initialized, the command {{hdfs scm -init}} needs to 
be called before starting SCM through {{hdfs --daemon start scm}}. But it seems 
this command is not documented. We should add a note about it to the 
documentation.






Re: [VOTE] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-11-30 Thread Wangda Tan
+1, thanks Sunil!

Wangda Tan

On Wed, Nov 29, 2017 at 5:56 PM, Sunil G  wrote:

> Hi All,
>
>
> Based on the discussion at [1], I'd like to start a vote to merge feature
> branch
>
> YARN-5881 to trunk. Vote will run for 7 days, ending Wednesday Dec 6 at
> 6:00PM PDT.
>
>
> This branch adds support to configure queue capacity as absolute resource
> in
>
> capacity scheduler. This will help admins who want fine control of
> resources of queues.
>
>
> Feature development is done at YARN-5881 [2], jenkins build is here
> (YARN-7510 [3]).
>
> All required tasks for this feature are committed. This feature changes
> RM’s Capacity Scheduler only,
>
> and we did extensive tests for the feature in the last couple of months
> including performance tests.
>
>
> Key points:
>
> - The feature is turned off by default; absolute resources must be
> configured to enable it.
>
> - Detailed documentation about how to use this feature is done as part of
> [4].
>
> - No major performance degradation is observed with this branch work. SLS
> and UT performance
>
> tests are done.
>
>
> There were 11 subtasks completed for this feature.
>
>
> Huge thanks to everyone who helped with reviews, commits, guidance, and
>
> technical discussion/design, including Wangda Tan, Vinod Vavilapalli,
> Rohith Sharma K S, Eric Payne .
>
>
> [1] :
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%
> 3CCACYiTuhKhF1JCtR7ZFuZSEKQ4sBvN_n_tV5GHsbJ3YeyJP%2BP4Q%
> 40mail.gmail.com%3E
>
> [2] : https://issues.apache.org/jira/browse/YARN-5881
>
> [3] : https://issues.apache.org/jira/browse/YARN-7510
>
> [4] : https://issues.apache.org/jira/browse/YARN-7533
>
>
> Regards
>
> Sunil and Wangda
>


[jira] [Created] (HDFS-12878) Add CryptoOutputStream to WebHdfsFileSystem append call.

2017-11-30 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-12878:
-

 Summary: Add CryptoOutputStream to WebHdfsFileSystem append call.
 Key: HDFS-12878
 URL: https://issues.apache.org/jira/browse/HDFS-12878
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rushabh S Shah









[jira] [Created] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chris Douglas (JIRA)
Chris Douglas created HDFS-12877:


 Summary: Add open(PathHandle) with default buffersize
 Key: HDFS-12877
 URL: https://issues.apache.org/jira/browse/HDFS-12877
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Chris Douglas
Assignee: Chris Douglas
Priority: Trivial


HDFS-7878 added an overload for {{FileSystem::open}} that requires the user to 
provide a buffer size when opening by {{PathHandle}}. Similar to 
{{open(Path)}}, it'd be convenient to have another overload that takes the 
default from the config.
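A minimal sketch of the requested convenience overload. The method shape follows the JIRA description, but the class names and config plumbing below are stand-ins; the committed Hadoop change may differ.

```java
// Stand-in types; the real code would use org.apache.hadoop.fs classes.
class PathHandle { }

class SketchFileSystem {
    static final String IO_FILE_BUFFER_SIZE_KEY = "io.file.buffer.size";
    static final int IO_FILE_BUFFER_SIZE_DEFAULT = 4096;

    // Stand-in for Hadoop's Configuration lookup.
    private final java.util.Map<String, Integer> conf = new java.util.HashMap<>();

    int getConfInt(String key, int dflt) {
        return conf.getOrDefault(key, dflt);
    }

    // Existing overload: caller must supply a buffer size (per HDFS-7878).
    String open(PathHandle fd, int bufferSize) {
        return "opened(bufferSize=" + bufferSize + ")";
    }

    // Proposed overload: mirror open(Path) by pulling the buffer size
    // from configuration instead of making the caller supply it.
    String open(PathHandle fd) {
        return open(fd, getConfInt(IO_FILE_BUFFER_SIZE_KEY, IO_FILE_BUFFER_SIZE_DEFAULT));
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new SketchFileSystem().open(new PathHandle()));
        // prints opened(bufferSize=4096)
    }
}
```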






Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-30 Thread Allen Wittenauer

> On Nov 30, 2017, at 1:07 AM, Rohith Sharma K S  
> wrote:
> 
> 
> >. If ATSv1 isn’t replaced by ATSv2, then why is it marked deprecated?
> Ideally it should not be. Can you point out where it is marked as deprecated? 
> If it is in the historyserver daemon start, that change was made long ago, 
> when the timeline server was added. 


Ahh, I see where all the problems lie.  No one is paying attention to the 
deprecation message because it’s kind of oddly worded:

* It really means “don’t use ‘yarn historyserver’ use ‘yarn timelineserver’ ” 
* ‘yarn historyserver’ was removed from the documentation in 2.7.0
* ‘yarn historyserver’ doesn’t appear in the yarn usage output
* ‘yarn timelineserver’ runs the exact same class

There’s no reason for ‘yarn historyserver’ to exist in 3.x.  Just run ‘yarn 
timelineserver’ instead.



[jira] [Created] (HDFS-12876) Ozone: moving NodeType from OzoneConsts to Ozone.proto

2017-11-30 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12876:
--

 Summary: Ozone: moving NodeType from OzoneConsts to Ozone.proto
 Key: HDFS-12876
 URL: https://issues.apache.org/jira/browse/HDFS-12876
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Assignee: Nanda kumar


Since we will be using {{NodeType}} in the Service Discovery API (HDFS-12868), 
it's better to have the enum in Ozone.proto than in OzoneConsts. We need 
{{NodeType}} in protobuf messages.






Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-30 Thread Rohith Sharma K S
Hi Vinod/Allen

bq. We need to figure out if this V1 TimelineService should even be supported
given ATSv2.
ATSv2 is in the alpha phase. We should continue to support Timeline Service V1
until we have detailed entity-level ACLs in V2. There are also proposals for
upgrade/migration paths from TSv1 to TSv2.

bq. If ATSv1 isn’t replaced by ATSv2, then why is it marked deprecated?
Ideally it should not be. Can you point out where it is marked as
deprecated? If it is in the historyserver daemon start, that change was made
long ago, when the timeline server was added.

Thanks & Regards
Rohith Sharma K S

On 26 November 2017 at 03:28, Allen Wittenauer 
wrote:

>
> > On Nov 21, 2017, at 2:16 PM, Vinod Kumar Vavilapalli 
> wrote:
> >
> >>> - $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start historyserver doesn't
> even work. Not just deprecated in favor of timelineserver as was advertised.
> >>
> >>  This works for me in trunk and the bash code doesn’t appear to
> have changed in a very long time.  Probably something local to your
> install.  (I do notice that the deprecation message says “starting” which
> is awkward when the stop command is given though.)  Also: is the
> deprecation message even true at this point?
> >
> >
> > Sorry, I mischaracterized the problem.
> >
> > The real issue is that I cannot use this command line when the MapReduce
> JobHistoryServer is already started on the same machine.
>
> The specific string is:
>
> hadoop-${HADOOP_IDENT_STRING}-${HADOOP_SUBCMD}.pid
>
> More specifically, the pid handling code will conflict if the
> following are true:
>
> * same machine (obviously)
> * same subcommand name
> * same HADOOP_IDENT_USER: which by default is the user
> name of whatever starts it… but was designed to be overridden way back in
> hadoop 0.X.
>
> … which means for most production setups, this is probably not
> a real problem.
>
>
> > So, it looks like in shell-scripts, there can ever be only one daemon of
> a given name, irrespective of which daemon scripts are invoked.
>
> Correct.  Naming multiple, different daemons the same thing is
> extremely anti-user.   In fact, I thought this was originally about the
> “other” history server.
>
> >
> > We need to figure out two things here
> >  (a) The behavior of this command. Clearly, it will conflict with the
> MapReduce JHS - only one of them can be started on the same node.
>
> … by the same user, by default.  Started by a different user or
> different HADOOP_IDENT_USER, it will come up just fine.
>
> >  (b) We need to figure out if this V1 TimelineService should even be
> support given ATSv2.
>
> If ATSv1 isn’t replaced by ATSv2, then why is it marked deprecated?
>
> > On Nov 22, 2017, at 9:45 AM, Brahma Reddy Battula 
> wrote:
> >
> > 1) Change the name
> > 2) Create PID based on the CLASS Name, here applicationhistoryserver and
> jobhistoryserver
> > 3) Use same as branch-2.9..i.e suffixing with mapred or yarn
> >
> >
> > @allen, any thoughts on this..?
>
> Using the classname works in this instance, but just as we saw
> with the router daemons, people tend to use the same class names when
> building different components. It also means that if different daemons can
> be started in different ways from the same class dependent upon options,
> this conflict will still exist.  Also, with dynamic commands, it is very
> possible to run the same daemon from multiple start points.
>
> As part of this discussion, I think it’s important to recognize:
>
> a) This is likely to be primarily impacting developers.
> b) We’re talking about two daemons where one has been deprecated.
> c) Calling two different daemons “history server” is just awful from an
> end user perspective.
> d) There is already a work around in place if one absolutely needs to run
> both on the same node as the same user, just as people do with datanode and
> nodemanager today.
>
>
>
>
>


Re: [DISCUSS] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-11-30 Thread Sunil G
Thanks everyone for the feedback!

Based on positive feedback, we started voting thread in
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%3CCACYiTuhzMrd_kFRT7_f4VBHejrajbCnVB1wmgHLMLXRr58y0MA%40mail.gmail.com%3E

@Carlo: Yes, this change should be straightforward, except for some minor
conflicts.

- Sunil



On Thu, Nov 30, 2017 at 9:34 AM Carlo Aldo Curino 
wrote:

> I haven't tested this, but I support the merge as the patch is very much
> needed for MS usecases as well... Can this be cherry-picked on 2.9 easily?
>
> Thanks for this contribution!
>
> Cheers,
> Carlo
>
> On Nov 29, 2017 6:34 PM, "Weiwei Yang"  wrote:
>
>> Hi Sunil
>>
>> +1 from my side.
>> Actually, we have applied some of these patches to our production cluster
>> since September this year, on over 2,000 nodes, and they work nicely. +1 for
>> the merge. I am pretty sure this feature will help a lot of users, especially
>> those on the cloud. Thanks for getting this done, great job!
>>
>> --
>> Weiwei
>>
>> On 29 Nov 2017, 9:23 PM +0800, Rohith Sharma K S <
>> rohithsharm...@apache.org>, wrote:
>> +1, thanks Sunil for working on this feature!
>>
>> -Rohith Sharma K S
>>
>> On 24 November 2017 at 23:19, Sunil G  wrote:
>>
>> Hi All,
>>
>> We would like to bring up the discussion of merging “absolute min/max
>> resources support in capacity scheduler” branch (YARN-5881) [2] into trunk
>> in a few weeks. The goal is to get it in for Hadoop 3.1.
>>
>> *Major work happened in this branch*
>>
>> - YARN-6471. Support to add min/max resource configuration for a queue
>> - YARN-7332. Compute effectiveCapacity per each resource vector
>> - YARN-7411. Inter-Queue preemption's computeFixpointAllocation need to
>> handle absolute resources.
>>
>> *Regarding design details*
>>
>> Please refer [1] for detailed design document.
>>
>> *Regarding to testing:*
>>
>> We did extensive tests for the feature in the last couple of months.
>> Comparing to latest trunk.
>>
>> - For SLS benchmark: We didn't see an observable performance gap in
>> simulated tests based on 8K-node SLS traces (1 PB memory). We got 3k+
>> containers allocated per second.
>>
>> - For microbenchmark: We used the performance test cases added by YARN-6775;
>> they did not show much performance regression compared to trunk.
>>
>> *YARN-5881* >
>> #ResourceTypes = 2. Avg of fastest 20: 55294.52
>> #ResourceTypes = 2. Avg of fastest 20: 55401.66
>>
>> *trunk*
>> #ResourceTypes = 2. Avg of fastest 20: 55865.92
>> #ResourceTypes = 2. Avg of fastest 20: 55096.418
>>
>> *Regarding to API stability:*
>>
>> All newly added @Public APIs are @Unstable.
>>
>> The documentation jira [3] should help provide detailed configuration
>> details. This feature works end-to-end; we have been running it in our
>> development cluster for the last couple of months, and it has undergone a
>> good amount of testing. Branch code is run against trunk and tracked via [4].
>>
>> We would love to get your thoughts before opening a voting thread.
>>
>> Special thanks to the team of folks who worked hard and contributed towards
>> this effort, including design discussions / patches / reviews, etc.: Wangda
>> Tan, Vinod Kumar Vavilapalli, Rohith Sharma K S.
>>
>> [1] :
>> https://issues.apache.org/jira/secure/attachment/
>> 12855984/YARN-5881.Support.Absolute.Min.Max.Resource.In.
>> Capacity.Scheduler.design-doc.v1.pdf
>> [2] : https://issues.apache.org/jira/browse/YARN-5881
>>
>> [3] : https://issues.apache.org/jira/browse/YARN-7533
>>
>> [4] : https://issues.apache.org/jira/browse/YARN-7510
>>
>> Thanks,
>>
>> Sunil G and Wangda Tan
>>
>>