RE: [VOTE] Release Apache Hadoop 2.7.5 (RC0)

2017-12-04 Thread Brahma Reddy Battula
+1  (non-binding), thanks Konstantin for driving this.


--Built from the source
--Installed 3 Node HA Cluster
--Ran basic shell commands
--Verified append/snapshot/truncate
--Ran sample jobs like pi, wordcount
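
For reference, a minimal sketch (with illustrative paths, not the exact commands
used) of how the append/snapshot/truncate checks map onto the public FileSystem
API:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RcSmokeCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    // Assumes the default filesystem is HDFS.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    Path dir = new Path("/tmp/rc-smoke");       // illustrative path
    Path file = new Path(dir, "data.txt");
    dfs.mkdirs(dir);

    // append: write an initial file, then append to it
    try (FSDataOutputStream out = dfs.create(file, true)) {
      out.writeBytes("hello\n");
    }
    try (FSDataOutputStream out = dfs.append(file)) {
      out.writeBytes("appended\n");
    }

    // snapshot: the directory must be made snapshottable first
    dfs.allowSnapshot(dir);
    dfs.createSnapshot(dir, "s0");

    // truncate: shrink the file back to its first 6 bytes ("hello\n")
    dfs.truncate(file, 6L);
  }
}
{code}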


Looks like the following commits are missing from CHANGES.txt:

MAPREDUCE-6975
HADOOP-14919
HDFS-12596
YARN-7084
HADOOP-14881
HADOOP-14827
HDFS-12832


--Brahma Reddy Battula

-Original Message-
From: Konstantin Shvachko [mailto:shv.had...@gmail.com] 
Sent: 02 December 2017 10:13
To: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.7.5 (RC0)

Hi everybody,

This is the next dot release of the Apache Hadoop 2.7 line. The previous one,
2.7.4, was released on August 4, 2017.
Release 2.7.5 includes critical bug fixes and optimizations. See more details
in the Release Notes:
http://home.apache.org/~shv/hadoop-2.7.5-RC0/releasenotes.html

The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC0/

Please give it a try and vote on this thread. The vote will run for 5 days 
ending 12/08/2017.

My up-to-date public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Thanks,
--Konstantin


Re: [VOTE] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-12-04 Thread Rohith Sharma K S
+1

On Nov 30, 2017 7:26 AM, "Sunil G"  wrote:

> Hi All,
>
>
> Based on the discussion at [1], I'd like to start a vote to merge feature
> branch
>
> YARN-5881 to trunk. Vote will run for 7 days, ending Wednesday Dec 6 at
> 6:00PM PDT.
>
>
> This branch adds support to configure queue capacity as an absolute resource
> in the Capacity Scheduler. This will help admins who want fine-grained control
> of queue resources.
>
>
> Feature development was done at YARN-5881 [2]; the Jenkins build is here
> (YARN-7510 [3]).
>
> All required tasks for this feature are committed. This feature changes
> RM’s Capacity Scheduler only,
>
> and we did extensive tests for the feature in the last couple of months
> including performance tests.
>
>
> Key points:
>
> - The feature is turned off by default; absolute resources must be configured
> to enable it.
>
> - Detailed documentation about how to use this feature is done as part of
> [4].
>
> - No major performance degradation was observed with this branch work; SLS
> and UT performance tests were run.
>
>
> There were 11 subtasks completed for this feature.
>
>
> Huge thanks to everyone who helped with reviews, commits, guidance, and
> technical discussion/design, including Wangda Tan, Vinod Vavilapalli,
> Rohith Sharma K S, and Eric Payne.
>
>
> [1] :
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%
> 3CCACYiTuhKhF1JCtR7ZFuZSEKQ4sBvN_n_tV5GHsbJ3YeyJP%2BP4Q%
> 40mail.gmail.com%3E
>
> [2] : https://issues.apache.org/jira/browse/YARN-5881
>
> [3] : https://issues.apache.org/jira/browse/YARN-7510
>
> [4] : https://issues.apache.org/jira/browse/YARN-7533
>
>
> Regards
>
> Sunil and Wangda
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/

[Dec 4, 2017 5:15:07 AM] (sunilg) YARN-7594. 
TestNMWebServices#testGetNMResourceInfo fails on trunk.
[Dec 4, 2017 5:57:23 AM] (sunilg) YARN-6907. Node information page in the old 
web UI should report
[Dec 4, 2017 6:22:01 AM] (Arun Suresh) YARN-7587. Skip dispatching 
opportunistic containers to nodes whose




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 
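
This is the usual EI_EXPOSE_REP pattern: a getter hands out a reference to an
internal array, so callers can mutate the object's state. Below is a generic
sketch of the flagged shape and one common remedy (a defensive copy); the class
names are stand-ins, and this is not the actual change made to YARN's Resource
class.

{code}
import java.util.Arrays;

// Stand-in for org.apache.hadoop.yarn.api.records.ResourceInformation.
class ResourceInformationLike { }

class ResourceLike {
  private ResourceInformationLike[] resources;   // internal state FindBugs worries about

  // Flagged form: returns the internal array directly, so callers can mutate it.
  ResourceInformationLike[] getResourcesUnsafe() {
    return resources;
  }

  // Common remedy: hand out a copy (or an unmodifiable view) instead.
  ResourceInformationLike[] getResources() {
    return resources == null ? null : Arrays.copyOf(resources, resources.length);
  }
}
{code}

In scheduler hot paths the copy has a cost, so suppressing the warning is also a
legitimate resolution; the report alone does not say which route was taken.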

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 
   hadoop.hdfs.server.namenode.TestDeleteRace 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.TestLeaseRecoveryStriped 
   hadoop.hdfs.server.namenode.TestFSImage 
   hadoop.hdfs.TestReplication 
   hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.fs.TestUnbuffer 
   hadoop.hdfs.TestReadStripedFileWithDecoding 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup 
   hadoop.hdfs.TestBlockStoragePolicy 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   hadoop.hdfs.TestDFSStorageStateRecovery 
   hadoop.hdfs.TestWriteReadStripedFile 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation 
   hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart 
   hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator 
   hadoop.mapreduce.TestMapReduceLazyOutput 
   hadoop.mapred.TestNetworkedJob 
   hadoop.mapred.TestJobName 
   hadoop.mapreduce.v2.TestUberAM 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/whitespace-eol.txt
  [8.8M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/whitespace-tabs.txt
  [288K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/613/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [756K]
   

Re: [VOTE] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-12-04 Thread Weiwei Yang
+1 (non-binding)
Thanks for getting this done Sunil.

--
Weiwei

On 5 Dec 2017, 4:06 AM +0800, Eric Payne , 
wrote:
+1. Thanks Sunil for the work on this branch.
Eric

From: Sunil G ; Hdfs-dev 
; Hadoop Common ; 
"mapreduce-dev@hadoop.apache.org" 

[jira] [Created] (MAPREDUCE-7018) Apply erasure coding properly to framework tarball

2017-12-04 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created MAPREDUCE-7018:
-

 Summary: Apply erasure coding properly to framework tarball
 Key: MAPREDUCE-7018
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7018
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi


{code}
2017-12-01 17:54:51,753 INFO uploader.FrameworkUploader: Disabling Erasure 
Coding for path: hdfs://machine:9000/tmp/mr-framework.tar.gz
2017-12-01 17:54:51,779 ERROR uploader.FrameworkUploader: Error in execution 
Attempt to set an erasure coding policy for a file /tmp/mr-framework.tar.gz
at 
org.apache.hadoop.hdfs.server.namenode.FSDirErasureCodingOp.setErasureCodingPolicyXAttr(FSDirErasureCodingOp.java:147)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirErasureCodingOp.setErasureCodingPolicy(FSDirErasureCodingOp.java:127)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setErasureCodingPolicy(FSNamesystem.java:7291)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setErasureCodingPolicy(NameNodeRpcServer.java:2115)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setErasureCodingPolicy(ClientNamenodeProtocolServerSideTranslatorPB.java:1552)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
{code}
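
The trace shows the uploader calling setErasureCodingPolicy on the tarball path
itself, which the NameNode rejects because erasure-coding policies can only be
set on directories. Purely as an illustration of that constraint (an assumption,
not the committed fix), the call would have to target a directory, e.g.:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class EcPolicyOnDirectorySketch {
  public static void main(String[] args) throws Exception {
    // args[0]: name of the erasure-coding policy to apply; deliberately left to
    // the caller rather than assuming which policy the uploader should use.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path tarball = new Path("/tmp/mr-framework.tar.gz");   // path from the log above
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // Policies are directory-scoped, so target the parent directory, not the file.
      dfs.setErasureCodingPolicy(tarball.getParent(), args[0]);
    }
  }
}
{code}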




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



Re: [VOTE] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-12-04 Thread Eric Payne
+1. Thanks Sunil for the work on this branch.
Eric

  From: Sunil G 
 To: "yarn-...@hadoop.apache.org" ; Hdfs-dev 
; Hadoop Common ; 
"mapreduce-dev@hadoop.apache.org"  
 Sent: Wednesday, November 29, 2017 7:56 PM
 Subject: [VOTE] Merge Absolute resource configuration support in Capacity 
Scheduler (YARN-5881) to trunk
   
Hi All,


Based on the discussion at [1], I'd like to start a vote to merge feature
branch

YARN-5881 to trunk. Vote will run for 7 days, ending Wednesday Dec 6 at
6:00PM PDT.


This branch adds support to configure queue capacity as an absolute resource in
the Capacity Scheduler. This will help admins who want fine-grained control of
queue resources.
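
As a rough illustration only (the exact property syntax is documented in [4]),
a queue's guaranteed and maximum capacity can be given as absolute values
instead of percentages; the sketch below sets the properties programmatically,
and the values themselves are assumptions:

{code}
import org.apache.hadoop.conf.Configuration;

public class AbsoluteResourceSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Existing percentage-based form:
    //   yarn.scheduler.capacity.root.default.capacity = 50
    // Absolute-resource form added by this branch (values are illustrative):
    conf.set("yarn.scheduler.capacity.root.default.capacity",
        "[memory=10240,vcores=12]");
    conf.set("yarn.scheduler.capacity.root.default.maximum-capacity",
        "[memory=20480,vcores=24]");
    System.out.println(conf.get("yarn.scheduler.capacity.root.default.capacity"));
  }
}
{code}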


Feature development was done at YARN-5881 [2]; the Jenkins build is here
(YARN-7510 [3]).

All required tasks for this feature are committed. This feature changes
RM’s Capacity Scheduler only,

and we did extensive tests for the feature in the last couple of months
including performance tests.


Key points:

- The feature is turned off by default; absolute resources must be configured
to enable it.

- Detailed documentation about how to use this feature is done as part of
[4].

- No major performance degradation was observed with this branch work; SLS
and UT performance tests were run.


There were 11 subtasks completed for this feature.


Huge thanks to everyone who helped with reviews, commits, guidance, and
technical discussion/design, including Wangda Tan, Vinod Vavilapalli,
Rohith Sharma K S, and Eric Payne.


[1] :
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%3CCACYiTuhKhF1JCtR7ZFuZSEKQ4sBvN_n_tV5GHsbJ3YeyJP%2BP4Q%40mail.gmail.com%3E

[2] : https://issues.apache.org/jira/browse/YARN-5881

[3] : https://issues.apache.org/jira/browse/YARN-7510

[4] : https://issues.apache.org/jira/browse/YARN-7533


Regards

Sunil and Wangda

   

Re: [VOTE] Release Apache Hadoop 2.7.5 (RC0)

2017-12-04 Thread Hanisha Koneru
Thanks Konstantin for putting up the 2.7.5-RC0 release.

+1 (non-binding).

Verified the following:
- Built from source on Mac OS X 10.11.6 with Java 1.7.0_79
- Deployed binary to a 3-node docker cluster
- Sanity checks
- Basic dfs operations
- MapReduce Wordcount & Grep


Thanks,
Hanisha








On 12/1/17, 8:42 PM, "Konstantin Shvachko"  wrote:

>Hi everybody,
>
>This is the next dot release of the Apache Hadoop 2.7 line. The previous one,
>2.7.4, was released on August 4, 2017.
>Release 2.7.5 includes critical bug fixes and optimizations. See more
>details in the Release Notes:
>http://home.apache.org/~shv/hadoop-2.7.5-RC0/releasenotes.html
>
>The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC0/
>
>Please give it a try and vote on this thread. The vote will run for 5 days
>ending 12/08/2017.
>
>My up-to-date public key is available from:
>https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
>Thanks,
>--Konstantin


[jira] [Created] (MAPREDUCE-7017) Too many times of meaningless invocation in TaskAttemptImpl#resolveHosts

2017-12-04 Thread jiayuhan-it (JIRA)
jiayuhan-it created MAPREDUCE-7017:
--

 Summary: Too many times of meaningless invocation in 
TaskAttemptImpl#resolveHosts
 Key: MAPREDUCE-7017
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7017
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mr-am
Affects Versions: 3.0.0-alpha4
Reporter: jiayuhan-it


In MRAppMaster, each TaskAttempt calls the resolveHosts function many times to 
obtain dataLocalHosts. When a job has a lot of tasks or the machine is 
configured unreasonably, this wastes a lot of time.
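
A minimal sketch of the kind of per-host memoization that could avoid the
repeated lookups (class and method names here are hypothetical, not the
proposed patch):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical memoization of host resolution; not the actual MRAppMaster change.
public class HostResolutionCache {
  private final Map<String, String> cache = new ConcurrentHashMap<>();

  // The expensive lookup that TaskAttemptImpl#resolveHosts would otherwise
  // repeat for every attempt sharing the same host.
  private String resolveUncached(String name) throws UnknownHostException {
    return InetAddress.getByName(name).getCanonicalHostName();
  }

  public String resolve(String name) {
    return cache.computeIfAbsent(name, n -> {
      try {
        return resolveUncached(n);
      } catch (UnknownHostException e) {
        return n;   // best effort: fall back to the unresolved name
      }
    });
  }

  public static void main(String[] args) {
    HostResolutionCache cache = new HostResolutionCache();
    System.out.println(cache.resolve("localhost"));
    System.out.println(cache.resolve("localhost"));   // served from the cache
  }
}
{code}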



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org