Re: [VOTE] Release Apache Hadoop 3.0.0-alpha2 RC0
Thanks much, Andrew, for the work here! +1 (binding).

- Downloaded both binary and src tarballs
- Verified md5 checksum and signature for both
- Built from source tarball
- Deployed 2 pseudo clusters, one with the released tarball and the other with what I built from source, and did the following on both:
  - Ran basic HDFS operations, snapshots, and distcp jobs
  - Ran a pi job
  - Examined the HDFS web UI and YARN web UI

Best,
--Yongjun

On Tue, Jan 24, 2017 at 3:56 PM, Eric Badger wrote:
> +1 (non-binding)
> - Verified signatures and md5
> - Built from source
> - Started a single-node cluster on my mac
> - Ran some sleep jobs
> Eric
>
> On Tuesday, January 24, 2017 4:32 PM, Yufei Gu wrote:
> > Hi Andrew,
> > Thanks for working on this.
> > +1 (Non-Binding)
> > 1. Downloaded the binary and verified the md5.
> > 2. Deployed it on a 3-node cluster with 1 ResourceManager and 2 NodeManagers.
> > 3. Set YARN to use the Fair Scheduler.
> > 4. Ran the MapReduce Pi job.
> > 5. Verified the Hadoop version command output is correct.
> > Best,
> > Yufei
> >
> > On Tue, Jan 24, 2017 at 3:02 AM, Marton Elek wrote:
> > > minicluster is kind of weird on filesystems that don't support mixed
> > > case, like OS X's default HFS+.
> > >
> > > $ jar tf hadoop-client-minicluster-3.0.0-alpha3-SNAPSHOT.jar | grep -i license
> > > LICENSE.txt
> > > license/
> > > license/LICENSE
> > > license/LICENSE.dom-documentation.txt
> > > license/LICENSE.dom-software.txt
> > > license/LICENSE.sax.txt
> > > license/NOTICE
> > > license/README.dom.txt
> > > license/README.sax.txt
> > > LICENSE
> > > Grizzly_THIRDPARTYLICENSEREADME.txt
> > >
> > > I added a patch to https://issues.apache.org/jira/browse/HADOOP-14018 to
> > > add the missing META-INF/LICENSE.txt to the shaded files.
> > >
> > > Question: what should be done with the other LICENSE files in the
> > > minicluster? Can we just exclude them (from a legal point of view)?
> > >
> > > Regards,
> > > Marton
> > >
> > > -
> > > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha2 RC0
+1 (non-binding)

- Verified signatures and md5
- Built from source
- Started a single-node cluster on my mac
- Ran some sleep jobs

Eric

On Tuesday, January 24, 2017 4:32 PM, Yufei Gu wrote:

Hi Andrew,
Thanks for working on this.
+1 (Non-Binding)
1. Downloaded the binary and verified the md5.
2. Deployed it on a 3-node cluster with 1 ResourceManager and 2 NodeManagers.
3. Set YARN to use the Fair Scheduler.
4. Ran the MapReduce Pi job.
5. Verified the Hadoop version command output is correct.
Best,
Yufei

On Tue, Jan 24, 2017 at 3:02 AM, Marton Elek wrote:
> minicluster is kind of weird on filesystems that don't support mixed
> case, like OS X's default HFS+.
>
> $ jar tf hadoop-client-minicluster-3.0.0-alpha3-SNAPSHOT.jar | grep -i license
> LICENSE.txt
> license/
> license/LICENSE
> license/LICENSE.dom-documentation.txt
> license/LICENSE.dom-software.txt
> license/LICENSE.sax.txt
> license/NOTICE
> license/README.dom.txt
> license/README.sax.txt
> LICENSE
> Grizzly_THIRDPARTYLICENSEREADME.txt
>
> I added a patch to https://issues.apache.org/jira/browse/HADOOP-14018 to
> add the missing META-INF/LICENSE.txt to the shaded files.
>
> Question: what should be done with the other LICENSE files in the
> minicluster? Can we just exclude them (from a legal point of view)?
>
> Regards,
> Marton
[jira] [Resolved] (HDFS-11366) Clean up old .ckpt files after saveNamespace
[ https://issues.apache.org/jira/browse/HDFS-11366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen resolved HDFS-11366.
------------------------------
    Resolution: Duplicate

Looks like we already have HDFS-3716 in place to take care of this problem. Sorry I didn't find that earlier. It's more aggressive than proposed here, but since the purge only happens after a successful checkpoint, the risk is low.

> Clean up old .ckpt files after saveNamespace
> --------------------------------------------
>
>                 Key: HDFS-11366
>                 URL: https://issues.apache.org/jira/browse/HDFS-11366
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs, namenode
>    Affects Versions: 2.6.0
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>
> Checkpoints are done in the NN by writing to {{fsimage.ckpt_TXID}} files,
> renaming to {{fsimage_TXID}} files upon success.
> If a checkpoint fails halfway, the fsimage.ckpt_ file is left on disk.
> There is no logic to clean it up at all.
> After talking with [~atm], I understand the historical reason for not
> immediately cleaning up those files, since they may be useful for disaster
> recovery.
> But it feels like cleaning those ckpt files after a successful checkpoint,
> with a larger TXID threshold, is also safe to do.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha2 RC0
On Mon, Jan 23, 2017 at 9:32 PM, Allen Wittenauer wrote:
> The problem here is that there is a 'license' directory and a file called
> 'LICENSE'. If this gets extracted by jar via jar xf, it will fail. unzip
> can be made to extract it via an option like -o. To make matters worse,
> none of these license files match the one in the generated tarball. :(

Ah, got it. I didn't strip the trailing slash on directories. With that, it looks like the "license" directory and "LICENSE" file are the only conflict?

I've not followed the development of packaging LICENSE/NOTICE in the jar files. AFAIK, it's sufficient that we have the top-level LICENSE/NOTICE in the tarball. Unless there's a LEGAL thread to the contrary, it's OK as-is.

Again, I don't think we need to restart the clock on the RC vote if the release notes and LICENSE/NOTICE were fixed, but it's Andrew's time and I don't think any of these are blockers for the release.

-C
[jira] [Created] (HDFS-11366) Clean up old .ckpt files after saveNamespace
Xiao Chen created HDFS-11366:
--------------------------------

             Summary: Clean up old .ckpt files after saveNamespace
                 Key: HDFS-11366
                 URL: https://issues.apache.org/jira/browse/HDFS-11366
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs, namenode
    Affects Versions: 2.6.0
            Reporter: Xiao Chen
            Assignee: Xiao Chen

Checkpoints are done in the NN by writing to {{fsimage.ckpt_TXID}} files, renaming to {{fsimage_TXID}} files upon success.

If a checkpoint fails halfway, the fsimage.ckpt_ file is left on disk. There is no logic to clean it up at all.

After talking with [~atm], I understand the historical reason for not immediately cleaning up those files, since they may be useful for disaster recovery.

But it feels like cleaning those ckpt files after a successful checkpoint, with a larger TXID threshold, is also safe to do.
[jira] [Created] (HDFS-11365) Log portnumber in PrivilegedNfsGatewayStarter
Mukul Kumar Singh created HDFS-11365:
----------------------------------------

             Summary: Log portnumber in PrivilegedNfsGatewayStarter
                 Key: HDFS-11365
                 URL: https://issues.apache.org/jira/browse/HDFS-11365
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: nfs
            Reporter: Mukul Kumar Singh
            Assignee: Mukul Kumar Singh

The port number in PrivilegedNfsGatewayStarter should be logged. This would be useful in cases where the bind fails on the port, which can happen when the port number is in use by another application.
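A minimal sketch of the behavior the issue asks for, in Python rather than the gateway's actual Java (the function name and demo ports are illustrative): include the port number in the error message when bind fails, so the operator can tell which port another application is holding.

```python
# Illustrative sketch of HDFS-11365's request: log the port on bind
# failure. Not the real PrivilegedNfsGatewayStarter code.
import logging
import socket

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("nfs-gateway")

def bind_registration_port(port):
    """Bind a TCP socket, logging the port on success and on failure."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("0.0.0.0", port))
        log.info("Bound registration socket to port %d", port)
        return sock
    except OSError as e:
        # Without the port in the message, "address already in use"
        # gives the operator nothing to investigate.
        log.error("Failed to bind to port %d: %s", port, e)
        sock.close()
        raise

if __name__ == "__main__":
    first = bind_registration_port(0)       # ephemeral port for the demo
    taken = first.getsockname()[1]
    try:
        bind_registration_port(taken)       # collides; error names the port
    except OSError:
        pass
    first.close()
```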
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/

[Jan 23, 2017 6:49:35 AM] (sunilg) YARN-6031. Application recovery has failed when node label feature is
[Jan 23, 2017 5:12:51 PM] (jlowe) YARN-5910. Support for multi-cluster delegation tokens. Contributed by
[Jan 23, 2017 6:52:14 PM] (wangda) YARN-5864. Capacity Scheduler - Queue Priorities. (wangda)
[Jan 24, 2017 1:42:54 AM] (templedf) YARN-6012. Remove node label (removeFromClusterNodeLabels) document is
[Jan 24, 2017 5:07:25 AM] (sjlee) YARN-6117. SharedCacheManager does not start up. Contributed by Chris
[Jan 24, 2017 5:29:55 AM] (rohithsharmaks) YARN-6082. Invalid REST api response for getApps since

-1 overall

The following subsystems voted -1:
    asflicense unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
        hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
        hadoop.yarn.server.TestDiskFailures
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/diff-compile-javac-root.txt [160K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/diff-checkstyle-root.txt [16M]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/diff-patch-pylint.txt [20K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/diff-patch-shellcheck.txt [24K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/diff-patch-shelldocs.txt [16K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/whitespace-eol.txt [11M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/whitespace-tabs.txt [1.3M]

    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [148K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [60K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K]

    asflicense:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/296/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT
http://yetus.apache.org
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha2 RC0
> minicluster is kind of weird on filesystems that don't support mixed case,
> like OS X's default HFS+.
>
> $ jar tf hadoop-client-minicluster-3.0.0-alpha3-SNAPSHOT.jar | grep -i license
> LICENSE.txt
> license/
> license/LICENSE
> license/LICENSE.dom-documentation.txt
> license/LICENSE.dom-software.txt
> license/LICENSE.sax.txt
> license/NOTICE
> license/README.dom.txt
> license/README.sax.txt
> LICENSE
> Grizzly_THIRDPARTYLICENSEREADME.txt

I added a patch to https://issues.apache.org/jira/browse/HADOOP-14018 to add the missing META-INF/LICENSE.txt to the shaded files.

Question: what should be done with the other LICENSE files in the minicluster? Can we just exclude them (from a legal point of view)?

Regards,
Marton