Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/67/

[Dec 11, 2017 11:17:17 PM] (inigoiri) HDFS-12875. RBF: Complete logic for -readonly option of dfsrouteradmin

-1 overall

The following subsystems voted -1:
    unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Unreaped Processes:

       hadoop-hdfs:20
       hadoop-mapreduce-client-jobclient:8
       hadoop-distcp:4
       hadoop-extras:1
       hadoop-yarn-applications-distributedshell:1
       hadoop-yarn-client:9
       hadoop-yarn-server-nodemanager:1
       hadoop-yarn-server-timelineservice:1

    Failed junit tests:

       hadoop.tools.TestIntegration
       hadoop.tools.TestDistCpViewFs
       hadoop.tools.TestDistCpSystem
       hadoop.resourceestimator.solver.impl.TestLpSolver
       hadoop.resourceestimator.service.TestResourceEstimatorService
       hadoop.yarn.sls.appmaster.TestAMSimulator
       hadoop.yarn.server.nodemanager.webapp.TestNMWebServer
       hadoop.yarn.server.nodemanager.TestNodeStatusUpdaterForLabels
       hadoop.yarn.server.nodemanager.TestNodeManagerReboot
       hadoop.yarn.server.TestContainerManagerSecurity

    Timed out junit tests:

       org.apache.hadoop.hdfs.TestWriteRead
       org.apache.hadoop.hdfs.TestDatanodeRegistration
       org.apache.hadoop.hdfs.TestReservedRawPaths
       org.apache.hadoop.hdfs.TestAclsEndToEnd
       org.apache.hadoop.hdfs.TestFileCreation
       org.apache.hadoop.hdfs.TestDatanodeDeath
       org.apache.hadoop.hdfs.TestSafeMode
       org.apache.hadoop.hdfs.TestBlockMissingException
       org.apache.hadoop.hdfs.TestDFSClientRetries
       org.apache.hadoop.hdfs.TestFileAppend2
       org.apache.hadoop.hdfs.TestFileCorruption
       org.apache.hadoop.hdfs.TestFileCreationDelete
       org.apache.hadoop.hdfs.TestDFSAddressConfig
       org.apache.hadoop.hdfs.TestSeekBug
       org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
       org.apache.hadoop.hdfs.TestRestartDFS
       org.apache.hadoop.hdfs.TestDFSClientSocketSize
       org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
       org.apache.hadoop.hdfs.TestDFSRollback
       org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
       org.apache.hadoop.mapred.TestMiniMRClasspath
       org.apache.hadoop.mapred.TestClusterMapReduceTestCase
       org.apache.hadoop.mapred.TestMRIntermediateDataEncryption
       org.apache.hadoop.mapred.TestJobSysDirWithDFS
       org.apache.hadoop.mapred.TestMRTimelineEventHandling
       org.apache.hadoop.mapred.join.TestDatamerge
       org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
       org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
       org.apache.hadoop.tools.TestDistCpSync
       org.apache.hadoop.tools.TestDistCpWithXAttrs
       org.apache.hadoop.tools.TestDistCpSyncReverseFromTarget
       org.apache.hadoop.tools.TestDistCpSyncReverseFromSource
       org.apache.hadoop.tools.TestCopyFiles
       org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
       org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy
       org.apache.hadoop.yarn.client.TestRMFailover
       org.apache.hadoop.yarn.client.cli.TestYarnCLI
       org.apache.hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA
       org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA
       org.apache.hadoop.yarn.client.api.impl.TestYarnClientWithReservation
       org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
       org.apache.hadoop.yarn.client.api.impl.TestYarnClient
       org.apache.hadoop.yarn.client.api.impl.TestNMClient
       org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater
       org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServices

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/67/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/67/artifact/out/diff-compile-javac-root.txt [324K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/67/artifact/out/diff-checkstyle-root.txt [16M]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/67/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/67/artifact/out/diff-patch-shellcheck.txt [76K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/67/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/67/artifact/out/whitespace-eo
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
Oops, the vote was meant for 2.7.5. Sorry for the confusion. My 2.8.3 vote coming up shortly.

On Tue, Dec 12, 2017 at 4:28 PM, John Zhuge wrote:
> Thanks Junping for the great effort!
>
> - Verified checksums and signatures of all tarballs
> - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
> - Verified cloud connectors:
>   - All S3A integration tests
> - Deployed both binary and built source to a pseudo cluster, passed
>   the following sanity tests in insecure and SSL mode:
>   - HDFS basic and ACL
>   - DistCp basic
>   - MapReduce wordcount
>   - KMS and HttpFS basic
>   - Balancer start/stop
>
> Non-blockers
>
> - HADOOP-13030 Handle special characters in passwords in KMS startup
>   script. Fixed in 2.8+.
> - NameNode servlets test failures: 403 User dr.who is unauthorized to
>   access this page. Researching. Could be just test configuration issue.
>
> John
>
> On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger wrote:
>
>> Thanks, Junping
>>
>> +1 (non-binding) looks good from my end
>>
>> - Verified all hashes and checksums
>> - Built from source on macOS 10.12.6, Java 1.8.0u65
>> - Deployed a pseudo cluster
>> - Ran some example jobs
>>
>> Eric
>>
>> On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko <shv.had...@gmail.com> wrote:
>>
>>> Downloaded again, now the checksums look good. Sorry my fault
>>>
>>> Thanks,
>>> --Konstantin
>>>
>>> On Mon, Dec 11, 2017 at 5:03 PM, Junping Du wrote:
>>>
>>>> Hi Konstantin,
>>>>
>>>> Thanks for verification and comments. I was verifying your example
>>>> below but found it is actually matched:
>>>>
>>>> *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz*
>>>> *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = e53d04477b85e8b58ac0a26468f04736*
>>>>
>>>> What's your md5 checksum for given source tar ball?
>>>>
>>>> Thanks,
>>>>
>>>> Junping
>>>>
>>>> --
>>>> *From:* Konstantin Shvachko
>>>> *Sent:* Saturday, December 9, 2017 11:06 AM
>>>> *To:* Junping Du
>>>> *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
>>>> *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
>>>>
>>>> Hey Junping,
>>>>
>>>> Could you pls upload mds relative to the tar.gz etc. files rather than
>>>> their full path
>>>> /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
>>>> MD5 = E5 3D 04 47 7B 85 E8 B5 8A C0 A2 64 68 F0 47 36
>>>>
>>>> Otherwise mds don't match for me.
>>>>
>>>> Thanks,
>>>> --Konstantin
>>>>
>>>> On Tue, Dec 5, 2017 at 1:58 AM, Junping Du wrote:
>>>>
>>>>> Hi all,
>>>>> I've created the first release candidate (RC0) for Apache Hadoop
>>>>> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79
>>>>> important fixes and improvements.
>>>>>
>>>>> The RC artifacts are available at:
>>>>> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
>>>>>
>>>>> The RC tag in git is: release-2.8.3-RC0
>>>>>
>>>>> The maven artifacts are available via repository.apache.org at:
>>>>> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>>>>>
>>>>> Please try the release and vote; the vote will run for the usual 5
>>>>> working days, ending on 12/12/2017 PST time.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Junping

--
John Zhuge
Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)
Thanks Konstantin for the great effort!

+1 (binding)

- Verified checksums and signatures of all tarballs
- Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
- Verified cloud connectors:
  - All S3A integration tests
- Deployed both binary and built source to a pseudo cluster, passed the
  following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic
  - Balancer start/stop

Non-blockers

- HADOOP-13030 Handle special characters in passwords in KMS startup
  script. Fixed in 2.8+.
- NameNode servlets test failures: 403 User dr.who is unauthorized to
  access this page. Researching. Could be just a test configuration issue.

On Tue, Dec 12, 2017 at 2:14 PM, Eric Badger wrote:
> Thanks, Konstantin. Everything looks good to me
>
> +1 (non-binding)
>
> - Verified all signatures and digests
> - Built from source on macOS 10.12.6, Java 1.8.0u65
> - Deployed a pseudo cluster
> - Ran some example jobs
>
> Eric
>
> On Tue, Dec 12, 2017 at 11:01 AM, Jason Lowe wrote:
>
>> Thanks for driving the release, Konstantin!
>>
>> +1 (binding)
>>
>> - Verified signatures and digests
>> - Successfully performed a native build from source
>> - Deployed a single-node cluster
>> - Ran some sample jobs and checked the logs
>>
>> Jason
>>
>> On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko <shv.had...@gmail.com> wrote:
>>
>>> Hi everybody,
>>>
>>> I updated CHANGES.txt and fixed documentation links.
>>> Also committed MAPREDUCE-6165, which fixes a consistently failing test.
>>>
>>> This is RC1 for the next dot release of the Apache Hadoop 2.7 line. The
>>> previous one, 2.7.4, was released August 4, 2017.
>>> Release 2.7.5 includes critical bug fixes and optimizations. See more
>>> details in the Release Note:
>>> http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
>>>
>>> RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
>>>
>>> Please give it a try and vote on this thread. The vote will run for 5
>>> days, ending 12/13/2017.
>>>
>>> My up to date public key is available from:
>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>>>
>>> Thanks,
>>> --Konstantin

--
John
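The checksum step that recurs in these verification lists can be scripted. A minimal, self-contained sketch in Java (the class name, file paths, and command-line shape are illustrative assumptions; voters in the thread used the platform `md5` tool, and GPG signature checks additionally require importing the signer's key from the Hadoop KEYS file):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Md5Check {
    /** Hex-encoded MD5 of a file, lowercase, as the md5/openssl tools print it. */
    static String md5Hex(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n); // stream the file so large tarballs fit in memory
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Usage (illustrative): java Md5Check hadoop-2.7.5-src.tar.gz <digest-from-.mds-file>
        String computed = md5Hex(Paths.get(args[0]));
        boolean ok = computed.equalsIgnoreCase(args[1]);
        System.out.println((ok ? "OK: " : "FAIL: ") + args[0] + " = " + computed);
        if (!ok) {
            System.exit(1); // nonzero exit so scripts can chain on the result
        }
    }
}
```

This only covers the digest comparison; the signature check the voters mention (`gpg --verify <tarball>.asc <tarball>`) still has to run against the imported release key.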
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
+1 (binding)

- downloaded the binary tarball and the source tarball and checked signatures
- verified that the source builds cleanly
- verified that the shaded client jars are correct
- checked the basic pseudo-distributed cluster set-up and checked UI and logs (hdfs and YARN)
- ran some test jobs successfully
- enabled Timeline Service v.2 and tested the writer/reader with test jobs

On Tue, Dec 12, 2017 at 6:14 PM, Jonathan Hung wrote:
> Thanks Andrew for the huge effort.
>
> +1 (non-binding)
> - Downloaded binary tarball and verified md5
> - Ran RM HA and verified manual failover
> - Verified add/remove/update scheduler configuration API (CLI/REST) works
>   for leveldb/zookeeper backend
> - Verified scheduler configuration changes persisted on restart/failover
> - Verified "yarn rmadmin -refreshQueues" works when scheduler configuration
>   API disabled, and does not work when scheduler configuration API enabled
>
> Jonathan Hung
>
> On Tue, Dec 12, 2017 at 5:44 PM, Junping Du wrote:
>
>> Thanks Andrew for pushing new RC for 3.0.0. I was out last week, just get
>> chance to validate new RC now.
>>
>> Basically, I found two critical issues with the same rolling upgrade
>> scenario as where HADOOP-15059 get found previously:
>> HDFS-12920, we changed value format for some hdfs configurations that old
>> version MR client doesn't understand when fetching these configurations.
>> Some quick workarounds are to add old value (without time unit) in
>> hdfs-site.xml to override new default values but will generate many
>> annoying warnings. I provided my fix suggestions on the JIRA already for
>> more discussion.
>> The other one is YARN-7646. After we workaround HDFS-12920, will hit the
>> issue that old version MR AppMaster cannot communicate with new version of
>> YARN RM - could be related to resource profile changes from YARN side but
>> root cause are still in investigation.
>>
>> The first issue may not belong to a blocker given we can workaround this
>> without code change. I am not sure if we can workaround 2nd issue so far.
>> If not, we may have to fix this or compromise with withdrawing support of
>> rolling upgrade or calling it a stable release.
>>
>> Thanks,
>>
>> Junping
>>
>> From: Robert Kanter
>> Sent: Tuesday, December 12, 2017 3:10 PM
>> To: Arun Suresh
>> Cc: Andrew Wang; Lei Xu; Wei-Chiu Chuang; Ajay Kumar; Xiao Chen; Aaron T.
>> Myers; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
>> yarn-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org
>> Subject: Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
>>
>> +1 (binding)
>>
>> + Downloaded the binary release
>> + Deployed on a 3 node cluster on CentOS 7.3
>> + Ran some MR jobs, clicked around the UI, etc
>> + Ran some CLI commands (yarn logs, etc)
>>
>> Good job everyone on Hadoop 3!
>>
>> - Robert
>>
>> On Tue, Dec 12, 2017 at 1:56 PM, Arun Suresh wrote:
>>
>>> +1 (binding)
>>>
>>> - Verified signatures of the source tarball.
>>> - built from source - using the docker build environment.
>>> - set up a pseudo-distributed test cluster.
>>> - ran basic HDFS commands
>>> - ran some basic MR jobs
>>>
>>> Cheers
>>> -Arun
>>>
>>> On Tue, Dec 12, 2017 at 1:52 PM, Andrew Wang wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> As a reminder, this vote closes tomorrow at 12:31pm, so please give it a
>>>> whack if you have time. There are already enough binding +1s to pass this
>>>> vote, but it'd be great to get additional validation.
>>>>
>>>> Thanks to everyone who's voted thus far!
>>>>
>>>> Best,
>>>> Andrew
>>>>
>>>> On Tue, Dec 12, 2017 at 11:08 AM, Lei Xu wrote:
>>>>
>>>>> +1 (binding)
>>>>>
>>>>> * Verified src tarball and bin tarball, verified md5 of each.
>>>>> * Build source with -Pdist,native
>>>>> * Started a pseudo cluster
>>>>> * Run ec -listPolicies / -getPolicy / -setPolicy on / , and run hdfs
>>>>>   dfs put/get/cat on "/" with XOR-2-1 policy.
>>>>>
>>>>> Thanks Andrew for this great effort!
>>>>>
>>>>> Best,
>>>>>
>>>>> On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang <andrew.w...@cloudera.com> wrote:
>>>>>> Hi Wei-Chiu,
>>>>>>
>>>>>> The patchprocess directory is left over from the create-release process,
>>>>>> and it looks empty to me. We should still file a create-release JIRA to fix
>>>>>> this, but I think this is not a blocker. Would you agree?
>>>>>>
>>>>>> Best,
>>>>>> Andrew
>>>>>>
>>>>>> On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang <weic...@cloudera.com> wrote:
>>>>>>
>>>>>>> Hi Andrew, thanks the tremendous effort.
>>>>>>> I found an empty "patchprocess" directory in the source tarball, that is
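Junping's HDFS-12920 workaround, overriding the new unit-suffixed defaults with plain numeric values in hdfs-site.xml so an old-version MR client can still parse them, would look roughly like this. The property name and value below are illustrative assumptions, not taken from the JIRA; check HDFS-12920 for the actual affected keys.

```
<!-- hdfs-site.xml override: restore a unit-less value so a pre-3.0 client
     that expects a bare number can still parse it. The key below is
     illustrative only. As noted in the thread, this workaround produces
     many annoying warnings on the 3.0 side. -->
<property>
  <name>dfs.client.datanode-restart.timeout</name>
  <value>30</value>
</property>
```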
[jira] [Created] (YARN-7647) NM print inappropriate error log when node-labels is enabled
Yang Wang created YARN-7647:
-------------------------------

Summary: NM print inappropriate error log when node-labels is enabled
Key: YARN-7647
URL: https://issues.apache.org/jira/browse/YARN-7647
Project: Hadoop YARN
Issue Type: Bug
Reporter: Yang Wang

{code:title=NodeStatusUpdaterImpl.java}
...
if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
  LOG.debug("Node Labels {" + StringUtils.join(",", previousNodeLabels)
      + "} were Accepted by RM ");
} else {
  // case where updated labels from NodeLabelsProvider is sent to RM and
  // RM rejected the labels
  LOG.error(
      "NM node labels {" + StringUtils.join(",", previousNodeLabels)
          + "} were not accepted by RM and message from RM : "
          + response.getDiagnosticsMessage());
}
...
{code}

When LOG.isDebugEnabled() is false, the NM always takes the else branch and logs an error, even when the RM accepted the node labels. This is an obvious bug, and the resulting message is misleading.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
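The fix is to decouple the accepted/rejected decision from the log level. A minimal, self-contained sketch of the corrected branching (the `Response` stand-in and method shapes below are assumptions for illustration; the real change belongs in NodeStatusUpdaterImpl):

```java
public class NodeLabelLogFix {
    /** Minimal stand-in for the RM heartbeat response; not the Hadoop API. */
    static class Response {
        private final boolean accepted;
        Response(boolean accepted) { this.accepted = accepted; }
        boolean getAreNodeLabelsAcceptedByRM() { return accepted; }
        String getDiagnosticsMessage() { return "labels rejected by RM"; }
    }

    /** Returns the message that would be logged, or null if nothing is logged. */
    static String log(Response response, boolean debugEnabled, String labels) {
        if (response.getAreNodeLabelsAcceptedByRM()) {
            // Accepted: chatter only at debug level, never an error.
            if (debugEnabled) {
                return "DEBUG: Node Labels {" + labels + "} were accepted by RM";
            }
            return null;
        } else {
            // Rejected: always an error, regardless of the log level.
            return "ERROR: NM node labels {" + labels + "} were not accepted by RM: "
                    + response.getDiagnosticsMessage();
        }
    }

    public static void main(String[] args) {
        // Buggy code logged ERROR in the first case; fixed code logs nothing.
        System.out.println(log(new Response(true), false, "gpu"));
        System.out.println(log(new Response(false), false, "gpu"));
    }
}
```

The original condition `accepted && LOG.isDebugEnabled()` mixed both concerns, so "accepted, but debug logging off" fell into the error branch.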
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/

[Dec 11, 2017 6:58:02 PM] (surendralilhore) HDFS-12833. Distcp : Update the usage of delete option for dependency
[Dec 11, 2017 10:00:42 PM] (rkanter) MAPREDUCE-7018. Apply erasure coding properly to framework tarball and
[Dec 11, 2017 11:14:57 PM] (inigoiri) HDFS-12875. RBF: Complete logic for -readonly option of dfsrouteradmin
[Dec 12, 2017 12:43:03 AM] (weichiu) HDFS-12891. Do not invalidate blocks if toInvalidate is empty.
[Dec 12, 2017 4:14:15 AM] (cdouglas) HDFS-12882. Support full open(PathHandle) contract in HDFS
[Dec 12, 2017 9:38:18 AM] (sunilg) YARN-7635. TestRMWebServicesSchedulerActivities fails in trunk.
[Dec 12, 2017 9:50:59 AM] (sunilg) Queue ACL validations should validate parent queue ACLs before

-1 overall

The following subsystems voted -1:
    asflicense findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs:

       module:hadoop-hdfs-project/hadoop-hdfs
       Possible null pointer dereference of replication in
       org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType, Short, Byte)
       Dereferenced at INodeFile.java:[line 210]

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
       org.apache.hadoop.yarn.api.records.Resource.getResources() may expose
       internal representation by returning Resource.resources
       At Resource.java:[line 234]

    Failed junit tests:

       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
       hadoop.hdfs.TestFileChecksum
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
       hadoop.hdfs.TestDFSStripedInputStream
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
       hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
       hadoop.hdfs.server.balancer.TestBalancerRPCDelay
       hadoop.hdfs.TestErasureCodingPolicies
       hadoop.hdfs.TestDecommission
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
       hadoop.hdfs.TestCrcCorruption
       hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
       hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
       hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart
       hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
       hadoop.mapreduce.v2.TestUberAM

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/diff-compile-javac-root.txt [280K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/whitespace-eol.txt [8.8M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/whitespace-tabs.txt [288K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html [8.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/diff-javadoc-javadoc-root.txt [760K]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/620/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [488K]
       https://builds.apache.org/
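The FindBugs warning above (possible null dereference of `replication` in `INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType, Short, Byte)`) is the classic boxed-`Short` unboxing hazard. A minimal sketch of the pattern and the guard FindBugs expects (names below are illustrative, not the HDFS code):

```java
public class UnboxingGuard {
    /** Unguarded: unboxing a null Short throws NullPointerException. */
    static long unguarded(Short replication) {
        // Implicit replication.shortValue() here: NPE when replication is null.
        return replication;
    }

    /** Guarded: validate the boxed parameter before unboxing it. */
    static long guarded(Short replication) {
        if (replication == null) {
            throw new IllegalArgumentException("replication must be set");
        }
        return replication; // safe to unbox after the null check
    }

    public static void main(String[] args) {
        System.out.println(guarded((short) 3));
        try {
            unguarded(null);
        } catch (NullPointerException e) {
            System.out.println("NPE on null unboxing");
        }
    }
}
```

The guard converts a confusing NPE deep inside a helper into an explicit precondition failure at the call boundary, which is what this class of FindBugs finding usually asks for.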
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
+1 (Binding).

Built from source and deployed a cluster, jobs can run successfully. Thanks Junping for driving this.

Best,
Wangda

On Tue, Dec 12, 2017 at 5:18 PM, Chandni Singh wrote:
> +1
> Built the source code.
> Deployed a single node cluster and ran example jobs. Also tested RM
> recovery.
>
> Thanks,
> Chandni
>
> On Tue, Dec 12, 2017 at 4:37 PM, Jian He wrote:
>
>> +1
>> I built from source code
>> deployed a cluster
>> and successfully ran jobs while restarting RM as well
>>
>> Jian
>>
>>> On Dec 12, 2017, at 4:28 PM, John Zhuge wrote:
>>>
>>> Thanks Junping for the great effort!
>>>
>>> - Verified checksums and signatures of all tarballs
>>> - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2
>>> - Verified cloud connectors:
>>>   - All S3A integration tests
>>> - Deployed both binary and built source to a pseudo cluster, passed the
>>>   following sanity tests in insecure and SSL mode:
>>>   - HDFS basic and ACL
>>>   - DistCp basic
>>>   - MapReduce wordcount
>>>   - KMS and HttpFS basic
>>>   - Balancer start/stop
>>>
>>> Non-blockers
>>>
>>> - HADOOP-13030 Handle special characters in passwords in KMS startup
>>>   script. Fixed in 2.8+.
>>> - NameNode servlets test failures: 403 User dr.who is unauthorized to
>>>   access this page. Researching. Could be just test configuration issue.
>>>
>>> John
>>>
>>> On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger wrote:
>>>
>>>> Thanks, Junping
>>>>
>>>> +1 (non-binding) looks good from my end
>>>>
>>>> - Verified all hashes and checksums
>>>> - Built from source on macOS 10.12.6, Java 1.8.0u65
>>>> - Deployed a pseudo cluster
>>>> - Ran some example jobs
>>>>
>>>> Eric
>>>>
>>>> On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko <shv.had...@gmail.com> wrote:
>>>>
>>>>> Downloaded again, now the checksums look good. Sorry my fault
>>>>>
>>>>> Thanks,
>>>>> --Konstantin
>>>>>
>>>>> On Mon, Dec 11, 2017 at 5:03 PM, Junping Du wrote:
>>>>>
>>>>>> Hi Konstantin,
>>>>>>
>>>>>> Thanks for verification and comments. I was verifying your example
>>>>>> below but found it is actually matched:
>>>>>>
>>>>>> *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz*
>>>>>> *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = e53d04477b85e8b58ac0a26468f04736*
>>>>>>
>>>>>> What's your md5 checksum for given source tar ball?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Junping
>>>>>>
>>>>>> --
>>>>>> *From:* Konstantin Shvachko
>>>>>> *Sent:* Saturday, December 9, 2017 11:06 AM
>>>>>> *To:* Junping Du
>>>>>> *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
>>>>>> *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
>>>>>>
>>>>>> Hey Junping,
>>>>>>
>>>>>> Could you pls upload mds relative to the tar.gz etc. files rather than
>>>>>> their full path
>>>>>> /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz:
>>>>>> MD5 = E5 3D 04 47 7B 85 E8 B5 8A C0 A2 64 68 F0 47 36
>>>>>>
>>>>>> Otherwise mds don't match for me.
>>>>>>
>>>>>> Thanks,
>>>>>> --Konstantin
>>>>>>
>>>>>> On Tue, Dec 5, 2017 at 1:58 AM, Junping Du wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>> I've created the first release candidate (RC0) for Apache Hadoop
>>>>>>> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79
>>>>>>> important fixes and improvements.
>>>>>>>
>>>>>>> The RC artifacts are available at:
>>>>>>> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
>>>>>>>
>>>>>>> The RC tag in git is: release-2.8.3-RC0
>>>>>>>
>>>>>>> The maven artifacts are available via repository.apache.org at:
>>>>>>> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>>>>>>>
>>>>>>> Please try the release and vote; the vote will run for the usual 5
>>>>>>> working days, ending on 12/12/2017 PST time.
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Junping
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
Thanks Andrew for the huge effort.

+1 (non-binding)
- Downloaded binary tarball and verified md5
- Ran RM HA and verified manual failover
- Verified add/remove/update scheduler configuration API (CLI/REST) works
  for leveldb/zookeeper backend
- Verified scheduler configuration changes persisted on restart/failover
- Verified "yarn rmadmin -refreshQueues" works when scheduler configuration
  API disabled, and does not work when scheduler configuration API enabled

Jonathan Hung

On Tue, Dec 12, 2017 at 5:44 PM, Junping Du wrote:
> Thanks Andrew for pushing new RC for 3.0.0. I was out last week, just get
> chance to validate new RC now.
>
> Basically, I found two critical issues with the same rolling upgrade
> scenario as where HADOOP-15059 get found previously:
> HDFS-12920, we changed value format for some hdfs configurations that old
> version MR client doesn't understand when fetching these configurations.
> Some quick workarounds are to add old value (without time unit) in
> hdfs-site.xml to override new default values but will generate many
> annoying warnings. I provided my fix suggestions on the JIRA already for
> more discussion.
> The other one is YARN-7646. After we workaround HDFS-12920, will hit the
> issue that old version MR AppMaster cannot communicate with new version of
> YARN RM - could be related to resource profile changes from YARN side but
> root cause are still in investigation.
>
> The first issue may not belong to a blocker given we can workaround this
> without code change. I am not sure if we can workaround 2nd issue so far.
> If not, we may have to fix this or compromise with withdrawing support of
> rolling upgrade or calling it a stable release.
>
> Thanks,
>
> Junping
>
> From: Robert Kanter
> Sent: Tuesday, December 12, 2017 3:10 PM
> To: Arun Suresh
> Cc: Andrew Wang; Lei Xu; Wei-Chiu Chuang; Ajay Kumar; Xiao Chen; Aaron T.
> Myers; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> yarn-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
>
> +1 (binding)
>
> + Downloaded the binary release
> + Deployed on a 3 node cluster on CentOS 7.3
> + Ran some MR jobs, clicked around the UI, etc
> + Ran some CLI commands (yarn logs, etc)
>
> Good job everyone on Hadoop 3!
>
> - Robert
>
> On Tue, Dec 12, 2017 at 1:56 PM, Arun Suresh wrote:
>
>> +1 (binding)
>>
>> - Verified signatures of the source tarball.
>> - built from source - using the docker build environment.
>> - set up a pseudo-distributed test cluster.
>> - ran basic HDFS commands
>> - ran some basic MR jobs
>>
>> Cheers
>> -Arun
>>
>> On Tue, Dec 12, 2017 at 1:52 PM, Andrew Wang wrote:
>>
>>> Hi everyone,
>>>
>>> As a reminder, this vote closes tomorrow at 12:31pm, so please give it a
>>> whack if you have time. There are already enough binding +1s to pass this
>>> vote, but it'd be great to get additional validation.
>>>
>>> Thanks to everyone who's voted thus far!
>>>
>>> Best,
>>> Andrew
>>>
>>> On Tue, Dec 12, 2017 at 11:08 AM, Lei Xu wrote:
>>>
>>>> +1 (binding)
>>>>
>>>> * Verified src tarball and bin tarball, verified md5 of each.
>>>> * Build source with -Pdist,native
>>>> * Started a pseudo cluster
>>>> * Run ec -listPolicies / -getPolicy / -setPolicy on / , and run hdfs
>>>>   dfs put/get/cat on "/" with XOR-2-1 policy.
>>>>
>>>> Thanks Andrew for this great effort!
>>>>
>>>> Best,
>>>>
>>>> On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang <andrew.w...@cloudera.com> wrote:
>>>>> Hi Wei-Chiu,
>>>>>
>>>>> The patchprocess directory is left over from the create-release process,
>>>>> and it looks empty to me. We should still file a create-release JIRA to fix
>>>>> this, but I think this is not a blocker. Would you agree?
>>>>>
>>>>> Best,
>>>>> Andrew
>>>>>
>>>>> On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang <weic...@cloudera.com> wrote:
>>>>>
>>>>>> Hi Andrew, thanks the tremendous effort.
>>>>>> I found an empty "patchprocess" directory in the source tarball, that is
>>>>>> not there if you clone from github. Any chance you might have some leftover
>>>>>> trash when you made the tarball?
>>>>>> Not wanting to nitpick, but you might want to double check so we don't
>>>>>> ship anything private to you in public :)
>>>>>>
>>>>>> On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar <ajay.ku...@hortonworks.com> wrote:
>>>>>>
>>>>>>> +1 (non-binding)
>>>>>>> Thanks for driving this, Andrew Wang!!
>>>>>>>
>>>>>>> - downloaded the src tarball and verified md5 checksum
>>>>>>> - built from source with jdk 1.8.0_111-b14
>>>>>>> - brought up a pseudo distributed cluster
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
Thanks Andrew for pushing new RC for 3.0.0. I was out last week, just get chance to validate new RC now. Basically, I found two critical issues with the same rolling upgrade scenario as where HADOOP-15059 get found previously: HDFS-12920, we changed value format for some hdfs configurations that old version MR client doesn't understand when fetching these configurations. Some quick workarounds are to add old value (without time unit) in hdfs-site.xml to override new default values but will generate many annoying warnings. I provided my fix suggestions on the JIRA already for more discussion. The other one is YARN-7646. After we workaround HDFS-12920, will hit the issue that old version MR AppMaster cannot communicate with new version of YARN RM - could be related to resource profile changes from YARN side but root cause are still in investigation. The first issue may not belong to a blocker given we can workaround this without code change. I am not sure if we can workaround 2nd issue so far. If not, we may have to fix this or compromise with withdrawing support of rolling upgrade or calling it a stable release. Thanks, Junping From: Robert Kanter Sent: Tuesday, December 12, 2017 3:10 PM To: Arun Suresh Cc: Andrew Wang; Lei Xu; Wei-Chiu Chuang; Ajay Kumar; Xiao Chen; Aaron T. Myers; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; yarn-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org Subject: Re: [VOTE] Release Apache Hadoop 3.0.0 RC1 +1 (binding) + Downloaded the binary release + Deployed on a 3 node cluster on CentOS 7.3 + Ran some MR jobs, clicked around the UI, etc + Ran some CLI commands (yarn logs, etc) Good job everyone on Hadoop 3! - Robert On Tue, Dec 12, 2017 at 1:56 PM, Arun Suresh wrote: > +1 (binding) > > - Verified signatures of the source tarball. > - built from source - using the docker build environment. > - set up a pseudo-distributed test cluster. 
> - ran basic HDFS commands > - ran some basic MR jobs > > Cheers > -Arun > > On Tue, Dec 12, 2017 at 1:52 PM, Andrew Wang > wrote: > > > Hi everyone, > > > > As a reminder, this vote closes tomorrow at 12:31pm, so please give it a > > whack if you have time. There are already enough binding +1s to pass this > > vote, but it'd be great to get additional validation. > > > > Thanks to everyone who's voted thus far! > > > > Best, > > Andrew > > > > > > > > On Tue, Dec 12, 2017 at 11:08 AM, Lei Xu wrote: > > > > > +1 (binding) > > > > > > * Verified src tarball and bin tarball, verified md5 of each. > > > * Build source with -Pdist,native > > > * Started a pseudo cluster > > > * Run ec -listPolicies / -getPolicy / -setPolicy on / , and run hdfs > > > dfs put/get/cat on "/" with XOR-2-1 policy. > > > > > > Thanks Andrew for this great effort! > > > > > > Best, > > > > > > > > > On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang > > > > wrote: > > > > Hi Wei-Chiu, > > > > > > > > The patchprocess directory is left over from the create-release > > process, > > > > and it looks empty to me. We should still file a create-release JIRA > to > > > fix > > > > this, but I think this is not a blocker. Would you agree? > > > > > > > > Best, > > > > Andrew > > > > > > > > On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang < > weic...@cloudera.com > > > > > > > wrote: > > > > > > > >> Hi Andrew, thanks the tremendous effort. > > > >> I found an empty "patchprocess" directory in the source tarball, > that > > is > > > >> not there if you clone from github. Any chance you might have some > > > leftover > > > >> trash when you made the tarball? 
> > > >> Not wanting to nitpicking, but you might want to double check so we > > > don't > > > >> ship anything private to you in public :) > > > >> > > > >> > > > >> > > > >> On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar < > > ajay.ku...@hortonworks.com > > > > > > > >> wrote: > > > >> > > > >>> +1 (non-binding) > > > >>> Thanks for driving this, Andrew Wang!! > > > >>> > > > >>> - downloaded the src tarball and verified md5 checksum > > > >>> - built from source with jdk 1.8.0_111-b14 > > > >>> - brought up a pseudo distributed cluster > > > >>> - did basic file system operations (mkdir, list, put, cat) and > > > >>> confirmed that everything was working > > > >>> - Run word count, pi and DFSIOTest > > > >>> - run hdfs and yarn, confirmed that the NN, RM web UI worked > > > >>> > > > >>> Cheers, > > > >>> Ajay > > > >>> > > > >>> On 12/11/17, 9:35 PM, "Xiao Chen" wrote: > > > >>> > > > >>> +1 (binding) > > > >>> > > > >>> - downloaded src tarball, verified md5 > > > >>> - built from source with jdk1.8.0_112 > > > >>> - started a pseudo cluster with hdfs and kms > > > >>> - sanity checked encryption related operations working > > > >>> - sanity checked webui and logs. > > > >>> > > > >>> -Xiao > > > >>> > > > >>> On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. Myers < > a...@apache.org> > > > >>> wrote: > > > >>> > > >
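The hdfs-site.xml workaround Junping describes for HDFS-12920 (pinning old-style values without time units so a 2.x MR client can still parse what it fetches) would look roughly like the fragment below. The property chosen is only an illustrative assumption, not necessarily one of the keys HDFS-12920 changed:

```xml
<!-- Sketch of the HDFS-12920 workaround: override a 3.0 default that gained
     a time-unit suffix (e.g. "30s") with the plain number an old client can
     still parse. The property name here is an illustrative assumption. -->
<configuration>
  <property>
    <name>dfs.client.datanode-restart.timeout</name>
    <!-- old-style value, no unit suffix -->
    <value>30</value>
  </property>
</configuration>
```

As Junping notes, overrides like this avoid a code change but produce many warnings on the new side.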
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
+1 Built the source code. Deployed a single node cluster and ran example jobs. Also tested RM recovery. Thanks, Chandni On Tue, Dec 12, 2017 at 4:37 PM, Jian He wrote: > +1 > I built from source code > deployed a cluster > and successfully ran jobs while restarting RM as well > > Jian > > > > On Dec 12, 2017, at 4:28 PM, John Zhuge wrote: > > > > Thanks Junping for the great effort! > > > > > > - Verified checksums and signatures of all tarballs > > - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2 > > - Verified cloud connectors: > > - All S3A integration tests > > - Deployed both binary and built source to a pseudo cluster, passed the > > following sanity tests in insecure and SSL mode: > > - HDFS basic and ACL > > - DistCp basic > > - MapReduce wordcount > > - KMS and HttpFS basic > > - Balancer start/stop > > > > > > Non-blockers > > > > - HADOOP-13030 Handle special characters in passwords in KMS startup > > script. Fixed in 2.8+. > > - NameNode servlets test failures: 403 User dr.who is unauthorized to > > access this page. Researching. Could be just test configuration issue. > > > > John > > > > On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger > > wrote: > > > >> Thanks, Junping > >> > >> +1 (non-binding) looks good from my end > >> > >> - Verified all hashes and checksums > >> - Built from source on macOS 10.12.6, Java 1.8.0u65 > >> - Deployed a pseudo cluster > >> - Ran some example jobs > >> > >> Eric > >> > >> On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko < > >> shv.had...@gmail.com> > >> wrote: > >> > >>> Downloaded again, now the checksums look good. Sorry my fault > >>> > >>> Thanks, > >>> --Konstantin > >>> > >>> On Mon, Dec 11, 2017 at 5:03 PM, Junping Du > wrote: > >>> > Hi Konstantin, > > Thanks for verification and comments. 
I was verifying your example > below but found it is actually matched: > > > *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz* > *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = > e53d04477b85e8b58ac0a26468f04736* > > What's your md5 checksum for given source tar ball? > > > Thanks, > > > Junping > > > -- > *From:* Konstantin Shvachko > *Sent:* Saturday, December 9, 2017 11:06 AM > *To:* Junping Du > *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; > mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0) > > Hey Junping, > > Could you pls upload mds relative to the tar.gz etc. files rather than > their full path > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz: > MD5 = E5 3D 04 47 7B 85 E8 B5 8A C0 A2 64 68 F0 47 36 > > Otherwise mds don't match for me. > > Thanks, > --Konstantin > > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du > >> wrote: > > > Hi all, > > I've created the first release candidate (RC0) for Apache Hadoop > > 2.8.3. This is our next maint release to follow up 2.8.2. It includes > >> 79 > > important fixes and improvements. > > > > The RC artifacts are available at: > >> http://home.apache.org/~junpin > > g_du/hadoop-2.8.3-RC0 > > > > The RC tag in git is: release-2.8.3-RC0 > > > > The maven artifacts are available via repository.apache.org at: > > https://repository.apache.org/content/repositories/ > >> orgapachehadoop-1072 > > > > Please try the release and vote; the vote will run for the > >> usual 5 > > working days, ending on 12/12/2017 PST time. > > > > Thanks, > > > > Junping > > > > > >>> > >> > > > > > > > > -- > > John > > > - > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org > >
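The .mds path mismatch Konstantin hit above can be reproduced without the release artifacts. A minimal sketch follows; the tarball is a stand-in file, and `/build/source/target/artifacts/` is the build-machine path quoted in the thread:

```shell
# Show why a checksum file recording the release manager's absolute path
# fails `md5sum -c` on anyone else's machine, while an entry written
# relative to the file verifies anywhere. The tarball content is fake;
# only the path handling matters here.
cd "$(mktemp -d)"
printf 'release bits\n' > hadoop-2.8.3-src.tar.gz

# Checksum entry as published: digest plus the path on the build machine.
md5sum hadoop-2.8.3-src.tar.gz \
  | sed 's|  |  /build/source/target/artifacts/|' > absolute.mds
md5sum -c absolute.mds 2>/dev/null \
  || echo "absolute-path entry: verification FAILED"

# Checksum entry relative to the file: verifies in any directory.
md5sum hadoop-2.8.3-src.tar.gz > relative.mds
md5sum -c relative.mds
```

The relative entry is what `md5sum -c` expects: a digest followed by a file name resolvable from the current directory.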
[jira] [Created] (YARN-7646) MR job (based on old version tarball) get failed due to incompatible resource request
Junping Du created YARN-7646: Summary: MR job (based on old version tarball) get failed due to incompatible resource request Key: YARN-7646 URL: https://issues.apache.org/jira/browse/YARN-7646 Project: Hadoop YARN Issue Type: Bug Components: yarn Reporter: Junping Du Priority: Blocker With the quick workaround for HDFS-12920 in place (setting values without time units in hdfs-site.xml), the job still fails with the following error: {noformat} 2017-12-12 16:39:13,105 ERROR [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: ERROR IN CONTACTING RM. org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=-1, maxMemory=8192 at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:275) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:240) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:256) at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:246) at org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:217) at org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92) at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:388) at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60) at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75) at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116) at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy81.allocate(Unknown Source) at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:206) at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:783) at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:280) at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:279) at java.lang.Thread.run(Thread.java:745) {noformat} It looks like incompatible change with communication between old MR client
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
+1 I built from source code deployed a cluster and successfully ran jobs while restarting RM as well Jian > On Dec 12, 2017, at 4:28 PM, John Zhuge wrote: > > Thanks Junping for the great effort! > > > - Verified checksums and signatures of all tarballs > - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2 > - Verified cloud connectors: > - All S3A integration tests > - Deployed both binary and built source to a pseudo cluster, passed the > following sanity tests in insecure and SSL mode: > - HDFS basic and ACL > - DistCp basic > - MapReduce wordcount > - KMS and HttpFS basic > - Balancer start/stop > > > Non-blockers > > - HADOOP-13030 Handle special characters in passwords in KMS startup > script. Fixed in 2.8+. > - NameNode servlets test failures: 403 User dr.who is unauthorized to > access this page. Researching. Could be just test configuration issue. > > John > > On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger > wrote: > >> Thanks, Junping >> >> +1 (non-binding) looks good from my end >> >> - Verified all hashes and checksums >> - Built from source on macOS 10.12.6, Java 1.8.0u65 >> - Deployed a pseudo cluster >> - Ran some example jobs >> >> Eric >> >> On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko < >> shv.had...@gmail.com> >> wrote: >> >>> Downloaded again, now the checksums look good. Sorry my fault >>> >>> Thanks, >>> --Konstantin >>> >>> On Mon, Dec 11, 2017 at 5:03 PM, Junping Du wrote: >>> Hi Konstantin, Thanks for verification and comments. I was verifying your example below but found it is actually matched: *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz* *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = e53d04477b85e8b58ac0a26468f04736* What's your md5 checksum for given source tar ball? 
Thanks, Junping -- *From:* Konstantin Shvachko *Sent:* Saturday, December 9, 2017 11:06 AM *To:* Junping Du *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0) Hey Junping, Could you pls upload mds relative to the tar.gz etc. files rather than their full path /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz: MD5 = E5 3D 04 47 7B 85 E8 B5 8A C0 A2 64 68 F0 47 36 Otherwise mds don't match for me. Thanks, --Konstantin On Tue, Dec 5, 2017 at 1:58 AM, Junping Du >> wrote: > Hi all, > I've created the first release candidate (RC0) for Apache Hadoop > 2.8.3. This is our next maint release to follow up 2.8.2. It includes >> 79 > important fixes and improvements. > > The RC artifacts are available at: >> http://home.apache.org/~junpin > g_du/hadoop-2.8.3-RC0 > > The RC tag in git is: release-2.8.3-RC0 > > The maven artifacts are available via repository.apache.org at: > https://repository.apache.org/content/repositories/ >> orgapachehadoop-1072 > > Please try the release and vote; the vote will run for the >> usual 5 > working days, ending on 12/12/2017 PST time. > > Thanks, > > Junping > >>> >> > > > > -- > John - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
Thanks Junping for the great effort! - Verified checksums and signatures of all tarballs - Built source with native, Azul Java 1.7.0_161 on Mac OS X 10.13.2 - Verified cloud connectors: - All S3A integration tests - Deployed both binary and built source to a pseudo cluster, passed the following sanity tests in insecure and SSL mode: - HDFS basic and ACL - DistCp basic - MapReduce wordcount - KMS and HttpFS basic - Balancer start/stop Non-blockers - HADOOP-13030 Handle special characters in passwords in KMS startup script. Fixed in 2.8+. - NameNode servlets test failures: 403 User dr.who is unauthorized to access this page. Researching. Could be just test configuration issue. John On Tue, Dec 12, 2017 at 1:10 PM, Eric Badger wrote: > Thanks, Junping > > +1 (non-binding) looks good from my end > > - Verified all hashes and checksums > - Built from source on macOS 10.12.6, Java 1.8.0u65 > - Deployed a pseudo cluster > - Ran some example jobs > > Eric > > On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko < > shv.had...@gmail.com> > wrote: > > > Downloaded again, now the checksums look good. Sorry my fault > > > > Thanks, > > --Konstantin > > > > On Mon, Dec 11, 2017 at 5:03 PM, Junping Du wrote: > > > > > Hi Konstantin, > > > > > > Thanks for verification and comments. I was verifying your example > > > below but found it is actually matched: > > > > > > > > > *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz* > > > *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = > > > e53d04477b85e8b58ac0a26468f04736* > > > > > > What's your md5 checksum for given source tar ball? 
> > > > > > > > > Thanks, > > > > > > > > > Junping > > > > > > > > > -- > > > *From:* Konstantin Shvachko > > > *Sent:* Saturday, December 9, 2017 11:06 AM > > > *To:* Junping Du > > > *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; > > > mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org > > > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0) > > > > > > Hey Junping, > > > > > > Could you pls upload mds relative to the tar.gz etc. files rather than > > > their full path > > > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz: > > >MD5 = E5 3D 04 47 7B 85 E8 B5 8A C0 A2 64 68 F0 47 36 > > > > > > Otherwise mds don't match for me. > > > > > > Thanks, > > > --Konstantin > > > > > > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du > wrote: > > > > > >> Hi all, > > >> I've created the first release candidate (RC0) for Apache Hadoop > > >> 2.8.3. This is our next maint release to follow up 2.8.2. It includes > 79 > > >> important fixes and improvements. > > >> > > >> The RC artifacts are available at: > http://home.apache.org/~junpin > > >> g_du/hadoop-2.8.3-RC0 > > >> > > >> The RC tag in git is: release-2.8.3-RC0 > > >> > > >> The maven artifacts are available via repository.apache.org at: > > >> https://repository.apache.org/content/repositories/ > orgapachehadoop-1072 > > >> > > >> Please try the release and vote; the vote will run for the > usual 5 > > >> working days, ending on 12/12/2017 PST time. > > >> > > >> Thanks, > > >> > > >> Junping > > >> > > > > > > > > > -- John
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
+1 (binding) + Downloaded the binary release + Deployed on a 3 node cluster on CentOS 7.3 + Ran some MR jobs, clicked around the UI, etc + Ran some CLI commands (yarn logs, etc) Good job everyone on Hadoop 3! - Robert On Tue, Dec 12, 2017 at 1:56 PM, Arun Suresh wrote: > +1 (binding) > > - Verified signatures of the source tarball. > - built from source - using the docker build environment. > - set up a pseudo-distributed test cluster. > - ran basic HDFS commands > - ran some basic MR jobs > > Cheers > -Arun > > On Tue, Dec 12, 2017 at 1:52 PM, Andrew Wang > wrote: > > > Hi everyone, > > > > As a reminder, this vote closes tomorrow at 12:31pm, so please give it a > > whack if you have time. There are already enough binding +1s to pass this > > vote, but it'd be great to get additional validation. > > > > Thanks to everyone who's voted thus far! > > > > Best, > > Andrew > > > > > > > > On Tue, Dec 12, 2017 at 11:08 AM, Lei Xu wrote: > > > > > +1 (binding) > > > > > > * Verified src tarball and bin tarball, verified md5 of each. > > > * Build source with -Pdist,native > > > * Started a pseudo cluster > > > * Run ec -listPolicies / -getPolicy / -setPolicy on / , and run hdfs > > > dfs put/get/cat on "/" with XOR-2-1 policy. > > > > > > Thanks Andrew for this great effort! > > > > > > Best, > > > > > > > > > On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang > > > > wrote: > > > > Hi Wei-Chiu, > > > > > > > > The patchprocess directory is left over from the create-release > > process, > > > > and it looks empty to me. We should still file a create-release JIRA > to > > > fix > > > > this, but I think this is not a blocker. Would you agree? > > > > > > > > Best, > > > > Andrew > > > > > > > > On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang < > weic...@cloudera.com > > > > > > > wrote: > > > > > > > >> Hi Andrew, thanks the tremendous effort. > > > >> I found an empty "patchprocess" directory in the source tarball, > that > > is > > > >> not there if you clone from github. 
Any chance you might have some > > > leftover > > > >> trash when you made the tarball? > > > >> Not wanting to nitpicking, but you might want to double check so we > > > don't > > > >> ship anything private to you in public :) > > > >> > > > >> > > > >> > > > >> On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar < > > ajay.ku...@hortonworks.com > > > > > > > >> wrote: > > > >> > > > >>> +1 (non-binding) > > > >>> Thanks for driving this, Andrew Wang!! > > > >>> > > > >>> - downloaded the src tarball and verified md5 checksum > > > >>> - built from source with jdk 1.8.0_111-b14 > > > >>> - brought up a pseudo distributed cluster > > > >>> - did basic file system operations (mkdir, list, put, cat) and > > > >>> confirmed that everything was working > > > >>> - Run word count, pi and DFSIOTest > > > >>> - run hdfs and yarn, confirmed that the NN, RM web UI worked > > > >>> > > > >>> Cheers, > > > >>> Ajay > > > >>> > > > >>> On 12/11/17, 9:35 PM, "Xiao Chen" wrote: > > > >>> > > > >>> +1 (binding) > > > >>> > > > >>> - downloaded src tarball, verified md5 > > > >>> - built from source with jdk1.8.0_112 > > > >>> - started a pseudo cluster with hdfs and kms > > > >>> - sanity checked encryption related operations working > > > >>> - sanity checked webui and logs. > > > >>> > > > >>> -Xiao > > > >>> > > > >>> On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. 
Myers < > a...@apache.org> > > > >>> wrote: > > > >>> > > > >>> > +1 (binding) > > > >>> > > > > >>> > - downloaded the src tarball and built the source (-Pdist > > > -Pnative) > > > >>> > - verified the checksum > > > >>> > - brought up a secure pseudo distributed cluster > > > >>> > - did some basic file system operations (mkdir, list, put, > cat) > > > and > > > >>> > confirmed that everything was working > > > >>> > - confirmed that the web UI worked > > > >>> > > > > >>> > Best, > > > >>> > Aaron > > > >>> > > > > >>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang < > > > >>> andrew.w...@cloudera.com> > > > >>> > wrote: > > > >>> > > > > >>> > > Hi all, > > > >>> > > > > > >>> > > Let me start, as always, by thanking the efforts of all the > > > >>> contributors > > > >>> > > who contributed to this release, especially those who > jumped > > on > > > >>> the > > > >>> > issues > > > >>> > > found in RC0. > > > >>> > > > > > >>> > > I've prepared RC1 for Apache Hadoop 3.0.0. This release > > > >>> incorporates 302 > > > >>> > > fixed JIRAs since the previous 3.0.0-beta1 release. > > > >>> > > > > > >>> > > You can find the artifacts here: > > > >>> > > > > > >>> > > http://home.apache.org/~wang/3.0.0-RC1/ > > > >>> > > > > > >>> > > I've done the traditional testing of building from the > source > > > >>> tarball and > > > >>> > > running a Pi job on a single node cluster. I also verified > > that > > > >>> the > > > >>> > shaded > > > >>>
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
+1 (non-binding) I tested it in a deployment with 24 nodes across 8 subclusters. Tested a few jobs reading and writing data through HDFS Router-based federation. However, jobs failed to run when setting RBF as the default filesystem, because after MAPREDUCE-6954 it tries to invoke setErasureCodingPolicy, which is not implemented. I filed HDFS-12919 to track this, but I don't think it is a blocker. Thanks, Inigo On Tue, Dec 12, 2017 at 2:43 PM, Elek, Marton wrote: > +1 (non-binding) > > * built from the source tarball (archlinux) / verified signature > * Deployed to a kubernetes cluster (10/10 datanode/nodemanager pods) > * Enabled ec on hdfs directory (hdfs cli) > * Started example yarn jobs (pi/terragen) > * checked yarn ui/ui2 > > Thanks for all the efforts. > > Marton > > > > On 12/08/2017 09:31 PM, Andrew Wang wrote: > >> Hi all, >> >> Let me start, as always, by thanking the efforts of all the contributors >> who contributed to this release, especially those who jumped on the issues >> found in RC0. >> >> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302 >> fixed JIRAs since the previous 3.0.0-beta1 release. >> >> You can find the artifacts here: >> >> http://home.apache.org/~wang/3.0.0-RC1/ >> >> I've done the traditional testing of building from the source tarball and >> running a Pi job on a single node cluster. I also verified that the shaded >> jars are not empty. >> >> Found one issue that create-release (probably due to the mvn deploy >> change) >> didn't sign the artifacts, but I fixed that by calling mvn one more time. >> Available here: >> >> https://repository.apache.org/content/repositories/orgapachehadoop-1075/ >> >> This release will run the standard 5 days, closing on Dec 13th at 12:31pm >> Pacific. My +1 to start. >> >> Best, >> Andrew >> >> > - > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org > >
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
+1 (non-binding) * built from the source tarball (archlinux) / verified signature * Deployed to a kubernetes cluster (10/10 datanode/nodemanager pods) * Enabled ec on hdfs directory (hdfs cli) * Started example yarn jobs (pi/terragen) * checked yarn ui/ui2 Thanks for all the efforts. Marton On 12/08/2017 09:31 PM, Andrew Wang wrote: Hi all, Let me start, as always, by thanking the efforts of all the contributors who contributed to this release, especially those who jumped on the issues found in RC0. I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302 fixed JIRAs since the previous 3.0.0-beta1 release. You can find the artifacts here: http://home.apache.org/~wang/3.0.0-RC1/ I've done the traditional testing of building from the source tarball and running a Pi job on a single node cluster. I also verified that the shaded jars are not empty. Found one issue that create-release (probably due to the mvn deploy change) didn't sign the artifacts, but I fixed that by calling mvn one more time. Available here: https://repository.apache.org/content/repositories/orgapachehadoop-1075/ This release will run the standard 5 days, closing on Dec 13th at 12:31pm Pacific. My +1 to start. Best, Andrew - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)
Thanks, Konstantin. Everything looks good to me. +1 (non-binding) - Verified all signatures and digests - Built from source on macOS 10.12.6, Java 1.8.0u65 - Deployed a pseudo cluster - Ran some example jobs Eric On Tue, Dec 12, 2017 at 11:01 AM, Jason Lowe wrote: > Thanks for driving the release, Konstantin! > > +1 (binding) > > - Verified signatures and digests > - Successfully performed a native build from source > - Deployed a single-node cluster > - Ran some sample jobs and checked the logs > > Jason > > > On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko > wrote: > > > Hi everybody, > > > > I updated CHANGES.txt and fixed documentation links. > > Also committed MAPREDUCE-6165, which fixes a consistently failing test. > > > > This is RC1 for the next dot release of the Apache Hadoop 2.7 line. The > > previous one, 2.7.4, was released on August 4, 2017. > > Release 2.7.5 includes critical bug fixes and optimizations. See more > > details in Release Note: > > http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html > > > > The RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/ > > > > Please give it a try and vote on this thread. The vote will run for 5 days > > ending 12/13/2017. > > > > My up-to-date public key is available from: > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS > > > > Thanks, > > --Konstantin > > >
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
+1 (binding) - Verified signatures of the source tarball. - built from source - using the docker build environment. - set up a pseudo-distributed test cluster. - ran basic HDFS commands - ran some basic MR jobs Cheers -Arun On Tue, Dec 12, 2017 at 1:52 PM, Andrew Wang wrote: > Hi everyone, > > As a reminder, this vote closes tomorrow at 12:31pm, so please give it a > whack if you have time. There are already enough binding +1s to pass this > vote, but it'd be great to get additional validation. > > Thanks to everyone who's voted thus far! > > Best, > Andrew > > > > On Tue, Dec 12, 2017 at 11:08 AM, Lei Xu wrote: > > > +1 (binding) > > > > * Verified src tarball and bin tarball, verified md5 of each. > > * Build source with -Pdist,native > > * Started a pseudo cluster > > * Run ec -listPolicies / -getPolicy / -setPolicy on / , and run hdfs > > dfs put/get/cat on "/" with XOR-2-1 policy. > > > > Thanks Andrew for this great effort! > > > > Best, > > > > > > On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang > > wrote: > > > Hi Wei-Chiu, > > > > > > The patchprocess directory is left over from the create-release > process, > > > and it looks empty to me. We should still file a create-release JIRA to > > fix > > > this, but I think this is not a blocker. Would you agree? > > > > > > Best, > > > Andrew > > > > > > On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang > > > > wrote: > > > > > >> Hi Andrew, thanks the tremendous effort. > > >> I found an empty "patchprocess" directory in the source tarball, that > is > > >> not there if you clone from github. Any chance you might have some > > leftover > > >> trash when you made the tarball? > > >> Not wanting to nitpicking, but you might want to double check so we > > don't > > >> ship anything private to you in public :) > > >> > > >> > > >> > > >> On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar < > ajay.ku...@hortonworks.com > > > > > >> wrote: > > >> > > >>> +1 (non-binding) > > >>> Thanks for driving this, Andrew Wang!! 
> > >>> > > >>> - downloaded the src tarball and verified md5 checksum > > >>> - built from source with jdk 1.8.0_111-b14 > > >>> - brought up a pseudo distributed cluster > > >>> - did basic file system operations (mkdir, list, put, cat) and > > >>> confirmed that everything was working > > >>> - Run word count, pi and DFSIOTest > > >>> - run hdfs and yarn, confirmed that the NN, RM web UI worked > > >>> > > >>> Cheers, > > >>> Ajay > > >>> > > >>> On 12/11/17, 9:35 PM, "Xiao Chen" wrote: > > >>> > > >>> +1 (binding) > > >>> > > >>> - downloaded src tarball, verified md5 > > >>> - built from source with jdk1.8.0_112 > > >>> - started a pseudo cluster with hdfs and kms > > >>> - sanity checked encryption related operations working > > >>> - sanity checked webui and logs. > > >>> > > >>> -Xiao > > >>> > > >>> On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. Myers > > >>> wrote: > > >>> > > >>> > +1 (binding) > > >>> > > > >>> > - downloaded the src tarball and built the source (-Pdist > > -Pnative) > > >>> > - verified the checksum > > >>> > - brought up a secure pseudo distributed cluster > > >>> > - did some basic file system operations (mkdir, list, put, cat) > > and > > >>> > confirmed that everything was working > > >>> > - confirmed that the web UI worked > > >>> > > > >>> > Best, > > >>> > Aaron > > >>> > > > >>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang < > > >>> andrew.w...@cloudera.com> > > >>> > wrote: > > >>> > > > >>> > > Hi all, > > >>> > > > > >>> > > Let me start, as always, by thanking the efforts of all the > > >>> contributors > > >>> > > who contributed to this release, especially those who jumped > on > > >>> the > > >>> > issues > > >>> > > found in RC0. > > >>> > > > > >>> > > I've prepared RC1 for Apache Hadoop 3.0.0. This release > > >>> incorporates 302 > > >>> > > fixed JIRAs since the previous 3.0.0-beta1 release. 
> > >>> > > > > >>> > > You can find the artifacts here: > > >>> > > > > >>> > > http://home.apache.org/~wang/3.0.0-RC1/ > > >>> > > > > >>> > > I've done the traditional testing of building from the source > > >>> tarball and > > >>> > > running a Pi job on a single node cluster. I also verified > that > > >>> the > > >>> > shaded > > >>> > > jars are not empty. > > >>> > > > > >>> > > Found one issue that create-release (probably due to the mvn > > >>> deploy > > >>> > change) > > >>> > > didn't sign the artifacts, but I fixed that by calling mvn > one > > >>> more time. > > >>> > > Available here: > > >>> > > > > >>> > > https://repository.apache.org/content/repositories/orgapache > > >>> hadoop-1075/ > > >>> > > > > >>> > > This release will run the standard 5 days, closing on Dec > 13th > > at > > >>> 12:31pm > > >>> > > Pacific. My +1 to start. > > >>> > > > > >>> > > Best, > > >>> > > Andrew >
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
Hi everyone, As a reminder, this vote closes tomorrow at 12:31pm, so please give it a whack if you have time. There are already enough binding +1s to pass this vote, but it'd be great to get additional validation. Thanks to everyone who's voted thus far! Best, Andrew On Tue, Dec 12, 2017 at 11:08 AM, Lei Xu wrote: > +1 (binding) > > * Verified src tarball and bin tarball, verified md5 of each. > * Build source with -Pdist,native > * Started a pseudo cluster > * Run ec -listPolicies / -getPolicy / -setPolicy on / , and run hdfs > dfs put/get/cat on "/" with XOR-2-1 policy. > > Thanks Andrew for this great effort! > > Best, > > > On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang > wrote: > > Hi Wei-Chiu, > > > > The patchprocess directory is left over from the create-release process, > > and it looks empty to me. We should still file a create-release JIRA to > fix > > this, but I think this is not a blocker. Would you agree? > > > > Best, > > Andrew > > > > On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang > > wrote: > > > >> Hi Andrew, thanks the tremendous effort. > >> I found an empty "patchprocess" directory in the source tarball, that is > >> not there if you clone from github. Any chance you might have some > leftover > >> trash when you made the tarball? > >> Not wanting to nitpicking, but you might want to double check so we > don't > >> ship anything private to you in public :) > >> > >> > >> > >> On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar > > >> wrote: > >> > >>> +1 (non-binding) > >>> Thanks for driving this, Andrew Wang!! 
> >>> > >>> - downloaded the src tarball and verified md5 checksum > >>> - built from source with jdk 1.8.0_111-b14 > >>> - brought up a pseudo distributed cluster > >>> - did basic file system operations (mkdir, list, put, cat) and > >>> confirmed that everything was working > >>> - Run word count, pi and DFSIOTest > >>> - run hdfs and yarn, confirmed that the NN, RM web UI worked > >>> > >>> Cheers, > >>> Ajay > >>> > >>> On 12/11/17, 9:35 PM, "Xiao Chen" wrote: > >>> > >>> +1 (binding) > >>> > >>> - downloaded src tarball, verified md5 > >>> - built from source with jdk1.8.0_112 > >>> - started a pseudo cluster with hdfs and kms > >>> - sanity checked encryption related operations working > >>> - sanity checked webui and logs. > >>> > >>> -Xiao > >>> > >>> On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. Myers > >>> wrote: > >>> > >>> > +1 (binding) > >>> > > >>> > - downloaded the src tarball and built the source (-Pdist > -Pnative) > >>> > - verified the checksum > >>> > - brought up a secure pseudo distributed cluster > >>> > - did some basic file system operations (mkdir, list, put, cat) > and > >>> > confirmed that everything was working > >>> > - confirmed that the web UI worked > >>> > > >>> > Best, > >>> > Aaron > >>> > > >>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang < > >>> andrew.w...@cloudera.com> > >>> > wrote: > >>> > > >>> > > Hi all, > >>> > > > >>> > > Let me start, as always, by thanking the efforts of all the > >>> contributors > >>> > > who contributed to this release, especially those who jumped on > >>> the > >>> > issues > >>> > > found in RC0. > >>> > > > >>> > > I've prepared RC1 for Apache Hadoop 3.0.0. This release > >>> incorporates 302 > >>> > > fixed JIRAs since the previous 3.0.0-beta1 release. 
> >>> > > > >>> > > You can find the artifacts here: > >>> > > > >>> > > http://home.apache.org/~wang/3.0.0-RC1/ > >>> > > > >>> > > I've done the traditional testing of building from the source > >>> tarball and > >>> > > running a Pi job on a single node cluster. I also verified that > >>> the > >>> > shaded > >>> > > jars are not empty. > >>> > > > >>> > > Found one issue that create-release (probably due to the mvn > >>> deploy > >>> > change) > >>> > > didn't sign the artifacts, but I fixed that by calling mvn one > >>> more time. > >>> > > Available here: > >>> > > > >>> > > https://repository.apache.org/content/repositories/orgapache > >>> hadoop-1075/ > >>> > > > >>> > > This release will run the standard 5 days, closing on Dec 13th > at > >>> 12:31pm > >>> > > Pacific. My +1 to start. > >>> > > > >>> > > Best, > >>> > > Andrew > >>> > > > >>> > > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> - > >>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org > >>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org > >>> > >> > >> > >> > >> > > > > -- > Lei (Eddy) Xu > Software Engineer, Cloudera >
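The basic verification steps repeated in the votes above (pseudo-distributed cluster, mkdir/list/put/cat, word count and pi examples) can be sketched as a small script. This is a hedged sketch, not part of any release process: the `/tmp/smoke` path and the examples-jar location are illustrative and vary by install layout.

```shell
# Illustrative smoke test mirroring the checklists in the vote mails above.
# Assumes HADOOP_HOME is the cwd; paths are examples, not canonical.
smoke_test_fs() {
  hdfs dfs -mkdir -p /tmp/smoke
  hdfs dfs -ls /
  hdfs dfs -put /etc/hosts /tmp/smoke/hosts
  hdfs dfs -cat /tmp/smoke/hosts
  # Pi example, as run in several of the votes; the jar path below is a
  # typical layout, not guaranteed for every release.
  yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10
}
```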
[jira] [Created] (YARN-7645) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is flakey with FairScheduler
Robert Kanter created YARN-7645: --- Summary: TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is flakey with FairScheduler Key: YARN-7645 URL: https://issues.apache.org/jira/browse/YARN-7645 Project: Hadoop YARN Issue Type: Bug Components: test Affects Versions: 3.0.0 Reporter: Robert Kanter Assignee: Robert Kanter We've noticed some flakiness in {{TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers}} when using {{FairScheduler}}: {noformat} java.lang.AssertionError: Attempt state is not correct (timeout). expected: but was: at org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.amRestartTests(TestContainerResourceUsage.java:275) at org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers(TestContainerResourceUsage.java:254) {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
Thanks, Junping +1 (non-binding) looks good from my end - Verified all hashes and checksums - Built from source on macOS 10.12.6, Java 1.8.0u65 - Deployed a pseudo cluster - Ran some example jobs Eric On Tue, Dec 12, 2017 at 12:55 PM, Konstantin Shvachko wrote: > Downloaded again, now the checksums look good. Sorry my fault > > Thanks, > --Konstantin > > On Mon, Dec 11, 2017 at 5:03 PM, Junping Du wrote: > > > Hi Konstantin, > > > > Thanks for verification and comments. I was verifying your example > > below but found it is actually matched: > > > > > > *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz* > > *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = > > e53d04477b85e8b58ac0a26468f04736* > > > > What's your md5 checksum for given source tar ball? > > > > > > Thanks, > > > > > > Junping > > > > > > -- > > *From:* Konstantin Shvachko > > *Sent:* Saturday, December 9, 2017 11:06 AM > > *To:* Junping Du > > *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; > > mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org > > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0) > > > > Hey Junping, > > > > Could you pls upload mds relative to the tar.gz etc. files rather than > > their full path > > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz: > >MD5 = E5 3D 04 47 7B 85 E8 B5 8A C0 A2 64 68 F0 47 36 > > > > Otherwise mds don't match for me. > > > > Thanks, > > --Konstantin > > > > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du wrote: > > > >> Hi all, > >> I've created the first release candidate (RC0) for Apache Hadoop > >> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79 > >> important fixes and improvements. 
> >> > >> The RC artifacts are available at: http://home.apache.org/~junpin > >> g_du/hadoop-2.8.3-RC0 > >> > >> The RC tag in git is: release-2.8.3-RC0 > >> > >> The maven artifacts are available via repository.apache.org at: > >> https://repository.apache.org/content/repositories/orgapachehadoop-1072 > >> > >> Please try the release and vote; the vote will run for the usual 5 > >> working days, ending on 12/12/2017 PST time. > >> > >> Thanks, > >> > >> Junping > >> > > > > >
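A minimal sketch of the digest check discussed in this thread, assuming GNU coreutils `md5sum` (macOS's `md5` prints a different format, which is one source of apparent mismatches). The tarball here is a stand-in file, not the real release artifact; the point Konstantin raised is that the checksum file must name the bare file, not a full build path like `/build/source/target/artifacts/...`, or local verification fails.

```shell
# Stand-in demonstration of verifying a release tarball's md5 digest.
cd "$(mktemp -d)"
tarball=hadoop-2.8.3-src.tar.gz
printf 'release bits' > "$tarball"     # stand-in for the real download
# The checksum file should reference the bare file name (relative path),
# not the release manager's full build path, so -c works locally.
md5sum "$tarball" > "$tarball.md5"
md5sum -c "$tarball.md5"               # prints "<name>: OK" on a match
```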
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
+1 (binding) * Verified src tarball and bin tarball, verified md5 of each. * Build source with -Pdist,native * Started a pseudo cluster * Run ec -listPolicies / -getPolicy / -setPolicy on / , and run hdfs dfs put/get/cat on "/" with XOR-2-1 policy. Thanks Andrew for this great effort! Best, On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang wrote: > Hi Wei-Chiu, > > The patchprocess directory is left over from the create-release process, > and it looks empty to me. We should still file a create-release JIRA to fix > this, but I think this is not a blocker. Would you agree? > > Best, > Andrew > > On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang > wrote: > >> Hi Andrew, thanks the tremendous effort. >> I found an empty "patchprocess" directory in the source tarball, that is >> not there if you clone from github. Any chance you might have some leftover >> trash when you made the tarball? >> Not wanting to nitpicking, but you might want to double check so we don't >> ship anything private to you in public :) >> >> >> >> On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar >> wrote: >> >>> +1 (non-binding) >>> Thanks for driving this, Andrew Wang!! >>> >>> - downloaded the src tarball and verified md5 checksum >>> - built from source with jdk 1.8.0_111-b14 >>> - brought up a pseudo distributed cluster >>> - did basic file system operations (mkdir, list, put, cat) and >>> confirmed that everything was working >>> - Run word count, pi and DFSIOTest >>> - run hdfs and yarn, confirmed that the NN, RM web UI worked >>> >>> Cheers, >>> Ajay >>> >>> On 12/11/17, 9:35 PM, "Xiao Chen" wrote: >>> >>> +1 (binding) >>> >>> - downloaded src tarball, verified md5 >>> - built from source with jdk1.8.0_112 >>> - started a pseudo cluster with hdfs and kms >>> - sanity checked encryption related operations working >>> - sanity checked webui and logs. >>> >>> -Xiao >>> >>> On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. 
Myers >>> wrote: >>> >>> > +1 (binding) >>> > >>> > - downloaded the src tarball and built the source (-Pdist -Pnative) >>> > - verified the checksum >>> > - brought up a secure pseudo distributed cluster >>> > - did some basic file system operations (mkdir, list, put, cat) and >>> > confirmed that everything was working >>> > - confirmed that the web UI worked >>> > >>> > Best, >>> > Aaron >>> > >>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang < >>> andrew.w...@cloudera.com> >>> > wrote: >>> > >>> > > Hi all, >>> > > >>> > > Let me start, as always, by thanking the efforts of all the >>> contributors >>> > > who contributed to this release, especially those who jumped on >>> the >>> > issues >>> > > found in RC0. >>> > > >>> > > I've prepared RC1 for Apache Hadoop 3.0.0. This release >>> incorporates 302 >>> > > fixed JIRAs since the previous 3.0.0-beta1 release. >>> > > >>> > > You can find the artifacts here: >>> > > >>> > > http://home.apache.org/~wang/3.0.0-RC1/ >>> > > >>> > > I've done the traditional testing of building from the source >>> tarball and >>> > > running a Pi job on a single node cluster. I also verified that >>> the >>> > shaded >>> > > jars are not empty. >>> > > >>> > > Found one issue that create-release (probably due to the mvn >>> deploy >>> > change) >>> > > didn't sign the artifacts, but I fixed that by calling mvn one >>> more time. >>> > > Available here: >>> > > >>> > > https://repository.apache.org/content/repositories/orgapache >>> hadoop-1075/ >>> > > >>> > > This release will run the standard 5 days, closing on Dec 13th at >>> 12:31pm >>> > > Pacific. My +1 to start. 
>>> > > >>> > > Best, >>> > > Andrew >>> > > >>> > >>> >>> >>> >>> >>> >>> >>> >>> - >>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org >>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org >>> >> >> >> >> -- Lei (Eddy) Xu Software Engineer, Cloudera - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
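The erasure-coding checks in the vote above can be scripted; the `hdfs ec` subcommands follow the 3.0.0 CLI (`-listPolicies`, `-setPolicy`, `-getPolicy`), while the wrapper function name and the sample file are made up for illustration.

```shell
# Illustrative version of the EC verification steps from the vote mail.
ec_smoke_test() {
  hdfs ec -listPolicies
  hdfs ec -setPolicy -path / -policy XOR-2-1
  hdfs ec -getPolicy -path /
  # Round-trip a small file on the now EC-enabled root directory.
  hdfs dfs -put /etc/hosts /hosts
  hdfs dfs -cat /hosts
}
```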
[jira] [Created] (YARN-7644) NM gets backed up deleting docker containers
Eric Badger created YARN-7644: - Summary: NM gets backed up deleting docker containers Key: YARN-7644 URL: https://issues.apache.org/jira/browse/YARN-7644 Project: Hadoop YARN Issue Type: Sub-task Reporter: Eric Badger Assignee: Eric Badger We are sending a {{docker stop}} to the docker container with a timeout of 10 seconds when we shut down a container. If the container does not stop after 10 seconds then we force kill it. However, the {{docker stop}} command is a blocking call. So in cases where lots of containers don't go down with the initial SIGTERM, we have to wait 10+ seconds for the {{docker stop}} to return. This ties up the ContainerLaunch handler and so these kill events back up. It also appears to be backing up new container launches as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
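One way to picture the fix the report implies: run the stop-then-kill sequence per container concurrently instead of serially, so a blocking `docker stop` for one container cannot delay the others. This is a hypothetical shell sketch, not the NM's actual implementation; the function names are invented, while the `docker stop -t 10` grace-period behavior (SIGTERM, then SIGKILL after the timeout) is the standard Docker CLI semantics described in the report.

```shell
# Hypothetical sketch: parallelize container teardown so one slow
# container does not tie up the handler for its full 10s grace period.
stop_container() {
  local cid="$1"
  # 'docker stop -t 10' blocks until the container exits or 10s elapse
  # (docker then sends SIGKILL itself); 'docker kill' is a last resort.
  docker stop -t 10 "$cid" || docker kill "$cid"
}

reap_containers() {
  # One background job per container instead of a serial loop.
  for cid in "$@"; do
    stop_container "$cid" &
  done
  wait  # reap all background stop jobs
}
```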
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
Downloaded again, now the checksums look good. Sorry my fault Thanks, --Konstantin On Mon, Dec 11, 2017 at 5:03 PM, Junping Du wrote: > Hi Konstantin, > > Thanks for verification and comments. I was verifying your example > below but found it is actually matched: > > > *jduMBP:hadoop-2.8.3 jdu$ md5 ~/Downloads/hadoop-2.8.3-src.tar.gz* > *MD5 (/Users/jdu/Downloads/hadoop-2.8.3-src.tar.gz) = > e53d04477b85e8b58ac0a26468f04736* > > What's your md5 checksum for given source tar ball? > > > Thanks, > > > Junping > > > -- > *From:* Konstantin Shvachko > *Sent:* Saturday, December 9, 2017 11:06 AM > *To:* Junping Du > *Cc:* common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; > mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0) > > Hey Junping, > > Could you pls upload mds relative to the tar.gz etc. files rather than > their full path > /build/source/target/artifacts/hadoop-2.8.3-src.tar.gz: >MD5 = E5 3D 04 47 7B 85 E8 B5 8A C0 A2 64 68 F0 47 36 > > Otherwise mds don't match for me. > > Thanks, > --Konstantin > > On Tue, Dec 5, 2017 at 1:58 AM, Junping Du wrote: > >> Hi all, >> I've created the first release candidate (RC0) for Apache Hadoop >> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79 >> important fixes and improvements. >> >> The RC artifacts are available at: http://home.apache.org/~junpin >> g_du/hadoop-2.8.3-RC0 >> >> The RC tag in git is: release-2.8.3-RC0 >> >> The maven artifacts are available via repository.apache.org at: >> https://repository.apache.org/content/repositories/orgapachehadoop-1072 >> >> Please try the release and vote; the vote will run for the usual 5 >> working days, ending on 12/12/2017 PST time. >> >> Thanks, >> >> Junping >> > >
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
Hi Wei-Chiu, The patchprocess directory is left over from the create-release process, and it looks empty to me. We should still file a create-release JIRA to fix this, but I think this is not a blocker. Would you agree? Best, Andrew On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang wrote: > Hi Andrew, thanks the tremendous effort. > I found an empty "patchprocess" directory in the source tarball, that is > not there if you clone from github. Any chance you might have some leftover > trash when you made the tarball? > Not wanting to nitpicking, but you might want to double check so we don't > ship anything private to you in public :) > > > > On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar > wrote: > >> +1 (non-binding) >> Thanks for driving this, Andrew Wang!! >> >> - downloaded the src tarball and verified md5 checksum >> - built from source with jdk 1.8.0_111-b14 >> - brought up a pseudo distributed cluster >> - did basic file system operations (mkdir, list, put, cat) and >> confirmed that everything was working >> - Run word count, pi and DFSIOTest >> - run hdfs and yarn, confirmed that the NN, RM web UI worked >> >> Cheers, >> Ajay >> >> On 12/11/17, 9:35 PM, "Xiao Chen" wrote: >> >> +1 (binding) >> >> - downloaded src tarball, verified md5 >> - built from source with jdk1.8.0_112 >> - started a pseudo cluster with hdfs and kms >> - sanity checked encryption related operations working >> - sanity checked webui and logs. >> >> -Xiao >> >> On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. 
Myers >> wrote: >> >> > +1 (binding) >> > >> > - downloaded the src tarball and built the source (-Pdist -Pnative) >> > - verified the checksum >> > - brought up a secure pseudo distributed cluster >> > - did some basic file system operations (mkdir, list, put, cat) and >> > confirmed that everything was working >> > - confirmed that the web UI worked >> > >> > Best, >> > Aaron >> > >> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang < >> andrew.w...@cloudera.com> >> > wrote: >> > >> > > Hi all, >> > > >> > > Let me start, as always, by thanking the efforts of all the >> contributors >> > > who contributed to this release, especially those who jumped on >> the >> > issues >> > > found in RC0. >> > > >> > > I've prepared RC1 for Apache Hadoop 3.0.0. This release >> incorporates 302 >> > > fixed JIRAs since the previous 3.0.0-beta1 release. >> > > >> > > You can find the artifacts here: >> > > >> > > http://home.apache.org/~wang/3.0.0-RC1/ >> > > >> > > I've done the traditional testing of building from the source >> tarball and >> > > running a Pi job on a single node cluster. I also verified that >> the >> > shaded >> > > jars are not empty. >> > > >> > > Found one issue that create-release (probably due to the mvn >> deploy >> > change) >> > > didn't sign the artifacts, but I fixed that by calling mvn one >> more time. >> > > Available here: >> > > >> > > https://repository.apache.org/content/repositories/orgapache >> hadoop-1075/ >> > > >> > > This release will run the standard 5 days, closing on Dec 13th at >> 12:31pm >> > > Pacific. My +1 to start. >> > > >> > > Best, >> > > Andrew >> > > >> > >> >> >> >> >> >> >> >> - >> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org >> For additional commands, e-mail: common-dev-h...@hadoop.apache.org >> > > > >
Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
Hi Andrew, thanks for the tremendous effort. I found an empty "patchprocess" directory in the source tarball, that is not there if you clone from github. Any chance you might have some leftover trash when you made the tarball? Not to nitpick, but you might want to double check so we don't ship anything private to you in public :) On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar wrote: > +1 (non-binding) > Thanks for driving this, Andrew Wang!! > > - downloaded the src tarball and verified md5 checksum > - built from source with jdk 1.8.0_111-b14 > - brought up a pseudo distributed cluster > - did basic file system operations (mkdir, list, put, cat) and > confirmed that everything was working > - Run word count, pi and DFSIOTest > - run hdfs and yarn, confirmed that the NN, RM web UI worked > > Cheers, > Ajay > > On 12/11/17, 9:35 PM, "Xiao Chen" wrote: > > +1 (binding) > > - downloaded src tarball, verified md5 > - built from source with jdk1.8.0_112 > - started a pseudo cluster with hdfs and kms > - sanity checked encryption related operations working > - sanity checked webui and logs. > > -Xiao > > On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. Myers > wrote: > > > +1 (binding) > > > > - downloaded the src tarball and built the source (-Pdist -Pnative) > > - verified the checksum > > - brought up a secure pseudo distributed cluster > > - did some basic file system operations (mkdir, list, put, cat) and > > confirmed that everything was working > > - confirmed that the web UI worked > > > > Best, > > Aaron > > > > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang < > andrew.w...@cloudera.com> > > wrote: > > > > > Hi all, > > > > > > Let me start, as always, by thanking the efforts of all the > contributors > > > who contributed to this release, especially those who jumped on the > > issues > > > found in RC0. > > > > > > I've prepared RC1 for Apache Hadoop 3.0.0. This release > incorporates 302 > > > fixed JIRAs since the previous 3.0.0-beta1 release. 
> > > > > > You can find the artifacts here: > > > > > > http://home.apache.org/~wang/3.0.0-RC1/ > > > > > > I've done the traditional testing of building from the source > tarball and > > > running a Pi job on a single node cluster. I also verified that the > > shaded > > > jars are not empty. > > > > > > Found one issue that create-release (probably due to the mvn deploy > > change) > > > didn't sign the artifacts, but I fixed that by calling mvn one > more time. > > > Available here: > > > > > > https://repository.apache.org/content/repositories/orgapache > hadoop-1075/ > > > > > > This release will run the standard 5 days, closing on Dec 13th at > 12:31pm > > > Pacific. My +1 to start. > > > > > > Best, > > > Andrew > > > > > > > > > > > > > - > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org > For additional commands, e-mail: common-dev-h...@hadoop.apache.org >
Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)
Thanks for driving the release, Konstantin! +1 (binding) - Verified signatures and digests - Successfully performed a native build from source - Deployed a single-node cluster - Ran some sample jobs and checked the logs Jason On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko wrote: > Hi everybody, > > I updated CHANGES.txt and fixed documentation links. > Also committed MAPREDUCE-6165, which fixes a consistently failing test. > > This is RC1 for the next dot release of Apache Hadoop 2.7 line. The > previous one 2.7.4 was release August 4, 2017. > Release 2.7.5 includes critical bug fixes and optimizations. See more > details in Release Note: > http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html > > The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/ > > Please give it a try and vote on this thread. The vote will run for 5 days > ending 12/13/2017. > > My up to date public key is available from: > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS > > Thanks, > --Konstantin >
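The "verified signatures and digests" step from checklists like Jason's can be sketched as below. The KEYS URL is the one given in Konstantin's announcement; the function name and tarball argument are illustrative.

```shell
# Illustrative signature check for a release artifact.
verify_release() {
  local tarball="$1"
  # Import the release managers' public keys, then check the detached
  # .asc signature that ships alongside each artifact.
  curl -sO https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
  gpg --import KEYS
  gpg --verify "$tarball.asc" "$tarball"
}
```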
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
Thanks for driving this release, Junping! +1 (binding) - Verified signatures and digests - Successfully performed native build from source - Deployed a single-node cluster - Ran some test jobs and examined the logs Jason On Tue, Dec 5, 2017 at 3:58 AM, Junping Du wrote: > Hi all, > I've created the first release candidate (RC0) for Apache Hadoop > 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79 > important fixes and improvements. > > The RC artifacts are available at: http://home.apache.org/~ > junping_du/hadoop-2.8.3-RC0 > > The RC tag in git is: release-2.8.3-RC0 > > The maven artifacts are available via repository.apache.org at: > https://repository.apache.org/content/repositories/orgapachehadoop-1072 > > Please try the release and vote; the vote will run for the usual 5 > working days, ending on 12/12/2017 PST time. > > Thanks, > > Junping >
Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)
+1 non-binding. - Built and installed from source on a pseudo distributed cluster. - Ran sample jobs like wordcount, sleep etc. - Ran Tez 0.9 sample jobs. Regards, Kuhu On Mon, Dec 11, 2017 at 7:31 PM, Brahma Reddy Battula wrote: > +1 (non-binding), thanks Junping for driving this. > > > --Built from the source > --Installaed 3 Node HA cluster > --Verified Basic shell Commands > --Browsed the HDFS/YARN web UI > --Ran sample pi,wordcount jobs > > --Brahma Reddy Battula > > > On Tue, Dec 5, 2017 at 3:28 PM, Junping Du wrote: > > > Hi all, > > I've created the first release candidate (RC0) for Apache Hadoop > > 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79 > > important fixes and improvements. > > > > The RC artifacts are available at: http://home.apache.org/~ > > junping_du/hadoop-2.8.3-RC0 > > > > The RC tag in git is: release-2.8.3-RC0 > > > > The maven artifacts are available via repository.apache.org at: > > https://repository.apache.org/content/repositories/orgapachehadoop-1072 > > > > Please try the release and vote; the vote will run for the usual 5 > > working days, ending on 12/12/2017 PST time. > > > > Thanks, > > > > Junping > > > > > > -- > > > > --Brahma Reddy Battula >
[jira] [Created] (YARN-7643) Handle recovery of applications on auto-created leaf queues
Suma Shivaprasad created YARN-7643: -- Summary: Handle recovery of applications on auto-created leaf queues Key: YARN-7643 URL: https://issues.apache.org/jira/browse/YARN-7643 Project: Hadoop YARN Issue Type: Sub-task Reporter: Suma Shivaprasad Assignee: Suma Shivaprasad CapacityScheduler application recovery should auto-create the leaf queue if it doesn't exist. RMAppManager also needs to set the queue-mapping placement context so that the scheduler has the necessary context to recreate the queue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
[jira] [Created] (YARN-7642) Container execution type is not updated after promotion/demotion in NMContext
Weiwei Yang created YARN-7642: - Summary: Container execution type is not updated after promotion/demotion in NMContext Key: YARN-7642 URL: https://issues.apache.org/jira/browse/YARN-7642 Project: Hadoop YARN Issue Type: Bug Components: nodemanager Affects Versions: 2.9.0 Reporter: Weiwei Yang Assignee: Weiwei Yang Found this bug while working on YARN-7617. After calling API to promote a container from OPPORTUNISTIC to GUARANTEED, node manager web page still shows the container execution type as OPPORTUNISTIC. Looks like the container execution type in NMContext was not updated accordingly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
[jira] [Created] (YARN-7641) Allow filter on logs page
Vasudevan Skm created YARN-7641: --- Summary: Allow filter on logs page Key: YARN-7641 URL: https://issues.apache.org/jira/browse/YARN-7641 Project: Hadoop YARN Issue Type: Sub-task Components: yarn-ui-v2 Reporter: Vasudevan Skm Assignee: Vasudevan Skm The select boxes on the Application logs page are not searchable. This doesn't scale when there are many containers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
[jira] [Created] (YARN-7640) Retrospect to enable CORS related configs by default in YARN
Sunil G created YARN-7640: - Summary: Retrospect to enable CORS related configs by default in YARN Key: YARN-7640 URL: https://issues.apache.org/jira/browse/YARN-7640 Project: Hadoop YARN Issue Type: Bug Components: nodemanager, resourcemanager Affects Versions: 3.0.0-beta1 Reporter: Sunil G Currently an admin has to make the following config changes to enable CORS in YARN. {code} 1. Add org.apache.hadoop.security.HttpCrossOriginFilterInitializer to hadoop.http.filter.initializers 2. Set hadoop.http.cross-origin.enabled to true 3. Set hadoop.http.cross-origin.allowed-methods to GET,HEAD 4. Set yarn.nodemanager.webapp.cross-origin.enabled to true 5. Set yarn.resourcemanager.webapp.cross-origin.enabled to true {code} For better usability, we could enable these configs by default in YARN. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
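Spelled out, the five settings above land in core-site.xml and yarn-site.xml. A sketch of what that looks like, with property names exactly as listed in the message and values illustrating a minimal setup:

```xml
<!-- core-site.xml -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <!-- append to any existing initializers, comma-separated -->
  <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
</property>
<property>
  <name>hadoop.http.cross-origin.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-methods</name>
  <value>GET,HEAD</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.webapp.cross-origin.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.cross-origin.enabled</name>
  <value>true</value>
</property>
```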
[jira] [Resolved] (YARN-5402) Fix NoSuchMethodError in ClusterMetricsInfo
[ https://issues.apache.org/jira/browse/YARN-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Francke resolved YARN-5402. Resolution: Fixed Fix Version/s: 2.7.5 > Fix NoSuchMethodError in ClusterMetricsInfo > --- > > Key: YARN-5402 > URL: https://issues.apache.org/jira/browse/YARN-5402 > Project: Hadoop YARN > Issue Type: Sub-task > Components: webapp >Affects Versions: YARN-3368, 2.7.4 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Fix For: 2.7.5 > > Attachments: YARN-5402.YARN-3368.001.patch > > > When trying out new UI on a cluster, the index page failed to load because of > error {code}java.lang.NoSuchMethodError: > org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.getReservedMB()J{code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org