[jira] [Created] (HADOOP-14634) Remove jline from main Hadoop pom.xml
Ray Chiang created HADOOP-14634:
-----------------------------------

             Summary: Remove jline from main Hadoop pom.xml
                 Key: HADOOP-14634
                 URL: https://issues.apache.org/jira/browse/HADOOP-14634
             Project: Hadoop Common
          Issue Type: Bug
    Affects Versions: 3.0.0-alpha4
            Reporter: Ray Chiang
            Assignee: Ray Chiang

A long time ago, HADOOP-9342 removed jline from being included in the Hadoop distribution. Since then, more modules have added Zookeeper and are pulling in jline again.

Recommend excluding jline from the main Hadoop pom in order to prevent subsequent additions of Zookeeper dependencies from doing this again.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
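For readers unfamiliar with the mechanism: the usual way to do this is an exclusion on the ZooKeeper entry in the parent pom's dependencyManagement. A rough sketch (the exact coordinates and location in hadoop-project/pom.xml may differ from what the eventual patch does):

```xml
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <exclusions>
    <!-- Keep transitive jline out of the Hadoop distribution -->
    <exclusion>
      <groupId>jline</groupId>
      <artifactId>jline</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Because child modules inherit dependencyManagement, any module that later adds a ZooKeeper dependency would pick up the exclusion automatically.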
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/368/

[Jul 6, 2017 2:40:09 PM] (jlowe) YARN-6708. Nodemanager container crash after ext3 folder limit.
[Jul 7, 2017 6:00:47 AM] (aajisaka) HADOOP-14587. Use GenericTestUtils.setLogLevel when available in

-1 overall

The following subsystems voted -1:
    compile mvninstall unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :

       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
       hadoop.hdfs.TestSafeMode
       hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
       hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
       hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
       hadoop.yarn.server.timeline.TestRollingLevelDB
       hadoop.yarn.server.timeline.TestTimelineDataManager
       hadoop.yarn.server.timeline.TestLeveldbTimelineStore
       hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
       hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
       hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
       hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
       hadoop.yarn.server.resourcemanager.TestRMRestart
       hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
       hadoop.yarn.server.TestContainerManagerSecurity
       hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
       hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
       hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
       hadoop.yarn.applications.distributedshell.TestDistributedShell
       hadoop.mapred.TestShuffleHandler
       hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
       hadoop.yarn.sls.nodemanager.TestNMSimulator

    Timed out junit tests :

       org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
       org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
       org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
       org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA

   mvninstall:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/368/artifact/out/patch-mvninstall-root.txt [620K]

   compile:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/368/artifact/out/patch-compile-root.txt [20K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/368/artifact/out/patch-compile-root.txt [20K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/368/artifact/out/patch-compile-root.txt [20K]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/368/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/368/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [924K]
[jira] [Created] (HADOOP-14633) S3Guard: optimize create codepath
Aaron Fabbri created HADOOP-14633:
-------------------------------------

             Summary: S3Guard: optimize create codepath
                 Key: HADOOP-14633
                 URL: https://issues.apache.org/jira/browse/HADOOP-14633
             Project: Hadoop Common
          Issue Type: Sub-task
         Environment: 
            Reporter: Aaron Fabbri
            Assignee: Aaron Fabbri
            Priority: Minor

Following up on HADOOP-14457, there are a couple of things to do that will improve create performance, as mentioned in the comment [here|https://issues.apache.org/jira/browse/HADOOP-14457?focusedCommentId=16078465&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16078465]
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
+1 (non-binding)

Tested by building from the binary tar ball.
- Ran a few MR jobs with node labels
- Verified old and new YARN UI
- Checked preemption as well

Thanks
Sunil

On Fri, Jun 30, 2017 at 8:11 AM Andrew Wang wrote:

> Hi all,
>
> As always, thanks to the many, many contributors who helped with this
> release! I've prepared an RC0 for 3.0.0-alpha4:
>
> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>
> The standard 5-day vote would run until midnight on Tuesday, July 4th.
> Given that July 4th is a holiday in the US, I expect this vote might have
> to be extended, but I'd like to close the vote relatively soon after.
>
> I've done my traditional testing of a pseudo-distributed cluster with a
> single task pi job, which was successful.
>
> Normally my testing would end there, but I'm slightly more confident this
> time. At Cloudera, we've successfully packaged and deployed a snapshot from
> a few days ago, and run basic smoke tests. Some bugs found from this
> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients, and
> the revert of HDFS-11696, which broke NN QJM HA setup.
>
> Vijay is working on a test run with a fuller test suite (the results of
> which we can hopefully post soon).
>
> My +1 to start,
>
> Best,
> Andrew
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks again everyone for voting! I'm going to close this vote. With 4 binding +1s and 8 non-binding +1s, the vote passes. I'll go ahead and push out the release.

Best,
Andrew

On Fri, Jul 7, 2017 at 10:35 AM, Andrew Wang wrote:

> Hi Mike, the artifacts are staged on Nexus:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1060/
>
> Best,
> Andrew
>
> On Fri, Jul 7, 2017 at 1:34 AM, Mike Drob wrote:
>
>> Hi Andrew,
>>
>> Are there maven artifacts available for this RC?
>>
>> Thanks,
>> Mike
>>
>> On 2017-06-29 19:40 (-0700), Andrew Wang wrote:
>> > Hi all,
>> >
>> > As always, thanks to the many, many contributors who helped with this
>> > release! I've prepared an RC0 for 3.0.0-alpha4:
>> >
>> > http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>> >
>> > The standard 5-day vote would run until midnight on Tuesday, July 4th.
>> > Given that July 4th is a holiday in the US, I expect this vote might
>> > have to be extended, but I'd like to close the vote relatively soon after.
>> >
>> > I've done my traditional testing of a pseudo-distributed cluster with a
>> > single task pi job, which was successful.
>> >
>> > Normally my testing would end there, but I'm slightly more confident
>> > this time. At Cloudera, we've successfully packaged and deployed a
>> > snapshot from a few days ago, and run basic smoke tests. Some bugs
>> > found from this include HDFS-11956, which fixes backwards compat with
>> > Hadoop 2 clients, and the revert of HDFS-11696, which broke NN QJM HA setup.
>> >
>> > Vijay is working on a test run with a fuller test suite (the results of
>> > which we can hopefully post soon).
>> >
>> > My +1 to start,
>> >
>> > Best,
>> > Andrew
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Hi Mike, the artifacts are staged on Nexus:

https://repository.apache.org/content/repositories/orgapachehadoop-1060/

Best,
Andrew

On Fri, Jul 7, 2017 at 1:34 AM, Mike Drob wrote:

> Hi Andrew,
>
> Are there maven artifacts available for this RC?
>
> Thanks,
> Mike
>
> On 2017-06-29 19:40 (-0700), Andrew Wang wrote:
> > Hi all,
> >
> > As always, thanks to the many, many contributors who helped with this
> > release! I've prepared an RC0 for 3.0.0-alpha4:
> >
> > http://home.apache.org/~wang/3.0.0-alpha4-RC0/
> >
> > The standard 5-day vote would run until midnight on Tuesday, July 4th.
> > Given that July 4th is a holiday in the US, I expect this vote might have
> > to be extended, but I'd like to close the vote relatively soon after.
> >
> > I've done my traditional testing of a pseudo-distributed cluster with a
> > single task pi job, which was successful.
> >
> > Normally my testing would end there, but I'm slightly more confident this
> > time. At Cloudera, we've successfully packaged and deployed a snapshot
> > from a few days ago, and run basic smoke tests. Some bugs found from this
> > include HDFS-11956, which fixes backwards compat with Hadoop 2 clients,
> > and the revert of HDFS-11696, which broke NN QJM HA setup.
> >
> > Vijay is working on a test run with a fuller test suite (the results of
> > which we can hopefully post soon).
> >
> > My +1 to start,
> >
> > Best,
> > Andrew
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/457/

[Jul 6, 2017 2:40:09 PM] (jlowe) YARN-6708. Nodemanager container crash after ext3 folder limit.
[Jul 7, 2017 6:00:47 AM] (aajisaka) HADOOP-14587. Use GenericTestUtils.setLogLevel when available in

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs :

       module:hadoop-hdfs-project/hadoop-hdfs-client
       Possible exposure of partially initialized object in org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At DFSClient.java:[line 2888]
       org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) makes inefficient use of keySet iterator instead of entrySet iterator At SlowDiskReports.java:[line 105]

    FindBugs :

       module:hadoop-hdfs-project/hadoop-hdfs
       Possible null pointer dereference in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to return value of called method Dereferenced at JournalNode.java:[line 302]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) unconditionally sets the field force At HdfsServerConstants.java:[line 217]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List) due to return value of called method Dereferenced at DataStorage.java:[line 1339]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long) due to return value of called method Dereferenced at NNStorageRetentionManager.java:[line 258]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, BasicFileAttributes) due to return value of called method Dereferenced at NNUpgradeUtil.java:[line 133]
       Useless condition: argv.length >= 1 at this point At DFSAdmin.java:[line 2085]
       Useless condition: numBlocks == -1 at this point At ImageLoaderCurrent.java:[line 727]

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
       Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 642]
       org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 719]
       Hard coded reference to an absolute pathname in org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
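For readers unfamiliar with the "inefficient use of keySet iterator" pattern FindBugs flags above: iterating keySet() and then calling get() per key performs a second hash lookup on every iteration, which entrySet() avoids. A minimal self-contained illustration (not Hadoop code):

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    // Flagged pattern: one extra map lookup per iteration.
    static int sumViaKeySet(Map<String, Integer> m) {
        int sum = 0;
        for (String key : m.keySet()) {
            sum += m.get(key);  // second hash lookup for every key
        }
        return sum;
    }

    // Preferred form: entrySet() yields key and value in a single pass.
    static int sumViaEntrySet(Map<String, Integer> m) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }
}
```

Both methods return the same result; only the per-entry lookup cost differs.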
[jira] [Resolved] (HADOOP-14626) NoSuchMethodError in org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy
[ https://issues.apache.org/jira/browse/HADOOP-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

saurab resolved HADOOP-14626.
-----------------------------
    Resolution: Fixed

This error arose due to jar file incompatibility. I had hadoop-2.8.0, but hadoop-hdfs-client-2.8.0 and hadoop-hdfs-2.6.0 were both trying to call the same method. It was resolved after I deleted hadoop-hdfs-2.6.0. If anyone else runs into the same problem, make sure you have compatible jars.

> NoSuchMethodError in
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy
> -----------------------------------------------------------------
>
>                 Key: HADOOP-14626
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14626
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: saurab
>            Priority: Minor
[jira] [Created] (HADOOP-14632) add buffersize to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.
Hongyuan Li created HADOOP-14632:
------------------------------------

             Summary: add buffersize to SFTPFileSystem#create and SFTPFileSystem#open method, which can improve the transfer speed.
                 Key: HADOOP-14632
                 URL: https://issues.apache.org/jira/browse/HADOOP-14632
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Hongyuan Li

Add a buffer size to the SFTPFileSystem#create and SFTPFileSystem#open methods, which can improve the transfer speed. A test example shows that transfer performance improves significantly.
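The general idea can be sketched with plain JDK streams (this is an illustration of explicitly sized buffering, not the actual SFTPFileSystem patch; how the buffer size is wired into open/create may differ in the final change):

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BufferedCopyDemo {
    // Copy through buffered streams with an explicit buffer size, analogous
    // to passing a bufferSize down into a filesystem's open()/create().
    static long copy(InputStream rawIn, OutputStream rawOut, int bufferSize)
            throws IOException {
        try (InputStream in = new BufferedInputStream(rawIn, bufferSize);
             OutputStream out = new BufferedOutputStream(rawOut, bufferSize)) {
            byte[] chunk = new byte[bufferSize];
            long total = 0;
            int n;
            while ((n = in.read(chunk)) != -1) {
                out.write(chunk, 0, n);
                total += n;
            }
            out.flush();
            return total;
        }
    }
}
```

Larger buffers mean fewer round trips to the underlying channel, which is where the speedup over byte-at-a-time or small default buffers comes from on a high-latency transport like SFTP.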
[jira] [Created] (HADOOP-14631) Distcp should add a default AtomicWorkPath properties when using atomic
Hongyuan Li created HADOOP-14631:
------------------------------------

             Summary: Distcp should add a default AtomicWorkPath properties when using atomic
                 Key: HADOOP-14631
                 URL: https://issues.apache.org/jira/browse/HADOOP-14631
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Hongyuan Li

Distcp should add a default AtomicWorkPath property when using atomic commit. {{DistCp}}#{{configureOutputFormat}} uses the code below to generate the atomic work path:

{code}
if (context.shouldAtomicCommit()) {
  Path workDir = context.getAtomicWorkPath();
  if (workDir == null) {
    workDir = targetPath.getParent();
  }
  workDir = new Path(workDir, WIP_PREFIX + targetPath.getName() + rand.nextInt());
{code}

When atomic is set and the AtomicWorkPath is null, distcp falls back to the parent of the current work dir. In this case, if {{workDir}} is {{"/"}}, the parent will be {{null}}, which means {{workDir = new Path(workDir, WIP_PREFIX + targetPath.getName() + rand.nextInt());}} will throw a NullPointerException.
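The same pitfall is easy to reproduce with the JDK's own path API (a standalone illustration using java.nio, not Hadoop's Path class; the "._WIP_" prefix and method name here are illustrative): the parent of the root path is null, so any code that resolves against getParent() unconditionally needs a guard.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RootParentDemo {
    // Mirrors the distcp fallback: use the parent of the work dir, but
    // guard against the null parent that the root path returns.
    static Path workPathFor(Path workDir, String targetName) {
        Path parent = workDir.getParent();
        if (parent == null) {
            parent = workDir;  // "/" has no parent; stay at the root
        }
        return parent.resolve("._WIP_" + targetName);
    }
}
```

Without the null check, workPathFor(Paths.get("/"), ...) would throw a NullPointerException at the resolve call, which is the crash described above.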
[jira] [Created] (HADOOP-14630) Extend AbstractContractCreateTest with some corner cases
Steve Loughran created HADOOP-14630:
---------------------------------------

             Summary: Extend AbstractContractCreateTest with some corner cases
                 Key: HADOOP-14630
                 URL: https://issues.apache.org/jira/browse/HADOOP-14630
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs, fs/azure, fs/s3
    Affects Versions: 2.9.0
            Reporter: Steve Loughran

Object stores can get into trouble in ways an FS never would, ways so obvious we've never written tests for them. We know what the problems are: test for file and dir creation directly/indirectly under other files.

* mkdir(file/file)
* mkdir(file/subdir)
* dir under file/subdir/subdir
* dir/dir2/file, verify dir & dir2 exist
* dir/dir2/dir3, verify dir & dir2 exist
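The first corner case above, mkdir(file/subdir), can be sketched against the local filesystem with java.nio (the real tests would extend AbstractContractCreateTest and run against each store's contract; this is just a standalone illustration of the expected behavior):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirUnderFileDemo {
    // Expected contract behavior: creating a directory *under* an
    // existing file must fail. A real FS enforces this; an object store
    // with no true directories might silently "succeed".
    static boolean mkdirUnderFileFails() throws IOException {
        Path dir = Files.createTempDirectory("contract");
        Path file = Files.createFile(dir.resolve("file"));
        try {
            Files.createDirectory(file.resolve("subdir"));  // mkdir(file/subdir)
            return false;  // should never get here on a real FS
        } catch (IOException expected) {
            return true;   // the local FS correctly rejects it
        }
    }
}
```

The object-store cases are interesting precisely because a store backed by a flat key space may not reject these operations unless the FileSystem implementation checks for a parent file explicitly.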
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Hi Andrew,

Are there maven artifacts available for this RC?

Thanks,
Mike

On 2017-06-29 19:40 (-0700), Andrew Wang wrote:

> Hi all,
>
> As always, thanks to the many, many contributors who helped with this
> release! I've prepared an RC0 for 3.0.0-alpha4:
>
> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>
> The standard 5-day vote would run until midnight on Tuesday, July 4th.
> Given that July 4th is a holiday in the US, I expect this vote might have
> to be extended, but I'd like to close the vote relatively soon after.
>
> I've done my traditional testing of a pseudo-distributed cluster with a
> single task pi job, which was successful.
>
> Normally my testing would end there, but I'm slightly more confident this
> time. At Cloudera, we've successfully packaged and deployed a snapshot from
> a few days ago, and run basic smoke tests. Some bugs found from this
> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients, and
> the revert of HDFS-11696, which broke NN QJM HA setup.
>
> Vijay is working on a test run with a fuller test suite (the results of
> which we can hopefully post soon).
>
> My +1 to start,
>
> Best,
> Andrew
[jira] [Created] (HADOOP-14628) Upgrade maven enforce plugin to 3.0.0
Akira Ajisaka created HADOOP-14628:
--------------------------------------

             Summary: Upgrade maven enforce plugin to 3.0.0
                 Key: HADOOP-14628
                 URL: https://issues.apache.org/jira/browse/HADOOP-14628
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Akira Ajisaka

The Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's upgrade the version to 3.0.0 when it is released.