Re: Requesting write access for Confluence Wiki
Done.

-Akira

On 2017/07/18 8:12, Aaron Fabbri wrote:
> Hi, I'd like to create a troubleshooting page on the Hadoop Confluence
> wiki for HADOOP-14467. Can someone please grant me access? Account under
> fab...@apache.org.

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/467/

[Jul 16, 2017 10:59:34 AM] (aengineer) HDFS-11786. Add support to make copyFromLocal multi threaded.
[Jul 17, 2017 1:05:15 AM] (sunilg) MAPREDUCE-6889. Add Job#close API to shutdown MR client services.
[Jul 17, 2017 2:27:55 AM] (jitendra) HADOOP-14640. Azure: Support affinity for service running on localhost

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs :

       module:hadoop-hdfs-project/hadoop-hdfs-client
       Possible exposure of partially initialized object in org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At DFSClient.java:[line 2888]
       org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) makes inefficient use of keySet iterator instead of entrySet iterator At SlowDiskReports.java:[line 105]

    FindBugs :

       module:hadoop-hdfs-project/hadoop-hdfs
       Possible null pointer dereference in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to return value of called method Dereferenced at JournalNode.java:[line 302]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) unconditionally sets the field force At HdfsServerConstants.java:[line 217]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List) due to return value of called method Dereferenced at DataStorage.java:[line 1339]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long) due to return value of called method Dereferenced at NNStorageRetentionManager.java:[line 258]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, BasicFileAttributes) due to return value of called method Dereferenced at NNUpgradeUtil.java:[line 133]
       Useless condition: argv.length >= 1 at this point At DFSAdmin.java:[line 2085]
       Useless condition: numBlocks == -1 at this point At ImageLoaderCurrent.java:[line 727]

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
       Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 642]
       org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 719]
       Hard coded reference to an absolute pathname in
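Two of the FindBugs items above (SlowDiskReports.equals and removeVeryOldStoppedContainersFromCache) flag the same performance pattern: iterating a map's keySet() and calling get() for each key, which costs a redundant hash lookup per entry. A minimal standalone sketch of the flagged pattern and the fix, using a hypothetical map rather than the actual Hadoop code:

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    public static void main(String[] args) {
        Map<String, Integer> diskLatencies = new HashMap<>();
        diskLatencies.put("disk1", 5);
        diskLatencies.put("disk2", 9);

        // Flagged pattern: iterate keySet(), then look each value up again.
        // Every get() repeats the hashing work the iterator already did.
        int slowTotal = 0;
        for (String disk : diskLatencies.keySet()) {
            slowTotal += diskLatencies.get(disk);
        }

        // Preferred: entrySet() yields key and value together, one pass.
        int fastTotal = 0;
        for (Map.Entry<String, Integer> e : diskLatencies.entrySet()) {
            fastTotal += e.getValue();
        }

        // Both loops compute the same sum; only the lookup cost differs.
        System.out.println(slowTotal + " == " + fastTotal);
    }
}
```

The same rewrite applies whenever the loop body needs the value: switch the iteration source from keySet() to entrySet() and read the value off the entry.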
Requesting write access for Confluence Wiki
Hi, I'd like to create a troubleshooting page on the Hadoop Confluence wiki for HADOOP-14467. Can someone please grant me access? Account under fab...@apache.org.
Re: zstd compression
I think we are OK to leave support for the zstd codec in the Hadoop code base.

I asked Chris Mattmann for clarification, noting that support for the zstd codec requires the user to install the zstd headers and libraries and then configure it to be included in the native Hadoop build. The Hadoop releases do not ship any zstd code (e.g., headers or libraries), nor is zstd a mandatory dependency. Here's what he said:

On Monday, July 17, 2017 11:07 AM, Chris Mattmann wrote:
> Hi Jason,
>
> This sounds like an optional dependency on a Cat-X software. This isn't the only type of compression
> that is allowed within Hadoop, correct? If it is truly optional and you have gone to that level of detail
> below to make the user opt in, and if we are not shipping zstd with our products (source code releases),
> then this is an acceptable usage.
>
> Cheers,
> Chris

So I think we are in the clear with respect to zstd usage as long as we keep it as an optional codec where the user needs to get the headers and libraries for zstd and configure it into the native Hadoop build.

Jason

On Monday, July 17, 2017 9:44 AM, Sean Busbey wrote:
I know that the HBase community is also looking at what to do about our inclusion of zstd. We've had it in releases since late 2016. My plan was to request that they relicense it. Perhaps the Hadoop PMC could join HBase in the request?

On Sun, Jul 16, 2017 at 8:11 PM, Allen Wittenauer wrote:
>
> It looks like HADOOP-13578 added Facebook's zstd compression codec.
> Unfortunately, that codec is using the same 3-clause BSD (LICENSE file) +
> patent grant license (PATENTS file) that React is using and RocksDB was using.
>
> Should that code get reverted?

--
busbey
[jira] [Created] (HADOOP-14666) Tests use assertTrue(....equals(...)) instead of assertEquals()
Daniel Templeton created HADOOP-14666:
-----------------------------------------

             Summary: Tests use assertTrue(....equals(...)) instead of assertEquals()
                 Key: HADOOP-14666
                 URL: https://issues.apache.org/jira/browse/HADOOP-14666
             Project: Hadoop Common
          Issue Type: Improvement
          Components: test
    Affects Versions: 3.0.0-alpha4, 2.8.1
            Reporter: Daniel Templeton
            Assignee: Daniel Templeton
            Priority: Minor

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
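The cleanup this JIRA describes replaces assertions of the form assertTrue(a.equals(b)) with assertEquals(a, b), so that a failure reports both values instead of just "expected true but was false". A standalone sketch of the two styles, with minimal stand-ins for JUnit's assertion methods so it runs without the JUnit dependency (the real patch would use org.junit.Assert):

```java
public class AssertStyleDemo {
    // Stand-ins mimicking JUnit's failure messages, so the sketch is
    // self-contained; the actual tests would import org.junit.Assert.
    static void assertTrue(boolean cond) {
        if (!cond) throw new AssertionError("expected true but was false");
    }
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError(
                "expected <" + expected + "> but was <" + actual + ">");
    }

    public static void main(String[] args) {
        String expected = "hdfs://nn:8020";
        String actual = "hdfs://nn:8020";

        // Pattern the JIRA flags: on failure, this hides both values.
        assertTrue(expected.equals(actual));

        // Preferred: on failure, the message shows expected vs. actual.
        assertEquals(expected, actual);

        System.out.println("both assertions pass");
    }
}
```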
[jira] [Created] (HADOOP-14665) Support comments in auth_to_local mapping rules
Hari Sekhon created HADOOP-14665:
------------------------------------

             Summary: Support comments in auth_to_local mapping rules
                 Key: HADOOP-14665
                 URL: https://issues.apache.org/jira/browse/HADOOP-14665
             Project: Hadoop Common
          Issue Type: Improvement
    Affects Versions: 2.7.3
         Environment: HDP 2.6.0 + Kerberos
            Reporter: Hari Sekhon

Request to add support for #-prefixed comment lines in Hadoop's auth_to_local mapping rules, so that I can comment inline on what rules I've added and why, as with code (useful when supporting multi-directory mappings).

It should be fairly simple to implement: strip each line from # to the end, trim whitespace, and then exclude all blank/whitespace-only lines. I do this in tools I write all the time.
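The preprocessing the request describes (strip from # to end of line, trim, drop blank lines) can be sketched as follows. This is a hypothetical standalone helper, not Hadoop's actual KerberosName rule parser, and it assumes a rule never legitimately contains a # character:

```java
import java.util.ArrayList;
import java.util.List;

public class RuleCommentStripper {
    // Hypothetical helper: removes '#' comments and blank lines from a
    // newline-separated auth_to_local rules string.
    static List<String> stripComments(String rulesText) {
        List<String> rules = new ArrayList<>();
        for (String line : rulesText.split("\n")) {
            int hash = line.indexOf('#');
            if (hash >= 0) {
                line = line.substring(0, hash); // drop from '#' to end of line
            }
            line = line.trim();                 // trim surrounding whitespace
            if (!line.isEmpty()) {              // exclude blank lines
                rules.add(line);
            }
        }
        return rules;
    }

    public static void main(String[] args) {
        String text = "# map admins from the CORP realm\n"
                + "RULE:[1:$1@$0](.*@CORP.EXAMPLE.COM)s/@.*//  # inline note\n"
                + "\n"
                + "DEFAULT\n";
        // Only the two rule lines survive, with comments removed.
        System.out.println(stripComments(text));
    }
}
```

One caveat a real implementation would need to settle: auth_to_local sed expressions can use arbitrary delimiter characters, so the parser would have to define what happens if # ever appears inside a rule body.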
Re: zstd compression
I know that the HBase community is also looking at what to do about our inclusion of zstd. We've had it in releases since late 2016. My plan was to request that they relicense it. Perhaps the Hadoop PMC could join HBase in the request?

On Sun, Jul 16, 2017 at 8:11 PM, Allen Wittenauer wrote:
>
> It looks like HADOOP-13578 added Facebook's zstd compression codec.
> Unfortunately, that codec is using the same 3-clause BSD (LICENSE file) +
> patent grant license (PATENTS file) that React is using and RocksDB was using.
>
> Should that code get reverted?

--
busbey