hadoop git commit: HDFS-12363. Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages. Contributed by Xiao Chen

2017-08-31 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/trunk 7ecc6dbed -> 1fbb662c7


HDFS-12363. Possible NPE in 
BlockManager$StorageInfoDefragmenter#scanAndCompactStorages. Contributed by 
Xiao Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1fbb662c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1fbb662c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1fbb662c

Branch: refs/heads/trunk
Commit: 1fbb662c7092d08a540acff7e92715693412e486
Parents: 7ecc6db
Author: Mingliang Liu 
Authored: Thu Aug 31 22:36:56 2017 -0700
Committer: Mingliang Liu 
Committed: Thu Aug 31 22:36:56 2017 -0700

--
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fbb662c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 6129db8..e83cbc6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -4487,8 +4487,12 @@ public class BlockManager implements BlockStatsMXBean {
         for (int i = 0; i < datanodesAndStorages.size(); i += 2) {
           namesystem.writeLock();
           try {
-            DatanodeStorageInfo storage = datanodeManager.
-                getDatanode(datanodesAndStorages.get(i)).
+            final DatanodeDescriptor dn = datanodeManager.
+                getDatanode(datanodesAndStorages.get(i));
+            if (dn == null) {
+              continue;
+            }
+            final DatanodeStorageInfo storage = dn.
                 getStorageInfo(datanodesAndStorages.get(i + 1));
             if (storage != null) {
               boolean aborted =
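
The null check added above matters because the defragmenter walks a
previously collected list of (datanode, storage) id pairs and re-acquires the
namesystem write lock on every iteration, so a datanode can be unregistered in
between and getDatanode() can return null. A minimal, self-contained sketch of
the same skip-if-missing pattern (the class, lock, and maps below are
illustrative stand-ins, not the real BlockManager internals):

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    public class DefragScanSketch {
      // datanode id -> (storage id -> storage); stand-in for the real structures
      private final Map<String, Map<String, Object>> datanodes =
          new ConcurrentHashMap<>();
      private final ReentrantLock writeLock = new ReentrantLock();

      void scanAndCompact(List<String> datanodesAndStorages) {
        for (int i = 0; i < datanodesAndStorages.size(); i += 2) {
          writeLock.lock();                 // lock is re-taken for every pair
          try {
            Map<String, Object> dn = datanodes.get(datanodesAndStorages.get(i));
            if (dn == null) {
              continue;                     // datanode gone since ids were collected
            }
            Object storage = dn.get(datanodesAndStorages.get(i + 1));
            if (storage != null) {
              // compact the storage's block list here
            }
          } finally {
            writeLock.unlock();             // still runs when we hit continue
          }
        }
      }

      public static void main(String[] args) {
        new DefragScanSketch().scanAndCompact(Arrays.asList("dn-1", "storage-1"));
      }
    }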


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-14824. Update ADLS SDK to 2.2.2 for MSI fix. Contributed by Atul Sikaria.

2017-08-31 Thread jzhuge
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 d665d8568 -> b2f496bdc


HADOOP-14824. Update ADLS SDK to 2.2.2 for MSI fix. Contributed by Atul Sikaria.

(cherry picked from commit 7ecc6dbed62c80397f71949bee41dcd03065755c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b2f496bd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b2f496bd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b2f496bd

Branch: refs/heads/branch-2.8
Commit: b2f496bdcc41584703c52d942b57dc35605cb5b3
Parents: d665d85
Author: John Zhuge 
Authored: Thu Aug 31 21:16:58 2017 -0700
Committer: John Zhuge 
Committed: Thu Aug 31 21:16:58 2017 -0700

--
 hadoop-tools/hadoop-azure-datalake/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2f496bd/hadoop-tools/hadoop-azure-datalake/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure-datalake/pom.xml 
b/hadoop-tools/hadoop-azure-datalake/pom.xml
index 1a9c5d6..d499796 100644
--- a/hadoop-tools/hadoop-azure-datalake/pom.xml
+++ b/hadoop-tools/hadoop-azure-datalake/pom.xml
@@ -121,7 +121,7 @@
 
       <groupId>com.microsoft.azure</groupId>
       <artifactId>azure-data-lake-store-sdk</artifactId>
-      <version>2.2.1</version>
+      <version>2.2.2</version>
 
 
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-14824. Update ADLS SDK to 2.2.2 for MSI fix. Contributed by Atul Sikaria.

2017-08-31 Thread jzhuge
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 41d8e4e9b -> 2442a8d71


HADOOP-14824. Update ADLS SDK to 2.2.2 for MSI fix. Contributed by Atul Sikaria.

(cherry picked from commit 7ecc6dbed62c80397f71949bee41dcd03065755c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2442a8d7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2442a8d7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2442a8d7

Branch: refs/heads/branch-2
Commit: 2442a8d716859efce601bd318f84a2b403475afa
Parents: 41d8e4e
Author: John Zhuge 
Authored: Thu Aug 31 21:13:50 2017 -0700
Committer: John Zhuge 
Committed: Thu Aug 31 21:13:50 2017 -0700

--
 hadoop-tools/hadoop-azure-datalake/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2442a8d7/hadoop-tools/hadoop-azure-datalake/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure-datalake/pom.xml 
b/hadoop-tools/hadoop-azure-datalake/pom.xml
index 0e2ee73..ebba119 100644
--- a/hadoop-tools/hadoop-azure-datalake/pom.xml
+++ b/hadoop-tools/hadoop-azure-datalake/pom.xml
@@ -121,7 +121,7 @@
 
       <groupId>com.microsoft.azure</groupId>
       <artifactId>azure-data-lake-store-sdk</artifactId>
-      <version>2.2.1</version>
+      <version>2.2.2</version>
 
 
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-14824. Update ADLS SDK to 2.2.2 for MSI fix. Contributed by Atul Sikaria.

2017-08-31 Thread jzhuge
Repository: hadoop
Updated Branches:
  refs/heads/trunk 27359b713 -> 7ecc6dbed


HADOOP-14824. Update ADLS SDK to 2.2.2 for MSI fix. Contributed by Atul Sikaria.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7ecc6dbe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7ecc6dbe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7ecc6dbe

Branch: refs/heads/trunk
Commit: 7ecc6dbed62c80397f71949bee41dcd03065755c
Parents: 27359b7
Author: John Zhuge 
Authored: Thu Aug 31 21:04:12 2017 -0700
Committer: John Zhuge 
Committed: Thu Aug 31 21:13:22 2017 -0700

--
 hadoop-tools/hadoop-azure-datalake/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7ecc6dbe/hadoop-tools/hadoop-azure-datalake/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure-datalake/pom.xml 
b/hadoop-tools/hadoop-azure-datalake/pom.xml
index 47f12df..f699464 100644
--- a/hadoop-tools/hadoop-azure-datalake/pom.xml
+++ b/hadoop-tools/hadoop-azure-datalake/pom.xml
@@ -110,7 +110,7 @@
 
       <groupId>com.microsoft.azure</groupId>
       <artifactId>azure-data-lake-store-sdk</artifactId>
-      <version>2.2.1</version>
+      <version>2.2.2</version>
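
For anyone verifying that the bumped SDK is actually the one on the runtime
classpath, reading the jar manifest is a quick check. A hedged sketch: it
assumes the SDK's entry point class is
com.microsoft.azure.datalake.store.ADLStoreClient and that the published jar
carries an Implementation-Version manifest attribute; neither detail is stated
in this commit.

    import com.microsoft.azure.datalake.store.ADLStoreClient;

    public class AdlsSdkVersionCheck {
      public static void main(String[] args) {
        // Reads the Implementation-Version the SDK jar declares in its manifest.
        Package sdk = ADLStoreClient.class.getPackage();
        String version = (sdk == null) ? null : sdk.getImplementationVersion();
        System.out.println("azure-data-lake-store-sdk on classpath: "
            + (version == null ? "unknown (no manifest version)" : version));
      }
    }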
 
 
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-14781. Clarify that HADOOP_CONF_DIR shouldn't actually be set in hadoop-env.sh

2017-08-31 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0adc3a053 -> 27359b713


HADOOP-14781. Clarify that HADOOP_CONF_DIR shouldn't actually be set in 
hadoop-env.sh

Signed-off-by: Andrew Wang 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/27359b71
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/27359b71
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/27359b71

Branch: refs/heads/trunk
Commit: 27359b713982b480d456067e4a71bf0c4ffb1df2
Parents: 0adc3a0
Author: Allen Wittenauer 
Authored: Tue Aug 29 10:10:56 2017 -0700
Committer: Allen Wittenauer 
Committed: Thu Aug 31 21:10:52 2017 -0700

--
 .../hadoop-common/src/main/conf/hadoop-env.sh| 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/27359b71/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
--
diff --git a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh 
b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
index fbc7bc3..bef4dab 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
+++ b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
@@ -58,10 +58,13 @@
 # export HADOOP_HOME=
 
 # Location of Hadoop's configuration information.  i.e., where this
-# file is probably living. Many sites will also set this in the
-# same location where JAVA_HOME is defined.  If this is not defined
-# Hadoop will attempt to locate it based upon its execution
-# path.
+# file is living. If this is not defined, Hadoop will attempt to
+# locate it based upon its execution path.
+#
+# NOTE: It is recommend that this variable not be set here but in
+# /etc/profile.d or equivalent.  Some options (such as
+# --config) may react strangely otherwise.
+#
 # export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
 
 # The maximum amount of heap to use (Java -Xmx).  If no unit
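
The rewritten comment describes a simple resolution order: honor
HADOOP_CONF_DIR when the environment provides it (preferably exported from
/etc/profile.d or equivalent), otherwise fall back to the install layout or
the script's execution path. Purely as an illustration of that order, a small
Java sketch; Hadoop itself implements this in its shell scripts, not in Java.

    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ConfDirSketch {
      // Mirrors the order described above: environment first, layout second.
      static Path resolveConfDir(Path hadoopHome) {
        String fromEnv = System.getenv("HADOOP_CONF_DIR");
        if (fromEnv != null && !fromEnv.isEmpty()) {
          return Paths.get(fromEnv);             // e.g. set in /etc/profile.d
        }
        return hadoopHome.resolve("etc").resolve("hadoop");  // default layout
      }

      public static void main(String[] args) {
        System.out.println(resolveConfDir(Paths.get("/opt/hadoop")));
      }
    }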


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[55/73] [abbrv] hadoop git commit: HDFS-10629. Federation Router. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/25a1cad0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
new file mode 100644
index 000..ee6f57d
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
@@ -0,0 +1,290 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.NamenodePriorityComparator;
+import org.apache.hadoop.hdfs.server.federation.resolver.NamenodeStatusReport;
+import org.apache.hadoop.hdfs.server.federation.resolver.PathLocation;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.util.Time;
+
+/**
+ * In-memory cache/mock of a namenode and file resolver. Stores the most
+ * recently updated NN information for each nameservice and block pool. Also
+ * stores a virtual mount table for resolving global namespace paths to local 
NN
+ * paths.
+ */
+public class MockResolver
+implements ActiveNamenodeResolver, FileSubclusterResolver {
+
+  private Map<String, List<? extends FederationNamenodeContext>> resolver =
+      new HashMap<String, List<? extends FederationNamenodeContext>>();
+  private Map<String, List<RemoteLocation>> locations =
+      new HashMap<String, List<RemoteLocation>>();
+  private Set<FederationNamespaceInfo> namespaces =
+      new HashSet<FederationNamespaceInfo>();
+  private String defaultNamespace = null;
+
+  public MockResolver(Configuration conf, StateStoreService store) {
+    this.cleanRegistrations();
+  }
+
+  public void addLocation(String mount, String nameservice, String location) {
+    RemoteLocation remoteLocation = new RemoteLocation(nameservice, location);
+    List<RemoteLocation> locationsList = locations.get(mount);
+    if (locationsList == null) {
+      locationsList = new LinkedList<RemoteLocation>();
+      locations.put(mount, locationsList);
+    }
+    if (!locationsList.contains(remoteLocation)) {
+      locationsList.add(remoteLocation);
+    }
+
+    if (this.defaultNamespace == null) {
+      this.defaultNamespace = nameservice;
+    }
+  }
+
+  public synchronized void cleanRegistrations() {
+    this.resolver =
+        new HashMap<String, List<? extends FederationNamenodeContext>>();
+    this.namespaces = new HashSet<FederationNamespaceInfo>();
+  }
+
+  @Override
+  public void updateActiveNamenode(
+      String ns, InetSocketAddress successfulAddress) {
+
+    String address = successfulAddress.getHostName() + ":" +
+        successfulAddress.getPort();
+    String key = ns;
+    if (key != null) {
+      // Update the active entry
+      @SuppressWarnings("unchecked")
+      List<FederationNamenodeContext> iterator =
+          (List<FederationNamenodeContext>) resolver.get(key);
+      for (FederationNamenodeContext namenode : iterator) {
+        if (namenode.getRpcAddress().equals(address)) {
+          MockNamenodeContext nn = (MockNamenodeContext) namenode;
+          nn.setState(FederationNamenodeServiceState.ACTIVE);
+          break;
+        }
+      }
+      Collections.sort(iterator, new NamenodePriorityComparator());
+    }
+  }
+
+  @Override
+  public List<? extends FederationNamenodeContext>
+      getNamenodesForNameserviceId(String nameserviceId) {
+    return resolver.get(nameserviceId);
+  }
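
As a rough picture of how a test is expected to drive this mock, the fragment
below uses only the methods visible in the excerpt above. Registering the
namenodes for "ns0" happens through MockResolver methods not shown here, and
passing null for the StateStoreService is a simplification for illustration.

    // Illustrative test fragment; the types come from the imports already
    // listed in MockResolver.java above.
    void exerciseMockResolver() throws IOException {
      MockResolver resolver = new MockResolver(new Configuration(), null);

      // Map the global mount point /data to the path /data in nameservice "ns0".
      resolver.addLocation("/data", "ns0", "/data");

      // Mark the namenode at nn0:8020 as ACTIVE for "ns0" (assumes it was
      // registered with the resolver beforehand).
      resolver.updateActiveNamenode("ns0", new InetSocketAddress("nn0", 8020));

      // A Router would then ask which namenodes serve that nameservice.
      List<? extends FederationNamenodeContext> namenodes =
          resolver.getNamenodesForNameserviceId("ns0");
      System.out.println("namenodes for ns0: " + namenodes);
    }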

[63/73] [abbrv] hadoop git commit: HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43a1a5fe/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 24792bb..4bae71e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -17,16 +17,109 @@
  */
 package org.apache.hadoop.hdfs.server.federation.router;
 
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_READER_COUNT_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_READER_COUNT_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_READER_QUEUE_SIZE_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_READER_QUEUE_SIZE_KEY;
+
+import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeMap;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.CryptoProtocolVersion;
+import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
+import org.apache.hadoop.fs.CacheFlag;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.ContentSummary;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FsServerDefaults;
+import org.apache.hadoop.fs.Options;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.QuotaUsage;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.fs.XAttr;
+import org.apache.hadoop.fs.XAttrSetFlag;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclStatus;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdfs.AddBlockFlag;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdfs.inotify.EventBatchList;
+import org.apache.hadoop.hdfs.protocol.AddingECPolicyResponse;
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
+import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
+import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.CorruptFileBlocks;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.DirectoryListing;
+import org.apache.hadoop.hdfs.protocol.EncryptionZone;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
+import org.apache.hadoop.hdfs.protocol.LastBlockWithStatus;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.ClientNamenodeProtocol;
+import org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB;
+import 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB;
+import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
+import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
+import 
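
The static imports at the top of this hunk bring in the handler and reader
count and queue-size keys, which indicates the Router RPC server is sized from
configuration in the usual Hadoop way. A hedged sketch of reading those knobs
(the DFSConfigKeys constants are the ones imported above; the surrounding
method is illustrative, not code from the patch):

    // Illustrative only: pull the router RPC sizing settings from configuration.
    static void printRouterRpcSizing(Configuration conf) {
      int handlerCount = conf.getInt(
          DFSConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY,
          DFSConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT);
      int handlerQueueSize = conf.getInt(
          DFSConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_KEY,
          DFSConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_DEFAULT);
      int readerCount = conf.getInt(
          DFSConfigKeys.DFS_ROUTER_READER_COUNT_KEY,
          DFSConfigKeys.DFS_ROUTER_READER_COUNT_DEFAULT);
      int readerQueueSize = conf.getInt(
          DFSConfigKeys.DFS_ROUTER_READER_QUEUE_SIZE_KEY,
          DFSConfigKeys.DFS_ROUTER_READER_QUEUE_SIZE_DEFAULT);
      System.out.println("handlers=" + handlerCount
          + " handlerQueue=" + handlerQueueSize
          + " readers=" + readerCount
          + " readerQueue=" + readerQueueSize);
    }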

[69/73] [abbrv] hadoop git commit: HDFS-11554. [Documentation] Router-based federation documentation. Contributed by Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-11554. [Documentation] Router-based federation documentation. Contributed 
by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6ba323d0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6ba323d0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6ba323d0

Branch: refs/heads/HDFS-10467
Commit: 6ba323d0c54485cc6a68159f7288a29de802078b
Parents: 485c7b9
Author: Inigo Goiri 
Authored: Wed Aug 16 17:23:29 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:56 2017 -0700

--
 .../src/site/markdown/HDFSRouterFederation.md   | 170 +++
 .../site/resources/images/routerfederation.png  | Bin 0 -> 24961 bytes
 2 files changed, 170 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6ba323d0/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSRouterFederation.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSRouterFederation.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSRouterFederation.md
new file mode 100644
index 000..f094238
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSRouterFederation.md
@@ -0,0 +1,170 @@
+
+
+HDFS Router-based Federation
+
+
+
+
+Introduction
+
+
+NameNodes have scalability limits because of the metadata overhead comprised 
of inodes (files and directories) and file blocks, the number of Datanode 
heartbeats, and the number of HDFS RPC client requests.
+The common solution is to split the filesystem into smaller subclusters [HDFS 
Federation](./Federation.html) and provide a federated view 
[ViewFs](./ViewFs.html).
+The problem is how to maintain the split of the subclusters (e.g., namespace 
partition), which forces users to connect to multiple subclusters and manage 
the allocation of folders/files to them.
+
+
+Architecture
+
+
+A natural extension to this partitioned federation is to add a layer of 
software responsible for federating the namespaces.
+This extra layer allows users to access any subcluster transparently, lets 
subclusters manage their own block pools independently, and supports 
rebalancing of data across subclusters.
+To accomplish these goals, the federation layer directs block accesses to the 
proper subcluster, maintains the state of the namespaces, and provides 
mechanisms for data rebalancing.
+This layer must be scalable, highly available, and fault tolerant.
+
+This federation layer comprises multiple components.
+The _Router_ component that has the same interface as a NameNode, and forwards 
the client requests to the correct subcluster, based on ground-truth 
information from a State Store.
+The _State Store_ combines a remote _Mount Table_ (in the flavor of 
[ViewFs](./ViewFs.html), but shared between clients) and utilization 
(load/capacity) information about the subclusters.
+This approach has the same architecture as [YARN 
federation](../hadoop-yarn/Federation.html).
+
+![Router-based Federation Sequence Diagram | 
width=800](./images/routerfederation.png)
+
+
+### Example flow
+The simplest configuration deploys a Router on each NameNode machine.
+The Router monitors the local NameNode and heartbeats the state to the State 
Store.
+When a regular DFS client contacts any of the Routers to access a file in the 
federated filesystem, the Router checks the Mount Table in the State Store 
(i.e., the local cache) to find out which subcluster contains the file.
+Then it checks the Membership table in the State Store (i.e., the local cache) 
for the NameNode responsible for the subcluster.
+After it has identified the correct NameNode, the Router proxies the request.
+The client accesses Datanodes directly.
+
+
+### Router
+There can be multiple Routers in the system with soft state.
+Each Router has two roles:
+
+* Federated interface: expose a single, global NameNode interface to the 
clients and forward the requests to the active NameNode in the correct 
subcluster
+* NameNode heartbeat: maintain the information about a NameNode in the State 
Store
+
+#### Federated interface
+The Router receives a client request, checks the State Store for the correct 
subcluster, and forwards the request to the active NameNode of that subcluster.
+The reply from the NameNode then flows in the opposite direction.
+The Routers are stateless and can be behind a load balancer.
+For performance, the Router also caches remote mount table entries and the 
state of the subclusters.
+To make sure that changes have been propagated to all Routers, each Router 
heartbeats its state to the State Store.
+
+The communications between the Routers and the State Store are cached (with 
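
To make the example flow concrete, here is a hedged Java-style sketch of the
lookup-then-proxy sequence described above. The mountTable and membership
fields and the connectTo helper are placeholders invented for this sketch;
only getRpcAddress() and the ClientProtocol call correspond to interfaces that
appear in this patch series.

    // Rough sketch of a Router handling one client call (placeholders throughout).
    HdfsFileStatus getFileInfo(String globalPath) throws IOException {
      // 1. Mount table: which subcluster (nameservice) owns this path?
      String nameservice = mountTable.lookupNameservice(globalPath);

      // 2. Membership: which namenode is currently ACTIVE for that nameservice?
      FederationNamenodeContext active = membership.getActiveNamenode(nameservice);

      // 3. Proxy the ClientProtocol call to that namenode and return its answer.
      ClientProtocol namenode = connectTo(active.getRpcAddress());
      return namenode.getFileInfo(globalPath);
    }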

[32/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.1/CHANGES.0.21.1.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.1/CHANGES.0.21.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.1/CHANGES.0.21.1.md
index c5e4468..dcb5f6f 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.1/CHANGES.0.21.1.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.1/CHANGES.0.21.1.md
@@ -18,7 +18,7 @@
 -->
 # Apache Hadoop Changelog
 
-## Release 0.21.1 - Unreleased (as of 2016-03-04)
+## Release 0.21.1 - Unreleased (as of 2017-08-28)
 
 ### INCOMPATIBLE CHANGES:
 
@@ -27,12 +27,6 @@
 | [MAPREDUCE-1905](https://issues.apache.org/jira/browse/MAPREDUCE-1905) | 
Context.setStatus() and progress() api are ignored |  Blocker | task | 
Amareshwari Sriramadasu | Amareshwari Sriramadasu |
 
 
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
@@ -44,65 +38,65 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-7193](https://issues.apache.org/jira/browse/HADOOP-7193) | Help 
message is wrong for touchz command. |  Minor | fs | Uma Maheswara Rao G | Uma 
Maheswara Rao G |
-| [HADOOP-7177](https://issues.apache.org/jira/browse/HADOOP-7177) | CodecPool 
should report which compressor it is using |  Trivial | native | Allen 
Wittenauer | Allen Wittenauer |
-| [HADOOP-7117](https://issues.apache.org/jira/browse/HADOOP-7117) | Move 
secondary namenode checkpoint configs from core-default.xml to hdfs-default.xml 
|  Major | conf | Patrick Angeles | Harsh J |
+| [MAPREDUCE-1501](https://issues.apache.org/jira/browse/MAPREDUCE-1501) | 
FileInputFormat to support multi-level/recursive directory listing |  Major | . 
| Zheng Shao | Zheng Shao |
 | [HADOOP-6786](https://issues.apache.org/jira/browse/HADOOP-6786) | 
test-patch needs to verify Herriot integrity |  Major | build | Konstantin 
Boudnik | Konstantin Boudnik |
-| [HDFS-1596](https://issues.apache.org/jira/browse/HDFS-1596) | Move 
secondary namenode checkpoint configs from core-default.xml to hdfs-default.xml 
|  Major | documentation, namenode | Patrick Angeles | Harsh J |
 | [HDFS-1343](https://issues.apache.org/jira/browse/HDFS-1343) | Instrumented 
build should be concentrated in one build area |  Minor | build | Konstantin 
Boudnik | Konstantin Boudnik |
 | [MAPREDUCE-2140](https://issues.apache.org/jira/browse/MAPREDUCE-2140) | 
Re-generate fair scheduler design doc PDF |  Trivial | . | Matei Zaharia | 
Matei Zaharia |
-| [MAPREDUCE-1501](https://issues.apache.org/jira/browse/MAPREDUCE-1501) | 
FileInputFormat to support multi-level/recursive directory listing |  Major | . 
| Zheng Shao | Zheng Shao |
+| [HADOOP-7177](https://issues.apache.org/jira/browse/HADOOP-7177) | CodecPool 
should report which compressor it is using |  Trivial | native | Allen 
Wittenauer | Allen Wittenauer |
+| [HDFS-1596](https://issues.apache.org/jira/browse/HDFS-1596) | Move 
secondary namenode checkpoint configs from core-default.xml to hdfs-default.xml 
|  Major | documentation, namenode | Patrick Angeles | Harsh J |
+| [HADOOP-7117](https://issues.apache.org/jira/browse/HADOOP-7117) | Move 
secondary namenode checkpoint configs from core-default.xml to hdfs-default.xml 
|  Major | conf | Patrick Angeles | Harsh J |
+| [HADOOP-7193](https://issues.apache.org/jira/browse/HADOOP-7193) | Help 
message is wrong for touchz command. |  Minor | fs | Uma Maheswara Rao G | Uma 
Maheswara Rao G |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-7215](https://issues.apache.org/jira/browse/HADOOP-7215) | RPC 
clients must connect over a network interface corresponding to the host name in 
the client's kerberos principal key |  Blocker | security | Suresh Srinivas | 
Suresh Srinivas |
-| [HADOOP-7194](https://issues.apache.org/jira/browse/HADOOP-7194) | Potential 
Resource leak in IOUtils.java |  Major | io | Devaraj K | Devaraj K |
-| [HADOOP-7183](https://issues.apache.org/jira/browse/HADOOP-7183) | 
WritableComparator.get should not cache comparator objects |  Blocker | . | 
Todd Lipcon | Tom White |
-| [HADOOP-7174](https://issues.apache.org/jira/browse/HADOOP-7174) | null is 
displayed in the console,if the src path is invalid while doing copyToLocal 
operation from commandLine |  Minor | fs | Uma Maheswara Rao G | Uma Maheswara 
Rao G |
-| [HADOOP-7162](https://issues.apache.org/jira/browse/HADOOP-7162) | FsShell: 
call srcFs.listStatus(src) twice |  Minor | fs | Alexey Diomin | Alexey Diomin |
-| 

[36/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
index 2ebfdc0..5243c7e 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
@@ -23,23 +23,16 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-6498](https://issues.apache.org/jira/browse/HADOOP-6498) | *Blocker* 
| **IPC client  bug may cause rpc call hang**
-
-Correct synchronization error in IPC where handler thread could hang if 
request reader got an error.
-
-

-
-* [HADOOP-6460](https://issues.apache.org/jira/browse/HADOOP-6460) | *Blocker* 
| **Namenode runs of out of memory due to memory leak in ipc Server**
+* [MAPREDUCE-826](https://issues.apache.org/jira/browse/MAPREDUCE-826) | 
*Trivial* | **harchive doesn't use ToolRunner / harchive returns 0 even if the 
job fails with exception**
 
-If an IPC server response buffer has grown to than 1MB, it is replaced by a 
smaller buffer to free up the Java heap that was used. This will improve the 
longevity of the name service.
+Use ToolRunner for archives job and return non zero error code on failure.
 
 
 ---
 
-* [HADOOP-6428](https://issues.apache.org/jira/browse/HADOOP-6428) | *Major* | 
**HttpServer sleeps with negative values**
+* [MAPREDUCE-112](https://issues.apache.org/jira/browse/MAPREDUCE-112) | 
*Blocker* | **Reduce Input Records and Reduce Output Records counters are not 
being set when using the new Mapreduce reducer API**
 
-Corrected arithmetic error that made sleep times less than zero.
+Updates of counters for reduce input and output records were added in the new 
API so they are available for jobs using the new API.
 
 
 ---
@@ -51,23 +44,23 @@ Allow a general mechanism to disable the cache on a per 
filesystem basis by usin
 
 ---
 
-* [HADOOP-6097](https://issues.apache.org/jira/browse/HADOOP-6097) | *Major* | 
**Multiple bugs w/ Hadoop archives**
+* [MAPREDUCE-979](https://issues.apache.org/jira/browse/MAPREDUCE-979) | 
*Blocker* | **JobConf.getMemoryFor{Map\|Reduce}Task doesn't fallback to newer 
config knobs when mapred.taskmaxvmem is set to DISABLED\_MEMORY\_LIMIT of -1**
 
-Bugs fixed for Hadoop archives: character escaping in paths, LineReader and 
file system caching.
+Added support to fallback to new task memory configuration when deprecated 
memory configuration values are set to disabled.
 
 
 ---
 
-* [HDFS-793](https://issues.apache.org/jira/browse/HDFS-793) | *Blocker* | 
**DataNode should first receive the whole packet ack message before it 
constructs and sends its own ack message for the packet**
+* [HDFS-677](https://issues.apache.org/jira/browse/HDFS-677) | *Blocker* | 
**Rename failure due to quota results in deletion of src directory**
 
-**WARNING: No release note provided for this incompatible change.**
+Rename properly considers the case where both source and destination are over 
quota; operation will fail with error indication.
 
 
 ---
 
-* [HDFS-781](https://issues.apache.org/jira/browse/HDFS-781) | *Blocker* | 
**Metrics PendingDeletionBlocks is not decremented**
+* [HADOOP-6097](https://issues.apache.org/jira/browse/HADOOP-6097) | *Major* | 
**Multiple bugs w/ Hadoop archives**
 
-Correct PendingDeletionBlocks metric to properly decrement counts.
+Bugs fixed for Hadoop archives: character escaping in paths, LineReader and 
file system caching.
 
 
 ---
@@ -79,9 +72,9 @@ Corrected an error when checking quota policy that resulted 
in a failure to read
 
 ---
 
-* [HDFS-677](https://issues.apache.org/jira/browse/HDFS-677) | *Blocker* | 
**Rename failure due to quota results in deletion of src directory**
+* [MAPREDUCE-1068](https://issues.apache.org/jira/browse/MAPREDUCE-1068) | 
*Major* | **In hadoop-0.20.0 streaming job do not throw proper verbose error 
message if file is not present**
 
-Rename properly considers the case where both source and destination are over 
quota; operation will fail with error indication.
+Fix streaming job to show proper message if file is is not present, for -file 
option.
 
 
 ---
@@ -93,44 +86,44 @@ Memory leak in function hdfsFreeFileInfo in libhdfs. This 
bug affects fuse-dfs s
 
 ---
 
-* [MAPREDUCE-1182](https://issues.apache.org/jira/browse/MAPREDUCE-1182) | 
*Blocker* | **Reducers fail with OutOfMemoryError while copying Map outputs**
+* [MAPREDUCE-1147](https://issues.apache.org/jira/browse/MAPREDUCE-1147) | 
*Blocker* | **Map output records counter missing for map-only jobs in new API**
 
-Modifies shuffle related memory parameters to use 

[46/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
index 3eac7ed..8e45328 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
@@ -20,55 +20,15 @@
 
 ## Release 0.16.4 - 2008-05-05
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPROVEMENTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-3304](https://issues.apache.org/jira/browse/HADOOP-3304) | [HOD] 
logcondense fails if DFS has files that are not log files, but match a certain 
pattern |  Blocker | contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
-| [HADOOP-3294](https://issues.apache.org/jira/browse/HADOOP-3294) | distcp 
leaves empty blocks afte successful execution |  Blocker | util | Christian 
Kunz | Tsz Wo Nicholas Sze |
-| [HADOOP-3186](https://issues.apache.org/jira/browse/HADOOP-3186) | Incorrect 
permission checking on  mv |  Blocker | . | Koji Noguchi | Tsz Wo Nicholas Sze |
 | [HADOOP-3138](https://issues.apache.org/jira/browse/HADOOP-3138) | distcp 
fail copying to /user/\/\ (with permission on) |  
Blocker | . | Koji Noguchi | Raghu Angadi |
-
-
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-3186](https://issues.apache.org/jira/browse/HADOOP-3186) | Incorrect 
permission checking on  mv |  Blocker | . | Koji Noguchi | Tsz Wo Nicholas Sze |
+| [HADOOP-3294](https://issues.apache.org/jira/browse/HADOOP-3294) | distcp 
leaves empty blocks afte successful execution |  Blocker | util | Christian 
Kunz | Tsz Wo Nicholas Sze |
+| [HADOOP-3304](https://issues.apache.org/jira/browse/HADOOP-3304) | [HOD] 
logcondense fails if DFS has files that are not log files, but match a certain 
pattern |  Blocker | contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
index bbf3d23..f88162a 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
@@ -24,242 +24,230 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-3280](https://issues.apache.org/jira/browse/HADOOP-3280) | virtual 
address space limits break streaming apps |  Blocker | . | Rick Cox | Arun C 
Murthy |
-| [HADOOP-3266](https://issues.apache.org/jira/browse/HADOOP-3266) | Remove 
HOD changes from CHANGES.txt, as they are now inside src/contrib/hod |  Major | 
contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
-| [HADOOP-3239](https://issues.apache.org/jira/browse/HADOOP-3239) | exists() 
calls logs FileNotFoundException in namenode log |  Major | . | Lohit 
Vijayarenu | Lohit Vijayarenu |
-| [HADOOP-3137](https://issues.apache.org/jira/browse/HADOOP-3137) | [HOD] 
Update hod version number |  Major | contrib/hod | Hemanth Yamijala | Hemanth 
Yamijala |
-| [HADOOP-3091](https://issues.apache.org/jira/browse/HADOOP-3091) | hadoop 
dfs -put should support multiple src |  Major | . | Lohit Vijayarenu | Lohit 
Vijayarenu |
-| [HADOOP-3060](https://issues.apache.org/jira/browse/HADOOP-3060) | 
MiniMRCluster is ignoring parameter taskTrackerFirst |  Major | . | Amareshwari 
Sriramadasu | Amareshwari Sriramadasu |
+| 

[51/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

Signed-off-by: Andrew Wang 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/19041008
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/19041008
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/19041008

Branch: refs/heads/HDFS-10467
Commit: 190410085b86b002a7515ce3a000d87bafffc77d
Parents: 91cc070
Author: Allen Wittenauer 
Authored: Thu May 4 18:22:34 2017 -0700
Committer: Allen Wittenauer 
Committed: Thu Aug 31 19:06:49 2017 -0700

--
 hadoop-common-project/hadoop-common/pom.xml |2 +-
 .../markdown/release/0.1.0/CHANGES.0.1.0.md |  106 +-
 .../markdown/release/0.1.1/CHANGES.0.1.1.md |   36 +-
 .../markdown/release/0.10.0/CHANGES.0.10.0.md   |  118 +-
 .../markdown/release/0.10.1/CHANGES.0.10.1.md   |   52 +-
 .../markdown/release/0.11.0/CHANGES.0.11.0.md   |  106 +-
 .../markdown/release/0.11.1/CHANGES.0.11.1.md   |   44 +-
 .../markdown/release/0.11.2/CHANGES.0.11.2.md   |   42 +-
 .../markdown/release/0.12.0/CHANGES.0.12.0.md   |  124 +-
 .../markdown/release/0.12.1/CHANGES.0.12.1.md   |   70 +-
 .../markdown/release/0.12.2/CHANGES.0.12.2.md   |   44 +-
 .../markdown/release/0.12.3/CHANGES.0.12.3.md   |   50 +-
 .../markdown/release/0.13.0/CHANGES.0.13.0.md   |  252 +-
 .../markdown/release/0.13.1/CHANGES.0.13.1.md   |   64 -
 .../release/0.13.1/RELEASENOTES.0.13.1.md   |   24 -
 .../markdown/release/0.14.0/CHANGES.0.14.0.md   |  288 +--
 .../markdown/release/0.14.1/CHANGES.0.14.1.md   |   44 +-
 .../markdown/release/0.14.2/CHANGES.0.14.2.md   |   52 +-
 .../markdown/release/0.14.3/CHANGES.0.14.3.md   |   44 +-
 .../markdown/release/0.14.4/CHANGES.0.14.4.md   |   36 +-
 .../markdown/release/0.15.0/CHANGES.0.15.0.md   |  266 +-
 .../markdown/release/0.15.1/CHANGES.0.15.1.md   |   32 +-
 .../markdown/release/0.15.2/CHANGES.0.15.2.md   |   52 +-
 .../markdown/release/0.15.3/CHANGES.0.15.3.md   |   44 +-
 .../markdown/release/0.15.4/CHANGES.0.15.4.md   |   42 +-
 .../markdown/release/0.16.0/CHANGES.0.16.0.md   |  320 ++-
 .../markdown/release/0.16.1/CHANGES.0.16.1.md   |   74 +-
 .../markdown/release/0.16.2/CHANGES.0.16.2.md   |   70 +-
 .../markdown/release/0.16.3/CHANGES.0.16.3.md   |   46 +-
 .../markdown/release/0.16.4/CHANGES.0.16.4.md   |   46 +-
 .../markdown/release/0.17.0/CHANGES.0.17.0.md   |  350 ++-
 .../release/0.17.0/RELEASENOTES.0.17.0.md   |  450 ++--
 .../markdown/release/0.17.1/CHANGES.0.17.1.md   |   48 +-
 .../markdown/release/0.17.2/CHANGES.0.17.2.md   |   60 +-
 .../release/0.17.2/RELEASENOTES.0.17.2.md   |   12 +-
 .../markdown/release/0.17.3/CHANGES.0.17.3.md   |   40 +-
 .../markdown/release/0.18.0/CHANGES.0.18.0.md   |  492 ++--
 .../release/0.18.0/RELEASENOTES.0.18.0.md   |  302 +--
 .../markdown/release/0.18.1/CHANGES.0.18.1.md   |   48 +-
 .../release/0.18.1/RELEASENOTES.0.18.1.md   |8 +-
 .../markdown/release/0.18.2/CHANGES.0.18.2.md   |   58 +-
 .../release/0.18.2/RELEASENOTES.0.18.2.md   |   20 +-
 .../markdown/release/0.18.3/CHANGES.0.18.3.md   |  100 +-
 .../release/0.18.3/RELEASENOTES.0.18.3.md   |   50 +-
 .../markdown/release/0.18.4/CHANGES.0.18.4.md   |   48 +-
 .../markdown/release/0.19.0/CHANGES.0.19.0.md   |  636 +++--
 .../release/0.19.0/RELEASENOTES.0.19.0.md   |  306 +--
 .../markdown/release/0.19.1/CHANGES.0.19.1.md   |   96 +-
 .../release/0.19.1/RELEASENOTES.0.19.1.md   |   40 +-
 .../markdown/release/0.19.2/CHANGES.0.19.2.md   |   92 +-
 .../markdown/release/0.2.0/CHANGES.0.2.0.md |  102 +-
 .../markdown/release/0.2.1/CHANGES.0.2.1.md |   44 +-
 .../markdown/release/0.20.0/CHANGES.0.20.0.md   |  508 ++--
 .../release/0.20.0/RELEASENOTES.0.20.0.md   |  186 +-
 .../markdown/release/0.20.1/CHANGES.0.20.1.md   |  134 +-
 .../release/0.20.1/RELEASENOTES.0.20.1.md   |  112 +-
 .../markdown/release/0.20.2/CHANGES.0.20.2.md   |   90 +-
 .../release/0.20.2/RELEASENOTES.0.20.2.md   |   66 +-
 .../release/0.20.203.0/CHANGES.0.20.203.0.md|   64 +-
 .../0.20.203.0/RELEASENOTES.0.20.203.0.md   |   44 +-
 .../release/0.20.203.1/CHANGES.0.20.203.1.md|   42 +-
 .../release/0.20.204.0/CHANGES.0.20.204.0.md|  100 +-
 .../0.20.204.0/RELEASENOTES.0.20.204.0.md   |   38 +-
 .../release/0.20.204.1/CHANGES.0.20.204.1.md|   64 -
 .../0.20.204.1/RELEASENOTES.0.20.204.1.md   |   24 -
 .../release/0.20.205.0/CHANGES.0.20.205.0.md|  210 +-
 .../0.20.205.0/RELEASENOTES.0.20.205.0.md   |   98 +-
 .../markdown/release/0.20.3/CHANGES.0.20.3.md   |   74 +-
 .../release/0.20.3/RELEASENOTES.0.20.3.md   |   12 +-
 .../markdown/release/0.21.0/CHANGES.0.21.0.md   | 2412 +-
 .../release/0.21.0/RELEASENOTES.0.21.0.md   | 1324 +-
 .../markdown/release/0.21.1/CHANGES.0.21.1.md   

[34/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
index 75c62a1..1026058 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
@@ -24,1343 +24,1337 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-6701](https://issues.apache.org/jira/browse/HADOOP-6701) |  
Incorrect exit codes for "dfs -chown", "dfs -chgrp" |  Minor | fs | Ravi 
Phulari | Ravi Phulari |
-| [HADOOP-6686](https://issues.apache.org/jira/browse/HADOOP-6686) | Remove 
redundant exception class name in unwrapped exceptions thrown at the RPC client 
|  Major | . | Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-6577](https://issues.apache.org/jira/browse/HADOOP-6577) | IPC 
server response buffer reset threshold should be configurable |  Major | . | 
Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-6569](https://issues.apache.org/jira/browse/HADOOP-6569) | 
FsShell#cat should avoid calling unecessary getFileStatus before opening a file 
to read |  Major | fs | Hairong Kuang | Hairong Kuang |
-| [HADOOP-6367](https://issues.apache.org/jira/browse/HADOOP-6367) | Move 
Access Token implementation from Common to HDFS |  Major | security | Kan Zhang 
| Kan Zhang |
-| [HADOOP-6299](https://issues.apache.org/jira/browse/HADOOP-6299) | Use JAAS 
LoginContext for our login |  Major | security | Arun C Murthy | Owen O'Malley |
-| [HADOOP-6230](https://issues.apache.org/jira/browse/HADOOP-6230) | Move 
process tree, and memory calculator classes out of Common into Map/Reduce. |  
Major | util | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli |
-| [HADOOP-6203](https://issues.apache.org/jira/browse/HADOOP-6203) | Improve 
error message when moving to trash fails due to quota issue |  Major | fs | 
Jakob Homan | Boris Shkolnik |
-| [HADOOP-6201](https://issues.apache.org/jira/browse/HADOOP-6201) | 
FileSystem::ListStatus should throw FileNotFoundException |  Major | fs | Jakob 
Homan | Jakob Homan |
-| [HADOOP-5913](https://issues.apache.org/jira/browse/HADOOP-5913) | Allow 
administrators to be able to start and stop queues |  Major | . | rahul k singh 
| rahul k singh |
-| [HADOOP-5879](https://issues.apache.org/jira/browse/HADOOP-5879) | GzipCodec 
should read compression level etc from configuration |  Major | io | Zheng Shao 
| He Yongqiang |
-| [HADOOP-5861](https://issues.apache.org/jira/browse/HADOOP-5861) | s3n files 
are not getting split by default |  Major | fs/s3 | Joydeep Sen Sarma | Tom 
White |
-| [HADOOP-5738](https://issues.apache.org/jira/browse/HADOOP-5738) | Split 
waiting tasks field in JobTracker metrics to individual tasks |  Major | 
metrics | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan |
-| [HADOOP-5679](https://issues.apache.org/jira/browse/HADOOP-5679) | Resolve 
findbugs warnings in core/streaming/pipes/examples |  Major | . | Jothi 
Padmanabhan | Jothi Padmanabhan |
-| [HADOOP-5620](https://issues.apache.org/jira/browse/HADOOP-5620) | discp can 
preserve modification times of files |  Major | . | dhruba borthakur | Rodrigo 
Schmidt |
-| [HADOOP-5485](https://issues.apache.org/jira/browse/HADOOP-5485) | 
Authorisation machanism required for acceesing jobtracker url :- 
jobtracker.com:port/scheduler |  Major | . | Aroop Maliakkal | Vinod Kumar 
Vavilapalli |
-| [HADOOP-5464](https://issues.apache.org/jira/browse/HADOOP-5464) | DFSClient 
does not treat write timeout of 0 properly |  Major | . | Raghu Angadi | Raghu 
Angadi |
-| [HADOOP-5438](https://issues.apache.org/jira/browse/HADOOP-5438) | Merge 
FileSystem.create and FileSystem.append |  Major | fs | He Yongqiang | He 
Yongqiang |
-| [HADOOP-5258](https://issues.apache.org/jira/browse/HADOOP-5258) | Provide 
dfsadmin functionality to report on namenode's view of network topology |  
Major | . | Jakob Homan | Jakob Homan |
-| [HADOOP-5219](https://issues.apache.org/jira/browse/HADOOP-5219) | 
SequenceFile is using mapred property |  Major | io | Sharad Agarwal | Sharad 
Agarwal |
-| [HADOOP-5176](https://issues.apache.org/jira/browse/HADOOP-5176) | TestDFSIO 
reports itself as TestFDSIO |  Trivial | benchmarks | Bryan Duxbury | Ravi 
Phulari |
-| [HADOOP-5094](https://issues.apache.org/jira/browse/HADOOP-5094) | Show dead 
nodes information in dfsadmin -report |  Minor | . | Jim Huang | Jakob Homan |
-| [HADOOP-5022](https://issues.apache.org/jira/browse/HADOOP-5022) | [HOD] 
logcondense should delete all hod logs for a user, including jobtracker logs |  
Blocker | contrib/hod | Hemanth Yamijala | 

[71/73] [abbrv] hadoop git commit: HDFS-12312. Rebasing HDFS-10467 (2). Contributed by Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-12312. Rebasing HDFS-10467 (2). Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5d906b9a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5d906b9a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5d906b9a

Branch: refs/heads/HDFS-10467
Commit: 5d906b9a0c0c020059cb1566d01ae2e3bbc2f9e2
Parents: 6ba323d
Author: Inigo Goiri 
Authored: Wed Aug 16 17:31:37 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:56 2017 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs   | 1 -
 .../hadoop/hdfs/server/federation/router/RouterRpcServer.java   | 1 +
 2 files changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5d906b9a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
index d51a8e2..d122ff7 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
@@ -31,7 +31,6 @@ function hadoop_usage
   hadoop_add_option "--hosts filename" "list of hosts to use in worker mode"
   hadoop_add_option "--workers" "turn on worker mode"
 
-<<<<<<< HEAD
   hadoop_add_subcommand "balancer" daemon "run a cluster balancing utility"
   hadoop_add_subcommand "cacheadmin" admin "configure the HDFS cache"
   hadoop_add_subcommand "classpath" client "prints the class path needed to 
get the hadoop jar and the required libraries"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5d906b9a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index eaaab39..c77d255 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -1946,6 +1946,7 @@ public class RouterRpcServer extends AbstractService 
implements ClientProtocol {
     }
     long inodeId = 0;
     return new HdfsFileStatus(0, true, 0, 0, modTime, accessTime, permission,
+        EnumSet.noneOf(HdfsFileStatus.Flags.class),
         owner, group, new byte[0], DFSUtil.string2Bytes(name), inodeId,
         childrenNum, null, (byte) 0, null);
   }
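
The added argument passes an empty flag set to the updated HdfsFileStatus
constructor; EnumSet.noneOf is the idiomatic way to express "no flags" for an
enum-typed parameter. A tiny self-contained illustration (the Flags enum below
is a stand-in, not the real HdfsFileStatus.Flags):

    import java.util.EnumSet;

    public class NoFlagsDemo {
      // Stand-in flag enum for the illustration.
      enum Flags { HAS_ACL, HAS_CRYPT, HAS_EC }

      static String describe(EnumSet<Flags> flags) {
        return flags.isEmpty() ? "no flags set" : flags.toString();
      }

      public static void main(String[] args) {
        System.out.println(describe(EnumSet.noneOf(Flags.class)));  // no flags set
        System.out.println(describe(EnumSet.of(Flags.HAS_ACL)));    // [HAS_ACL]
      }
    }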


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[56/73] [abbrv] hadoop git commit: HDFS-10629. Federation Router. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-10629. Federation Router. Contributed by Jason Kace and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/25a1cad0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/25a1cad0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/25a1cad0

Branch: refs/heads/HDFS-10467
Commit: 25a1cad0a8d5e7f9f2a05bc8fab477e9715eaeba
Parents: 1904100
Author: Inigo 
Authored: Tue Mar 28 14:30:59 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:53 2017 -0700

--
 .../hadoop-hdfs/src/main/bin/hdfs   |   5 +
 .../hadoop-hdfs/src/main/bin/hdfs.cmd   |   8 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  17 +
 .../resolver/ActiveNamenodeResolver.java| 117 +++
 .../resolver/FederationNamenodeContext.java |  87 +++
 .../FederationNamenodeServiceState.java |  46 ++
 .../resolver/FederationNamespaceInfo.java   |  99 +++
 .../resolver/FileSubclusterResolver.java|  75 ++
 .../resolver/NamenodePriorityComparator.java|  63 ++
 .../resolver/NamenodeStatusReport.java  | 195 +
 .../federation/resolver/PathLocation.java   | 122 +++
 .../federation/resolver/RemoteLocation.java |  74 ++
 .../federation/resolver/package-info.java   |  41 +
 .../federation/router/FederationUtil.java   | 117 +++
 .../router/RemoteLocationContext.java   |  38 +
 .../hdfs/server/federation/router/Router.java   | 263 +++
 .../federation/router/RouterRpcServer.java  | 102 +++
 .../server/federation/router/package-info.java  |  31 +
 .../federation/store/StateStoreService.java |  77 ++
 .../server/federation/store/package-info.java   |  62 ++
 .../src/main/resources/hdfs-default.xml |  16 +
 .../server/federation/FederationTestUtils.java  | 233 ++
 .../hdfs/server/federation/MockResolver.java| 290 +++
 .../server/federation/RouterConfigBuilder.java  |  40 +
 .../server/federation/RouterDFSCluster.java | 767 +++
 .../server/federation/router/TestRouter.java|  96 +++
 26 files changed, 3080 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/25a1cad0/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
index e6405b5..b1f44a4 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
@@ -57,6 +57,7 @@ function hadoop_usage
   hadoop_add_subcommand "oiv" admin "apply the offline fsimage viewer to an 
fsimage"
   hadoop_add_subcommand "oiv_legacy" admin "apply the offline fsimage viewer 
to a legacy fsimage"
   hadoop_add_subcommand "portmap" daemon "run a portmap service"
+  hadoop_add_subcommand "router" daemon "run the DFS router"
   hadoop_add_subcommand "secondarynamenode" daemon "run the DFS secondary 
namenode"
   hadoop_add_subcommand "snapshotDiff" client "diff two snapshots of a 
directory or diff the current directory contents with a snapshot"
   hadoop_add_subcommand "storagepolicies" admin "list/get/set block storage 
policies"
@@ -176,6 +177,10 @@ function hdfscmd_case
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   HADOOP_CLASSNAME=org.apache.hadoop.portmap.Portmap
 ;;
+router)
+  HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
+  HADOOP_CLASSNAME='org.apache.hadoop.hdfs.server.federation.router.Router'
+;;
 secondarynamenode)
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   
HADOOP_CLASSNAME='org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'

http://git-wip-us.apache.org/repos/asf/hadoop/blob/25a1cad0/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
index 2181e47..b9853d6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
@@ -59,7 +59,7 @@ if "%1" == "--loglevel" (
 )
   )
 
-  set hdfscommands=dfs namenode secondarynamenode journalnode zkfc datanode 
dfsadmin haadmin fsck balancer jmxget oiv oev fetchdt getconf groups 
snapshotDiff lsSnapshottableDir cacheadmin mover storagepolicies classpath 
crypto debug
+  set hdfscommands=dfs namenode secondarynamenode journalnode zkfc datanode 
dfsadmin haadmin fsck balancer jmxget oiv oev fetchdt getconf groups 
snapshotDiff lsSnapshottableDir cacheadmin mover storagepolicies classpath 
crypto router debug
   for %%i in ( %hdfscommands% ) do (
 if %hdfs-command% == %%i set hdfscommand=true

[28/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/RELEASENOTES.0.23.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/RELEASENOTES.0.23.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/RELEASENOTES.0.23.0.md
index 69e364f..3e3ef45 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/RELEASENOTES.0.23.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/RELEASENOTES.0.23.0.md
@@ -23,201 +23,198 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-7740](https://issues.apache.org/jira/browse/HADOOP-7740) | *Minor* | 
**security audit logger is not on by default, fix the log4j properties to 
enable the logger**
+* [HADOOP-6683](https://issues.apache.org/jira/browse/HADOOP-6683) | *Minor* | 
**the first optimization: ZlibCompressor does not fully utilize the buffer**
 
-Fixed security audit logger configuration. (Arpit Gupta via Eric Yang)
+Improve the buffer utilization of ZlibCompressor to avoid invoking a JNI per 
write request.
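
The buffering idea behind this change can be sketched with the plain JDK Deflater: stage many small writes in one buffer and hand the whole buffer to the (native) compressor in a single call. This is only an illustration of the pattern, not the Hadoop ZlibCompressor code itself:

  import java.io.ByteArrayOutputStream;
  import java.util.zip.Deflater;

  // Illustration only: batch many tiny writes into a staging buffer and invoke
  // the compressor once per full buffer instead of once per write.
  public class BufferedCompressSketch {
    private static final int BUF_SIZE = 64 * 1024;
    private final byte[] staging = new byte[BUF_SIZE];
    private final byte[] scratch = new byte[BUF_SIZE];
    private final Deflater deflater = new Deflater();
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    private int used = 0;

    void write(byte[] b) {                      // assumes b fits in the staging buffer
      if (used + b.length > staging.length) {
        compressBuffered();                     // one compressor call per full buffer
      }
      System.arraycopy(b, 0, staging, used, b.length);
      used += b.length;
    }

    private void compressBuffered() {
      deflater.setInput(staging, 0, used);
      int n;
      do {                                      // drain until output no longer fills scratch
        n = deflater.deflate(scratch, 0, scratch.length, Deflater.SYNC_FLUSH);
        out.write(scratch, 0, n);
      } while (n == scratch.length);
      used = 0;
    }

    byte[] finish() {
      deflater.setInput(staging, 0, used);
      deflater.finish();
      int n;
      while ((n = deflater.deflate(scratch)) > 0) {
        out.write(scratch, 0, n);
      }
      deflater.end();
      return out.toByteArray();
    }

    public static void main(String[] args) {
      BufferedCompressSketch c = new BufferedCompressSketch();
      for (int i = 0; i < 10000; i++) {
        c.write(("record-" + i + "\n").getBytes());
      }
      System.out.println("compressed size: " + c.finish().length + " bytes");
    }
  }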
 
 
 ---
 
-* [HADOOP-7728](https://issues.apache.org/jira/browse/HADOOP-7728) | *Major* | 
**hadoop-setup-conf.sh should be modified to enable task memory manager**
+* [HADOOP-7023](https://issues.apache.org/jira/browse/HADOOP-7023) | *Major* | 
**Add listCorruptFileBlocks to FileSystem**
 
-Enable task memory management to be configurable via hadoop config setup 
script.
+Add a new API listCorruptFileBlocks to FileContext that returns a list of 
files that have corrupt blocks.
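
A short usage sketch of the new call; the path below is a placeholder and the RemoteIterator return type is an assumption, so the javadoc of the release at hand should be checked:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileContext;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.fs.RemoteIterator;

  // Hedged sketch: print the files under a directory that have corrupt blocks.
  public class ListCorruptSketch {
    public static void main(String[] args) throws Exception {
      FileContext fc = FileContext.getFileContext(new Configuration());
      RemoteIterator<Path> corrupt =
          fc.listCorruptFileBlocks(new Path("/user/data"));   // placeholder path
      while (corrupt.hasNext()) {
        System.out.println("corrupt blocks in: " + corrupt.next());
      }
    }
  }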
 
 
 ---
 
-* [HADOOP-7724](https://issues.apache.org/jira/browse/HADOOP-7724) | *Major* | 
**hadoop-setup-conf.sh should put proxy user info into the core-site.xml**
+* [HADOOP-7059](https://issues.apache.org/jira/browse/HADOOP-7059) | *Major* | 
**Remove "unused" warning in native code**
 
-Fixed hadoop-setup-conf.sh to put proxy user in core-site.xml.  (Arpit Gupta 
via Eric Yang)
+Adds \_\_attribute\_\_ ((unused))
 
 
 ---
 
-* [HADOOP-7720](https://issues.apache.org/jira/browse/HADOOP-7720) | *Major* | 
**improve the hadoop-setup-conf.sh to read in the hbase user and setup the 
configs**
+* [HDFS-1526](https://issues.apache.org/jira/browse/HDFS-1526) | *Major* | 
**Dfs client name for a map/reduce task should have some randomness**
 
-Added parameter for HBase user to setup config script. (Arpit Gupta via Eric 
Yang)
+Make a client name has this format: 
DFSClient\_applicationid\_randomint\_threadid, where applicationid = 
mapred.task.id or else = "NONMAPREDUCE".
 
 
 ---
 
-* [HADOOP-7715](https://issues.apache.org/jira/browse/HADOOP-7715) | *Major* | 
**see log4j Error when running mr jobs and certain dfs calls**
+* [HDFS-1560](https://issues.apache.org/jira/browse/HDFS-1560) | *Minor* | 
**dfs.data.dir permissions should default to 700**
 
-Removed unnecessary security logger configuration. (Eric Yang)
+The permissions on datanode data directories (configured by 
dfs.datanode.data.dir.perm) now default to 0700. Upon startup, the datanode 
will automatically change the permissions to match the configured value.
 
 
 ---
 
-* [HADOOP-7711](https://issues.apache.org/jira/browse/HADOOP-7711) | *Major* | 
**hadoop-env.sh generated from templates has duplicate info**
+* [MAPREDUCE-1906](https://issues.apache.org/jira/browse/MAPREDUCE-1906) | 
*Major* | **Lower default minimum heartbeat interval for tasktracker \> 
Jobtracker**
 
-Fixed recursive sourcing of HADOOP\_OPTS environment variables (Arpit Gupta 
via Eric Yang)
+The default minimum heartbeat interval has been dropped from 3 seconds to 
300ms to increase scheduling throughput on small clusters. Users may tune 
mapreduce.jobtracker.heartbeats.in.second to adjust this value.
 
 
 ---
 
-* [HADOOP-7708](https://issues.apache.org/jira/browse/HADOOP-7708) | 
*Critical* | **config generator does not update the properties file if on 
exists already**
+* [MAPREDUCE-2207](https://issues.apache.org/jira/browse/MAPREDUCE-2207) | 
*Major* | **Task-cleanup task should not be scheduled on the node that the task 
just failed**
 
-Fixed hadoop-setup-conf.sh to handle config file consistently.  (Eric Yang)
+Task-cleanup task should not be scheduled on the node that the task just failed
 
 
 ---
 
-* [HADOOP-7707](https://issues.apache.org/jira/browse/HADOOP-7707) | *Major* | 
**improve config generator to allow users to specify proxy user, turn append on 
or off, turn webhdfs on or off**
+* [HDFS-1536](https://issues.apache.org/jira/browse/HDFS-1536) | *Major* | 
**Improve HDFS WebUI**
 
-Added toggle for dfs.support.append, webhdfs and hadoop proxy user to setup 
config script. (Arpit Gupta via Eric Yang)
+On web UI, missing block number now becomes accurate and under-replicated 
blocks do not include missing blocks.
 
 
 ---
 
-* 

[21/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
index 9bf1d66..c9c200c 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
@@ -20,56 +20,16 @@
 
 ## Release 0.3.1 - 2006-06-05
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPROVEMENTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-276](https://issues.apache.org/jira/browse/HADOOP-276) | No 
appenders could be found for logger |  Major | . | Owen O'Malley | Owen 
O'Malley |
-| [HADOOP-274](https://issues.apache.org/jira/browse/HADOOP-274) | The new 
logging framework puts application logs into server directory in hadoop.log |  
Major | . | Owen O'Malley | Owen O'Malley |
 | [HADOOP-272](https://issues.apache.org/jira/browse/HADOOP-272) | bin/hadoop 
dfs -rm \ crashes in log4j code |  Major | . | Owen O'Malley | Owen 
O'Malley |
+| [HADOOP-274](https://issues.apache.org/jira/browse/HADOOP-274) | The new 
logging framework puts application logs into server directory in hadoop.log |  
Major | . | Owen O'Malley | Owen O'Malley |
 | [HADOOP-262](https://issues.apache.org/jira/browse/HADOOP-262) | the reduce 
tasks do not report progress if they the map output locations is empty. |  
Major | . | Mahadev konar | Mahadev konar |
 | [HADOOP-245](https://issues.apache.org/jira/browse/HADOOP-245) | record io 
translator doesn't strip path names |  Major | record | Owen O'Malley | Milind 
Bhandarkar |
-
-
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-276](https://issues.apache.org/jira/browse/HADOOP-276) | No 
appenders could be found for logger |  Major | . | Owen O'Malley | Owen 
O'Malley |
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
index dd30d8c..cb69295 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
@@ -20,16 +20,6 @@
 
 ## Release 0.3.2 - 2006-06-09
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
@@ -51,33 +41,15 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-294](https://issues.apache.org/jira/browse/HADOOP-294) | dfs client 
error retries aren't happening (already being created and not replicated yet) | 
 Major | . | Owen O'Malley | Owen O'Malley |
-| [HADOOP-292](https://issues.apache.org/jira/browse/HADOOP-292) | hadoop dfs 
commands should not output superfluous data to stdout |  Minor | . | Yoram 
Arnon | Owen O'Malley |
-| [HADOOP-289](https://issues.apache.org/jira/browse/HADOOP-289) | Datanodes 
need to catch SocketTimeoutException and UnregisteredDatanodeException |  Major 
| . | Konstantin Shvachko | Konstantin Shvachko |
-| [HADOOP-285](https://issues.apache.org/jira/browse/HADOOP-285) | Data nodes 
cannot re-join the cluster once connection is lost |  Blocker | . | Konstantin 
Shvachko | Hairong Kuang |
-| [HADOOP-284](https://issues.apache.org/jira/browse/HADOOP-284) | dfs timeout 

[61/73] [abbrv] hadoop git commit: HDFS-10687. Federation Membership State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-10687. Federation Membership State Store internal API. Contributed by 
Jason Kace and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fad7865e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fad7865e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fad7865e

Branch: refs/heads/HDFS-10467
Commit: fad7865e9624695049cfbf8d72a13d3b0217e167
Parents: 904138c
Author: Inigo Goiri 
Authored: Mon Jul 31 10:55:21 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:54 2017 -0700

--
 .../dev-support/findbugsExcludeFile.xml |   3 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   1 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  17 +-
 .../resolver/MembershipNamenodeResolver.java| 290 
 .../federation/router/FederationUtil.java   |  42 +-
 .../federation/store/CachedRecordStore.java | 237 ++
 .../federation/store/MembershipStore.java   | 126 +
 .../federation/store/StateStoreCache.java   |  36 ++
 .../store/StateStoreCacheUpdateService.java |  67 +++
 .../federation/store/StateStoreService.java | 202 +++-
 .../store/impl/MembershipStoreImpl.java | 311 +
 .../federation/store/impl/package-info.java |  31 ++
 .../GetNamenodeRegistrationsRequest.java|  52 +++
 .../GetNamenodeRegistrationsResponse.java   |  55 +++
 .../store/protocol/GetNamespaceInfoRequest.java |  30 ++
 .../protocol/GetNamespaceInfoResponse.java  |  52 +++
 .../protocol/NamenodeHeartbeatRequest.java  |  52 +++
 .../protocol/NamenodeHeartbeatResponse.java |  49 ++
 .../UpdateNamenodeRegistrationRequest.java  |  72 +++
 .../UpdateNamenodeRegistrationResponse.java |  51 ++
 .../impl/pb/FederationProtocolPBTranslator.java | 145 ++
 .../GetNamenodeRegistrationsRequestPBImpl.java  |  87 
 .../GetNamenodeRegistrationsResponsePBImpl.java |  99 
 .../impl/pb/GetNamespaceInfoRequestPBImpl.java  |  60 +++
 .../impl/pb/GetNamespaceInfoResponsePBImpl.java |  95 
 .../impl/pb/NamenodeHeartbeatRequestPBImpl.java |  93 
 .../pb/NamenodeHeartbeatResponsePBImpl.java |  71 +++
 ...UpdateNamenodeRegistrationRequestPBImpl.java |  95 
 ...pdateNamenodeRegistrationResponsePBImpl.java |  73 +++
 .../store/protocol/impl/pb/package-info.java|  29 ++
 .../store/records/MembershipState.java  | 329 +
 .../store/records/MembershipStats.java  | 126 +
 .../records/impl/pb/MembershipStatePBImpl.java  | 334 +
 .../records/impl/pb/MembershipStatsPBImpl.java  | 191 
 .../src/main/proto/FederationProtocol.proto | 107 +
 .../src/main/resources/hdfs-default.xml |  18 +-
 .../resolver/TestNamenodeResolver.java  | 284 
 .../store/FederationStateStoreTestUtils.java|  23 +-
 .../federation/store/TestStateStoreBase.java|  81 
 .../store/TestStateStoreMembershipState.java| 463 +++
 .../store/driver/TestStateStoreDriverBase.java  |  69 ++-
 .../store/records/TestMembershipState.java  | 129 ++
 42 files changed, 4745 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad7865e/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml 
b/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
index 9582fcb..4b958b5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
@@ -15,6 +15,9 @@

  
  
+   
+ 
+ 

  
  

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad7865e/hadoop-hdfs-project/hadoop-hdfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index fa1044d..81e5fdf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -331,6 +331,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
   QJournalProtocol.proto
   editlog.proto
   fsimage.proto
+  FederationProtocol.proto
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad7865e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 

[43/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/RELEASENOTES.0.18.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/RELEASENOTES.0.18.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/RELEASENOTES.0.18.0.md
index f57c602..32e3c01 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/RELEASENOTES.0.18.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/RELEASENOTES.0.18.0.md
@@ -23,523 +23,523 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-3837](https://issues.apache.org/jira/browse/HADOOP-3837) | *Major* | 
**hadop streaming does not use progress reporting to detect hung tasks**
+* [HADOOP-2585](https://issues.apache.org/jira/browse/HADOOP-2585) | *Major* | 
**Automatic namespace recovery from the secondary image.**
 
-Changed streaming tasks to adhere to task timeout value specified in the job 
configuration.
+Improved management of replicas of the name space image. If all replicas on 
the Name Node are lost, the latest check point can be loaded from the secondary 
Name Node. Use parameter "-importCheckpoint" and specify the location with 
"fs.checkpoint.dir." The directory structure on the secondary Name Node has 
changed to match the primary Name Node.
 
 
 ---
 
-* [HADOOP-3808](https://issues.apache.org/jira/browse/HADOOP-3808) | *Blocker* 
| **[HOD] Include job tracker RPC in notes attribute after job submission**
+* [HADOOP-2703](https://issues.apache.org/jira/browse/HADOOP-2703) | *Minor* | 
**New files under lease (before close) still shows up as MISSING files/blocks 
in fsck**
 
-Modified HOD to include the RPC port of the JobTracker in the 'notes' 
attribute of the resource manager. The RPC port is included as the string 
'Mapred RPC Port:\'. Tools that depend on the value of the notes 
attribute must change to parse this new value.
+Changed fsck to ignore files opened for writing. Introduced new option 
"-openforwrite" to explicitly show open files.
 
 
 ---
 
-* [HADOOP-3703](https://issues.apache.org/jira/browse/HADOOP-3703) | *Blocker* 
| **[HOD] logcondense needs to use the new pattern of output in hadoop dfs 
-lsr**
+* [HADOOP-2865](https://issues.apache.org/jira/browse/HADOOP-2865) | *Major* | 
**FsShell.ls() should print file attributes first then the path name.**
 
-Modified logcondense.py to use the new format of hadoop dfs -lsr output. This 
version of logcondense would not work with previous versions of Hadoop and 
hence is incompatible.
+Changed the output of the "fs -ls" command to more closely match familiar 
Linux format. Additional changes were made by HADOOP-3459. Applications that 
parse the command output should be reviewed.
 
 
 ---
 
-* [HADOOP-3683](https://issues.apache.org/jira/browse/HADOOP-3683) | *Major* | 
**Hadoop dfs metric FilesListed shows number of files listed instead of 
operations**
+* [HADOOP-3061](https://issues.apache.org/jira/browse/HADOOP-3061) | *Major* | 
**Writable for single byte and double**
 
-Change FileListed to getNumGetListingOps and add CreateFileOps, DeleteFileOps 
and AddBlockOps metrics.
+Introduced ByteWritable and DoubleWritable (implementing WritableComparable) 
implementations for Byte and Double.
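
A minimal sketch of the new writables in use:

  import org.apache.hadoop.io.ByteWritable;
  import org.apache.hadoop.io.DoubleWritable;

  // Both types implement WritableComparable, so they can serve as keys.
  public class WritableSketch {
    public static void main(String[] args) {
      ByteWritable b = new ByteWritable((byte) 7);
      DoubleWritable d = new DoubleWritable(3.14);
      System.out.println(b.get() + " / " + d.get()
          + " / compare: " + d.compareTo(new DoubleWritable(2.71)));
    }
  }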
 
 
 ---
 
-* [HADOOP-3677](https://issues.apache.org/jira/browse/HADOOP-3677) | *Blocker* 
| **Problems with generation stamp upgrade**
+* [HADOOP-3164](https://issues.apache.org/jira/browse/HADOOP-3164) | *Major* | 
**Use FileChannel.transferTo() when data is read from DataNode.**
 
-Simplify generation stamp upgrade by making is a local upgrade on datandodes. 
Deleted distributed upgrade.
+Changed data node to use FileChannel.transferTo() to transfer block data.
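
The zero-copy pattern this change adopts can be illustrated with plain JDK NIO; the host, port and file path below are placeholders and this is not the DataNode code itself:

  import java.io.FileInputStream;
  import java.net.InetSocketAddress;
  import java.nio.channels.FileChannel;
  import java.nio.channels.SocketChannel;

  // Stream a file to a socket with FileChannel.transferTo(), avoiding copies
  // through user-space buffers.
  public class TransferToSketch {
    public static void main(String[] args) throws Exception {
      try (FileInputStream in = new FileInputStream("/tmp/block.dat");
           SocketChannel sock =
               SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
        FileChannel file = in.getChannel();
        long pos = 0, size = file.size();
        while (pos < size) {
          // transferTo may send fewer bytes than requested, so loop until done.
          pos += file.transferTo(pos, size - pos, sock);
        }
      }
    }
  }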
 
 
 ---
 
-* [HADOOP-3665](https://issues.apache.org/jira/browse/HADOOP-3665) | *Minor* | 
**WritableComparator newKey() fails for NullWritable**
+* [HADOOP-3283](https://issues.apache.org/jira/browse/HADOOP-3283) | *Major* | 
**Need a mechanism for data nodes to update generation stamps.**
 
-**WARNING: No release note provided for this incompatible change.**
+Added an IPC server in DataNode and a new IPC protocol InterDatanodeProtocol.  
Added conf properties dfs.datanode.ipc.address and dfs.datanode.handler.count 
with defaults "0.0.0.0:50020" and 3, respectively.
+Changed the serialization in DatanodeRegistration and DatanodeInfo, and 
therefore, updated the versionID in ClientProtocol, DatanodeProtocol, 
NamenodeProtocol.
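
A small sketch reading the two new datanode IPC properties, using the defaults quoted above:

  import org.apache.hadoop.conf.Configuration;

  // Resolve the new datanode IPC settings; the literal defaults mirror the note above.
  public class DatanodeIpcConfSketch {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      String ipcAddr = conf.get("dfs.datanode.ipc.address", "0.0.0.0:50020");
      int handlers = conf.getInt("dfs.datanode.handler.count", 3);
      System.out.println(ipcAddr + " with " + handlers + " handler threads");
    }
  }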
 
 
 ---
 
-* [HADOOP-3610](https://issues.apache.org/jira/browse/HADOOP-3610) | *Blocker* 
| **[HOD] HOD does not automatically create a cluster directory for the script 
option**
+* [HADOOP-2797](https://issues.apache.org/jira/browse/HADOOP-2797) | 
*Critical* | **Withdraw CRC upgrade from HDFS**
 
-Modified HOD to automatically 

[60/73] [abbrv] hadoop git commit: HDFS-10687. Federation Membership State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad7865e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/UpdateNamenodeRegistrationResponse.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/UpdateNamenodeRegistrationResponse.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/UpdateNamenodeRegistrationResponse.java
new file mode 100644
index 000..1f0d556
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/UpdateNamenodeRegistrationResponse.java
@@ -0,0 +1,51 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import 
org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
+
+/**
+ * API response for overriding an existing namenode registration in the state
+ * store.
+ */
+public abstract class UpdateNamenodeRegistrationResponse {
+
+  public static UpdateNamenodeRegistrationResponse newInstance() {
+return StateStoreSerializer.newRecord(
+UpdateNamenodeRegistrationResponse.class);
+  }
+
+  public static UpdateNamenodeRegistrationResponse newInstance(boolean status)
+  throws IOException {
+UpdateNamenodeRegistrationResponse response = newInstance();
+response.setResult(status);
+return response;
+  }
+
+  @Private
+  @Unstable
+  public abstract boolean getResult();
+
+  @Private
+  @Unstable
+  public abstract void setResult(boolean result);
+}
\ No newline at end of file
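
A brief usage sketch of the response class above; newInstance(boolean) resolves the concrete (protobuf-backed) record through the configured StateStoreSerializer, so a serializer implementation is assumed to be available at runtime:

  import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateNamenodeRegistrationResponse;

  // Build a response carrying a success flag and read it back.
  public class RegistrationResponseSketch {
    public static void main(String[] args) throws Exception {
      UpdateNamenodeRegistrationResponse response =
          UpdateNamenodeRegistrationResponse.newInstance(true);
      System.out.println("registration updated: " + response.getResult());
    }
  }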

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad7865e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/FederationProtocolPBTranslator.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/FederationProtocolPBTranslator.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/FederationProtocolPBTranslator.java
new file mode 100644
index 000..baad113
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/FederationProtocolPBTranslator.java
@@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
+
+import java.io.IOException;
+import java.lang.reflect.Method;
+
+import org.apache.commons.codec.binary.Base64;
+
+import com.google.protobuf.GeneratedMessage;
+import com.google.protobuf.Message;
+import com.google.protobuf.Message.Builder;
+import com.google.protobuf.MessageOrBuilder;
+
+/**
+ * Helper class for setting/getting data elements in an object backed by a
+ * protobuf implementation.
+ */
+public class FederationProtocolPBTranslator {
+
+  /** Optional proto byte stream used to create this object. */
+  private P proto;
+  /** The class of the proto handler for this 

[39/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
index 72e7d42..04ccabb 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
@@ -20,49 +20,39 @@
 
 ## Release 0.2.0 - 2006-05-05
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-191](https://issues.apache.org/jira/browse/HADOOP-191) | add 
hadoopStreaming to src/contrib |  Major | . | Michel Tourn | Doug Cutting |
-| [HADOOP-189](https://issues.apache.org/jira/browse/HADOOP-189) | Add job jar 
lib, classes, etc. to CLASSPATH when in standalone mode |  Major | . | stack | 
Doug Cutting |
+| [HADOOP-51](https://issues.apache.org/jira/browse/HADOOP-51) | per-file 
replication counts |  Major | . | Doug Cutting | Konstantin Shvachko |
 | [HADOOP-148](https://issues.apache.org/jira/browse/HADOOP-148) | add a 
failure count to task trackers |  Major | . | Owen O'Malley | Owen O'Malley |
 | [HADOOP-132](https://issues.apache.org/jira/browse/HADOOP-132) | An API for 
reporting performance metrics |  Major | . | David Bowen |  |
+| [HADOOP-189](https://issues.apache.org/jira/browse/HADOOP-189) | Add job jar 
lib, classes, etc. to CLASSPATH when in standalone mode |  Major | . | stack | 
Doug Cutting |
 | [HADOOP-65](https://issues.apache.org/jira/browse/HADOOP-65) | add a record 
I/O framework to hadoop |  Minor | io, ipc | Sameer Paranjpye |  |
-| [HADOOP-51](https://issues.apache.org/jira/browse/HADOOP-51) | per-file 
replication counts |  Major | . | Doug Cutting | Konstantin Shvachko |
+| [HADOOP-191](https://issues.apache.org/jira/browse/HADOOP-191) | add 
hadoopStreaming to src/contrib |  Major | . | Michel Tourn | Doug Cutting |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-198](https://issues.apache.org/jira/browse/HADOOP-198) | adding 
owen's examples to exampledriver |  Minor | . | Mahadev konar | Mahadev konar |
-| [HADOOP-178](https://issues.apache.org/jira/browse/HADOOP-178) | piggyback 
block work requests to heartbeats and move block replication/deletion startup 
delay from datanodes to namenode |  Major | . | Hairong Kuang | Hairong Kuang |
-| [HADOOP-177](https://issues.apache.org/jira/browse/HADOOP-177) | improvement 
to browse through the map/reduce tasks |  Minor | . | Mahadev konar | Mahadev 
konar |
-| [HADOOP-173](https://issues.apache.org/jira/browse/HADOOP-173) | optimize 
allocation of tasks w/ local data |  Major | . | Doug Cutting | Doug Cutting |
-| [HADOOP-170](https://issues.apache.org/jira/browse/HADOOP-170) | 
setReplication and related bug fixes |  Major | fs | Konstantin Shvachko | 
Konstantin Shvachko |
-| [HADOOP-167](https://issues.apache.org/jira/browse/HADOOP-167) | reducing 
the number of Configuration & JobConf objects created |  Major | conf | Owen 
O'Malley | Owen O'Malley |
-| [HADOOP-166](https://issues.apache.org/jira/browse/HADOOP-166) | IPC is 
unable to invoke methods that use interfaces as parameter |  Minor | ipc | 
Stefan Groschupf | Doug Cutting |
-| [HADOOP-150](https://issues.apache.org/jira/browse/HADOOP-150) | tip and 
task names should reflect the job name |  Major | . | Owen O'Malley | Owen 
O'Malley |
-| [HADOOP-144](https://issues.apache.org/jira/browse/HADOOP-144) | the dfs 
client id isn't relatable to the map/reduce task ids |  Major | . | Owen 
O'Malley | Owen O'Malley |
-| [HADOOP-142](https://issues.apache.org/jira/browse/HADOOP-142) | failed 
tasks should be rescheduled on different hosts after other jobs |  Major | . | 
Owen O'Malley | Owen O'Malley |
-| [HADOOP-138](https://issues.apache.org/jira/browse/HADOOP-138) | stop all 
tasks |  Trivial | . | Stefan Groschupf | Doug Cutting |
+| [HADOOP-116](https://issues.apache.org/jira/browse/HADOOP-116) | cleaning up 
/tmp/hadoop/mapred/system |  Major | . | raghavendra prabhu | Doug Cutting |
 | [HADOOP-131](https://issues.apache.org/jira/browse/HADOOP-131) | Separate 
start/stop-dfs.sh and start/stop-mapred.sh scripts |  Minor | . | Chris A. 
Mattmann | Doug Cutting |
 | [HADOOP-129](https://issues.apache.org/jira/browse/HADOOP-129) | FileSystem 
should not name files with java.io.File | 

[44/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/CHANGES.0.18.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/CHANGES.0.18.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/CHANGES.0.18.0.md
index 202f434..d5cc886 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/CHANGES.0.18.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.0/CHANGES.0.18.0.md
@@ -24,299 +24,293 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-3837](https://issues.apache.org/jira/browse/HADOOP-3837) | hadop 
streaming does not use progress reporting to detect hung tasks |  Major | . | 
dhruba borthakur | dhruba borthakur |
-| [HADOOP-3808](https://issues.apache.org/jira/browse/HADOOP-3808) | [HOD] 
Include job tracker RPC in notes attribute after job submission |  Blocker | 
contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
-| [HADOOP-3703](https://issues.apache.org/jira/browse/HADOOP-3703) | [HOD] 
logcondense needs to use the new pattern of output in hadoop dfs -lsr |  
Blocker | contrib/hod | Hemanth Yamijala | Vinod Kumar Vavilapalli |
-| [HADOOP-3683](https://issues.apache.org/jira/browse/HADOOP-3683) | Hadoop 
dfs metric FilesListed shows number of files listed instead of operations |  
Major | metrics | Lohit Vijayarenu | Lohit Vijayarenu |
-| [HADOOP-3665](https://issues.apache.org/jira/browse/HADOOP-3665) | 
WritableComparator newKey() fails for NullWritable |  Minor | io | Lukas Vlcek 
| Chris Douglas |
-| [HADOOP-3610](https://issues.apache.org/jira/browse/HADOOP-3610) | [HOD] HOD 
does not automatically create a cluster directory for the script option |  
Blocker | contrib/hod | Hemanth Yamijala | Vinod Kumar Vavilapalli |
-| [HADOOP-3598](https://issues.apache.org/jira/browse/HADOOP-3598) | 
Map-Reduce framework needlessly creates temporary \_${taskid} directories for 
Maps |  Blocker | . | Arun C Murthy | Arun C Murthy |
-| [HADOOP-3569](https://issues.apache.org/jira/browse/HADOOP-3569) | KFS input 
stream read() returns 4 bytes instead of 1 |  Minor | . | Sriram Rao | Sriram 
Rao |
-| [HADOOP-3512](https://issues.apache.org/jira/browse/HADOOP-3512) | Split 
map/reduce tools into separate jars |  Major | . | Owen O'Malley | Owen 
O'Malley |
-| [HADOOP-3486](https://issues.apache.org/jira/browse/HADOOP-3486) | Change 
default for initial block report to 0 sec and document it in 
hadoop-defaults.xml |  Major | . | Sanjay Radia | Sanjay Radia |
-| [HADOOP-3483](https://issues.apache.org/jira/browse/HADOOP-3483) | [HOD] 
Improvements with cluster directory handling |  Major | contrib/hod | Hemanth 
Yamijala | Hemanth Yamijala |
-| [HADOOP-3459](https://issues.apache.org/jira/browse/HADOOP-3459) | Change 
dfs -ls listing to closely match format on Linux |  Major | . | Mukund 
Madhugiri | Mukund Madhugiri |
-| [HADOOP-3452](https://issues.apache.org/jira/browse/HADOOP-3452) | fsck exit 
code would be better if non-zero when FS corrupt |  Minor | . | Pete Wyckoff | 
Lohit Vijayarenu |
-| [HADOOP-3417](https://issues.apache.org/jira/browse/HADOOP-3417) | JobClient 
should not have a static configuration for cli parsing |  Major | . | Owen 
O'Malley | Amareshwari Sriramadasu |
-| [HADOOP-3405](https://issues.apache.org/jira/browse/HADOOP-3405) | Make 
mapred internal classes package-local |  Major | . | Enis Soztutar | Enis 
Soztutar |
-| [HADOOP-3390](https://issues.apache.org/jira/browse/HADOOP-3390) | Remove 
deprecated ClientProtocol.abandonFileInProgress() |  Major | . | Tsz Wo 
Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HADOOP-3379](https://issues.apache.org/jira/browse/HADOOP-3379) | Document 
the "stream.non.zero.exit.status.is.failure" knob for streaming |  Blocker | 
documentation | Arun C Murthy | Amareshwari Sriramadasu |
-| [HADOOP-3329](https://issues.apache.org/jira/browse/HADOOP-3329) | 
DatanodeDescriptor objects stored in FSImage may be out dated. |  Major | . | 
Tsz Wo Nicholas Sze | dhruba borthakur |
-| [HADOOP-3317](https://issues.apache.org/jira/browse/HADOOP-3317) | add 
default port for hdfs namenode |  Minor | . | Doug Cutting | Doug Cutting |
-| [HADOOP-3310](https://issues.apache.org/jira/browse/HADOOP-3310) | Lease 
recovery for append |  Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HADOOP-3283](https://issues.apache.org/jira/browse/HADOOP-3283) | Need a 
mechanism for data nodes to update generation stamps. |  Major | . | Tsz Wo 
Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HADOOP-3265](https://issues.apache.org/jira/browse/HADOOP-3265) | Remove 
deprecated API getFileCacheHints |  Major | fs | Lohit Vijayarenu | Lohit 
Vijayarenu |
-| [HADOOP-3226](https://issues.apache.org/jira/browse/HADOOP-3226) | Run 
combiner when merging spills 

[66/73] [abbrv] hadoop git commit: HDFS-11826. Federation Namenode Heartbeat. Contributed by Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-11826. Federation Namenode Heartbeat. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2a9235b6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2a9235b6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2a9235b6

Branch: refs/heads/HDFS-10467
Commit: 2a9235b614c84a8ee005204549ee8f450009fb56
Parents: fad7865
Author: Inigo Goiri 
Authored: Tue Aug 1 14:40:27 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:55 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  14 +
 .../java/org/apache/hadoop/hdfs/DFSUtil.java|  38 ++
 .../resolver/NamenodeStatusReport.java  | 193 ++
 .../federation/router/FederationUtil.java   |  66 
 .../router/NamenodeHeartbeatService.java| 350 +++
 .../hdfs/server/federation/router/Router.java   | 112 ++
 .../src/main/resources/hdfs-default.xml |  32 ++
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |   8 +
 .../hdfs/server/federation/MockResolver.java|   9 +-
 .../server/federation/RouterConfigBuilder.java  |  22 ++
 .../server/federation/RouterDFSCluster.java |  43 +++
 .../router/TestNamenodeHeartbeat.java   | 168 +
 .../server/federation/router/TestRouter.java|   3 +
 13 files changed, 1057 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a9235b6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index afb5bbf..d1c2b41 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1144,6 +1144,20 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   FEDERATION_ROUTER_PREFIX + "rpc.enable";
   public static final boolean DFS_ROUTER_RPC_ENABLE_DEFAULT = true;
 
+  // HDFS Router heartbeat
+  public static final String DFS_ROUTER_HEARTBEAT_ENABLE =
+  FEDERATION_ROUTER_PREFIX + "heartbeat.enable";
+  public static final boolean DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT = true;
+  public static final String DFS_ROUTER_HEARTBEAT_INTERVAL_MS =
+  FEDERATION_ROUTER_PREFIX + "heartbeat.interval";
+  public static final long DFS_ROUTER_HEARTBEAT_INTERVAL_MS_DEFAULT =
+  TimeUnit.SECONDS.toMillis(5);
+  public static final String DFS_ROUTER_MONITOR_NAMENODE =
+  FEDERATION_ROUTER_PREFIX + "monitor.namenode";
+  public static final String DFS_ROUTER_MONITOR_LOCAL_NAMENODE =
+  FEDERATION_ROUTER_PREFIX + "monitor.localnamenode.enable";
+  public static final boolean DFS_ROUTER_MONITOR_LOCAL_NAMENODE_DEFAULT = true;
+
   // HDFS Router NN client
   public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
   FEDERATION_ROUTER_PREFIX + "connection.pool-size";
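
A small sketch resolving the new heartbeat settings through the constants added above, falling back to the defaults defined alongside them:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hdfs.DFSConfigKeys;

  // Read the router heartbeat switches introduced in this hunk.
  public class RouterHeartbeatConfSketch {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      boolean enabled = conf.getBoolean(
          DFSConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
          DFSConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT);
      long intervalMs = conf.getLong(
          DFSConfigKeys.DFS_ROUTER_HEARTBEAT_INTERVAL_MS,
          DFSConfigKeys.DFS_ROUTER_HEARTBEAT_INTERVAL_MS_DEFAULT);
      System.out.println("router heartbeat enabled=" + enabled
          + ", interval=" + intervalMs + " ms");
    }
  }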

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a9235b6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
index 47e1c0d..0ea5e3e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
@@ -1237,6 +1237,44 @@ public class DFSUtil {
   }
 
   /**
+   * Map a logical namenode ID to its web address. Use the given nameservice if
+   * specified, or the configured one if none is given.
+   *
+   * @param conf Configuration
+   * @param nsId which nameservice nnId is a part of, optional
+   * @param nnId the namenode ID to get the service addr for
+   * @return the service addr, null if it could not be determined
+   */
+  public static String getNamenodeWebAddr(final Configuration conf, String 
nsId,
+  String nnId) {
+
+if (nsId == null) {
+  nsId = getOnlyNameServiceIdOrNull(conf);
+}
+
+String webAddrKey = DFSUtilClient.concatSuffixes(
+DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, nsId, nnId);
+
+String webAddr =
+conf.get(webAddrKey, DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_DEFAULT);
+return webAddr;
+  }
+
+  /**
+   * Get all of the Web addresses of the individual NNs in a given nameservice.
+   *
+   * @param conf Configuration
+   * @param nsId the 

[19/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/1.1.0/CHANGES.1.1.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/1.1.0/CHANGES.1.1.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/1.1.0/CHANGES.1.1.0.md
index a475bb3..db317ba 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/1.1.0/CHANGES.1.1.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/1.1.0/CHANGES.1.1.0.md
@@ -24,176 +24,170 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8552](https://issues.apache.org/jira/browse/HADOOP-8552) | Conflict: 
Same security.log.file for multiple users. |  Major | conf, security | Karthik 
Kambatla | Karthik Kambatla |
-| [HADOOP-8365](https://issues.apache.org/jira/browse/HADOOP-8365) | Add flag 
to disable durable sync |  Blocker | . | Eli Collins | Eli Collins |
+| [HADOOP-5464](https://issues.apache.org/jira/browse/HADOOP-5464) | DFSClient 
does not treat write timeout of 0 properly |  Major | . | Raghu Angadi | Raghu 
Angadi |
+| [HDFS-3044](https://issues.apache.org/jira/browse/HDFS-3044) | fsck move 
should be non-destructive by default |  Major | namenode | Eli Collins | Colin 
P. McCabe |
+| [HADOOP-8154](https://issues.apache.org/jira/browse/HADOOP-8154) | 
DNS#getIPs shouldn't silently return the local host IP for bogus interface 
names |  Major | conf | Eli Collins | Eli Collins |
 | [HADOOP-8314](https://issues.apache.org/jira/browse/HADOOP-8314) | 
HttpServer#hasAdminAccess should return false if authorization is enabled but 
user is not authenticated |  Major | security | Alejandro Abdelnur | Alejandro 
Abdelnur |
 | [HADOOP-8230](https://issues.apache.org/jira/browse/HADOOP-8230) | Enable 
sync by default and disable append |  Major | . | Eli Collins | Eli Collins |
-| [HADOOP-8154](https://issues.apache.org/jira/browse/HADOOP-8154) | 
DNS#getIPs shouldn't silently return the local host IP for bogus interface 
names |  Major | conf | Eli Collins | Eli Collins |
-| [HADOOP-5464](https://issues.apache.org/jira/browse/HADOOP-5464) | DFSClient 
does not treat write timeout of 0 properly |  Major | . | Raghu Angadi | Raghu 
Angadi |
 | [HDFS-3522](https://issues.apache.org/jira/browse/HDFS-3522) | If NN is in 
safemode, it should throw SafeModeException when getBlockLocations has zero 
locations |  Major | namenode | Brandon Li | Brandon Li |
-| [HDFS-3044](https://issues.apache.org/jira/browse/HDFS-3044) | fsck move 
should be non-destructive by default |  Major | namenode | Eli Collins | Colin 
Patrick McCabe |
+| [HADOOP-8365](https://issues.apache.org/jira/browse/HADOOP-8365) | Add flag 
to disable durable sync |  Blocker | . | Eli Collins | Eli Collins |
+| [HADOOP-8552](https://issues.apache.org/jira/browse/HADOOP-8552) | Conflict: 
Same security.log.file for multiple users. |  Major | conf, security | Karthik 
Kambatla | Karthik Kambatla |
 | [HDFS-2617](https://issues.apache.org/jira/browse/HDFS-2617) | Replaced 
Kerberized SSL for image transfer and fsck with SPNEGO-based solution |  Major 
| security | Jakob Homan | Jakob Homan |
 
 
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-7823](https://issues.apache.org/jira/browse/HADOOP-7823) | port 
HADOOP-4012 to branch-1 (splitting support for bzip2) |  Major | . | Tim 
Broberg | Andrew Purtell |
+| [MAPREDUCE-3118](https://issues.apache.org/jira/browse/MAPREDUCE-3118) | 
Backport Gridmix and Rumen features from trunk to Hadoop 0.20 security branch | 
 Major | contrib/gridmix, tools/rumen | Ravi Gummadi | Ravi Gummadi |
 | [HADOOP-7806](https://issues.apache.org/jira/browse/HADOOP-7806) | Support 
binding to sub-interfaces |  Major | util | Harsh J | Harsh J |
-| [HDFS-3150](https://issues.apache.org/jira/browse/HDFS-3150) | Add option 
for clients to contact DNs via hostname |  Major | datanode, hdfs-client | Eli 
Collins | Eli Collins |
 | [HDFS-3148](https://issues.apache.org/jira/browse/HDFS-3148) | The client 
should be able to use multiple local interfaces for data transfer |  Major | 
hdfs-client, performance | Eli Collins | Eli Collins |
-| [HDFS-3055](https://issues.apache.org/jira/browse/HDFS-3055) | Implement 
recovery mode for branch-1 |  Minor | . | Colin Patrick McCabe | Colin Patrick 
McCabe |
+| [HDFS-3055](https://issues.apache.org/jira/browse/HDFS-3055) | Implement 
recovery mode for branch-1 |  Minor | . | Colin P. McCabe | Colin P. McCabe |
 | [MAPREDUCE-3837](https://issues.apache.org/jira/browse/MAPREDUCE-3837) | Job 
tracker is not able to recover job in case of crash 

[40/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
index 187b087..a7c0fb2 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
@@ -23,281 +23,284 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-4466](https://issues.apache.org/jira/browse/HADOOP-4466) | *Blocker* 
| **SequenceFileOutputFormat is coupled to WritableComparable and Writable**
+* [HADOOP-3595](https://issues.apache.org/jira/browse/HADOOP-3595) | *Major* | 
**Remove deprecated mapred.combine.once functionality**
 
-Ensure that SequenceFileOutputFormat isn't tied to Writables and can be used 
with other Serialization frameworks.
+ Removed deprecated methods for mapred.combine.once functionality.
 
 
 ---
 
-* [HADOOP-4433](https://issues.apache.org/jira/browse/HADOOP-4433) | *Major* | 
**Improve data loader for collecting metrics and log files from hadoop and 
system**
+* [HADOOP-2664](https://issues.apache.org/jira/browse/HADOOP-2664) | *Major* | 
**lzop-compatible CompresionCodec**
 
-- Added startup and shutdown script
-- Added torque metrics data loader
-- Improve handling of Exec Plugin
-- Added Test cases for File Tailing Adaptors
-- Added Test cases for Start streaming at specific offset
+Introduced LZOP codec.
 
 
 ---
 
-* [HADOOP-4430](https://issues.apache.org/jira/browse/HADOOP-4430) | *Blocker* 
| **Namenode Web UI capacity report is inconsistent with Balancer**
+* [HADOOP-3667](https://issues.apache.org/jira/browse/HADOOP-3667) | *Major* | 
**Remove deprecated methods in JobConf**
 
-Changed reporting in the NameNode Web UI to more closely reflect the behavior 
of the re-balancer. Removed no longer used config parameter dfs.datanode.du.pct 
from hadoop-default.xml.
+Removed the following deprecated methods from JobConf:
+  addInputPath(Path)
+  getInputPaths()
+  getMapOutputCompressionType()
+  getOutputPath()
+  getSystemDir()
+  setInputPath(Path)
+  setMapOutputCompressionType(CompressionType style)
+  setOutputPath(Path)
 
 
 ---
 
-* [HADOOP-4293](https://issues.apache.org/jira/browse/HADOOP-4293) | *Major* | 
**Remove WritableJobConf**
+* [HADOOP-3652](https://issues.apache.org/jira/browse/HADOOP-3652) | *Major* | 
**Remove deprecated class OutputFormatBase**
 
-Made Configuration Writable and rename the old write method to writeXml.
+Removed deprecated org.apache.hadoop.mapred.OutputFormatBase.
 
 
 ---
 
-* [HADOOP-4281](https://issues.apache.org/jira/browse/HADOOP-4281) | *Blocker* 
| **Capacity reported in some of the commands is not consistent with the Web UI 
reported data**
+* [HADOOP-2325](https://issues.apache.org/jira/browse/HADOOP-2325) | *Major* | 
**Require Java 6**
 
-Changed command "hadoop dfsadmin -report" to be consistent with Web UI for 
both Namenode and Datanode reports. "Total raw bytes" is changed to "Configured 
Capacity". "Present Capacity" is newly added to indicate the present capacity 
of the DFS. "Remaining raw bytes" is changed to "DFS Remaining". "Used raw 
bytes" is changed to "DFS Used". "% used" is changed to "DFS Used%". 
Applications that parse command output should be reviewed.
+Hadoop now requires Java 6.
 
 
 ---
 
-* [HADOOP-4227](https://issues.apache.org/jira/browse/HADOOP-4227) | *Minor* | 
**Remove the deprecated, unused class ShellCommand.**
+* [HADOOP-3695](https://issues.apache.org/jira/browse/HADOOP-3695) | *Major* | 
**[HOD] Have an ability to run multiple slaves per node**
 
-Removed the deprecated class org.apache.hadoop.fs.ShellCommand.
+Added an ability in HOD to start multiple workers (TaskTrackers and/or 
DataNodes) per node to assist testing and simulation of scale. A configuration 
variable ringmaster.workers\_per\_ring was added to specify the number of 
workers to start.
 
 
 ---
 
-* [HADOOP-4205](https://issues.apache.org/jira/browse/HADOOP-4205) | *Major* | 
**[Hive] metastore and ql to use the refactored SerDe library**
+* [HADOOP-3149](https://issues.apache.org/jira/browse/HADOOP-3149) | *Major* | 
**supporting multiple outputs for M/R jobs**
 
-Improved Hive metastore and ql to use the refactored SerDe library.
+Introduced MultipleOutputs class so Map/Reduce jobs can write data to 
different output files. Each output can use a different OutputFormat. Output 
files are created within the job output directory. 
FileOutputFormat.getPathForCustomFile() creates a filename under the outputdir 
that is named with the task ID and task type (i.e. myfile-r-1).
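
A rough usage sketch of MultipleOutputs with the old mapred API; the method names (addNamedOutput, getCollector) are recalled from that API generation and should be checked against the release javadoc:

  import java.io.IOException;
  import java.util.Iterator;

  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reducer;
  import org.apache.hadoop.mapred.Reporter;
  import org.apache.hadoop.mapred.lib.MultipleOutputs;

  // Reducer that writes an extra named output ("errors") alongside its normal output.
  // Job setup (not shown) would call, for example:
  //   MultipleOutputs.addNamedOutput(job, "errors",
  //       org.apache.hadoop.mapred.TextOutputFormat.class, Text.class, LongWritable.class);
  public class MultipleOutputsSketch extends MapReduceBase
      implements Reducer<Text, LongWritable, Text, LongWritable> {

    private MultipleOutputs mos;

    @Override
    public void configure(JobConf job) {
      mos = new MultipleOutputs(job);
    }

    @Override
    public void reduce(Text key, Iterator<LongWritable> values,
        OutputCollector<Text, LongWritable> output, Reporter reporter)
        throws IOException {
      long sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new LongWritable(sum));
      if (sum == 0) {
        // Files for this output land in the job output directory, e.g. errors-r-00001.
        mos.getCollector("errors", reporter).collect(key, new LongWritable(sum));
      }
    }

    @Override
    public void close() throws IOException {
      mos.close();
    }
  }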
 
 
 

[22/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.7/CHANGES.0.23.7.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.7/CHANGES.0.23.7.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.7/CHANGES.0.23.7.md
index d67a6b7..17f5be0 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.7/CHANGES.0.23.7.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.7/CHANGES.0.23.7.md
@@ -27,12 +27,6 @@
 | [HDFS-395](https://issues.apache.org/jira/browse/HDFS-395) | DFS 
Scalability: Incremental block reports |  Major | datanode, namenode | dhruba 
borthakur | Tomasz Nykiel |
 
 
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
@@ -44,158 +38,152 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9379](https://issues.apache.org/jira/browse/HADOOP-9379) | capture 
the ulimit info after printing the log to the console |  Trivial | . | Arpit 
Gupta | Arpit Gupta |
-| [HADOOP-9374](https://issues.apache.org/jira/browse/HADOOP-9374) | Add 
tokens from -tokenCacheFile into UGI |  Major | security | Daryn Sharp | Daryn 
Sharp |
-| [HADOOP-9352](https://issues.apache.org/jira/browse/HADOOP-9352) | Expose 
UGI.setLoginUser for tests |  Major | security | Daryn Sharp | Daryn Sharp |
-| [HADOOP-9336](https://issues.apache.org/jira/browse/HADOOP-9336) | Allow UGI 
of current connection to be queried |  Critical | ipc | Daryn Sharp | Daryn 
Sharp |
-| [HADOOP-9253](https://issues.apache.org/jira/browse/HADOOP-9253) | Capture 
ulimit info in the logs at service start time |  Major | . | Arpit Gupta | 
Arpit Gupta |
-| [HADOOP-9247](https://issues.apache.org/jira/browse/HADOOP-9247) | 
parametrize Clover "generateXxx" properties to make them re-definable via -D in 
mvn calls |  Minor | . | Ivan A. Veselovsky | Ivan A. Veselovsky |
-| [HADOOP-9216](https://issues.apache.org/jira/browse/HADOOP-9216) | 
CompressionCodecFactory#getCodecClasses should trim the result of parsing by 
Configuration. |  Major | io | Tsuyoshi Ozawa | Tsuyoshi Ozawa |
-| [HADOOP-9147](https://issues.apache.org/jira/browse/HADOOP-9147) | Add 
missing fields to FIleStatus.toString |  Trivial | . | Jonathan Allen | 
Jonathan Allen |
-| [HADOOP-8849](https://issues.apache.org/jira/browse/HADOOP-8849) | 
FileUtil#fullyDelete should grant the target directories +rwx permissions 
before trying to delete them |  Minor | . | Ivan A. Veselovsky | Ivan A. 
Veselovsky |
-| [HADOOP-8711](https://issues.apache.org/jira/browse/HADOOP-8711) | provide 
an option for IPC server users to avoid printing stack information for certain 
exceptions |  Major | ipc | Brandon Li | Brandon Li |
-| [HADOOP-8462](https://issues.apache.org/jira/browse/HADOOP-8462) | 
Native-code implementation of bzip2 codec |  Major | io | Govind Kamat | Govind 
Kamat |
-| [HADOOP-8214](https://issues.apache.org/jira/browse/HADOOP-8214) | make 
hadoop script recognize a full set of deprecated commands |  Major | scripts | 
Roman Shaposhnik | Roman Shaposhnik |
-| [HADOOP-8075](https://issues.apache.org/jira/browse/HADOOP-8075) | Lower 
native-hadoop library log from info to debug |  Major | native | Eli Collins | 
Hızır Sefa İrken |
-| [HADOOP-7886](https://issues.apache.org/jira/browse/HADOOP-7886) | Add 
toString to FileStatus |  Minor | . | Jakob Homan | SreeHari |
 | [HADOOP-7358](https://issues.apache.org/jira/browse/HADOOP-7358) | Improve 
log levels when exceptions caught in RPC handler |  Minor | ipc | Todd Lipcon | 
Todd Lipcon |
+| [HADOOP-7886](https://issues.apache.org/jira/browse/HADOOP-7886) | Add 
toString to FileStatus |  Minor | . | Jakob Homan | SreeHari |
+| [HADOOP-8214](https://issues.apache.org/jira/browse/HADOOP-8214) | make 
hadoop script recognize a full set of deprecated commands |  Major | scripts | 
Roman Shaposhnik | Roman Shaposhnik |
+| [HADOOP-8711](https://issues.apache.org/jira/browse/HADOOP-8711) | provide 
an option for IPC server users to avoid printing stack information for certain 
exceptions |  Major | ipc | Brandon Li | Brandon Li |
 | [HDFS-3817](https://issues.apache.org/jira/browse/HDFS-3817) | avoid 
printing stack information for SafeModeException |  Major | namenode | Brandon 
Li | Brandon Li |
-| [MAPREDUCE-5079](https://issues.apache.org/jira/browse/MAPREDUCE-5079) | 
Recovery should restore task state from job history info directly |  Critical | 
mr-am | Jason Lowe | Jason Lowe |
-| [MAPREDUCE-4990](https://issues.apache.org/jira/browse/MAPREDUCE-4990) | 
Construct debug strings conditionally in ShuffleHandler.Shuffle#sendMapOutput() 
|  Trivial | . | 

[29/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/CHANGES.0.23.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/CHANGES.0.23.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/CHANGES.0.23.0.md
index cefa86d..9fa1489 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/CHANGES.0.23.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/CHANGES.0.23.0.md
@@ -24,1165 +24,1159 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-7547](https://issues.apache.org/jira/browse/HADOOP-7547) | Fix the 
warning in writable classes.[ WritableComparable is a raw type. References to 
generic type WritableComparable\ should be parameterized  ] |  Minor | io | 
Uma Maheswara Rao G | Uma Maheswara Rao G |
-| [HADOOP-7507](https://issues.apache.org/jira/browse/HADOOP-7507) | jvm 
metrics all use the same namespace |  Major | metrics | Jeff Bean | Alejandro 
Abdelnur |
-| [HADOOP-7374](https://issues.apache.org/jira/browse/HADOOP-7374) | Don't add 
tools.jar to the classpath when running Hadoop |  Major | scripts | Eli Collins 
| Eli Collins |
-| [HADOOP-7331](https://issues.apache.org/jira/browse/HADOOP-7331) | Make 
hadoop-daemon.sh to return 1 if daemon processes did not get started |  Trivial 
| scripts | Tanping Wang | Tanping Wang |
-| [HADOOP-7286](https://issues.apache.org/jira/browse/HADOOP-7286) | Refactor 
FsShell's du/dus/df |  Major | fs | Daryn Sharp | Daryn Sharp |
-| [HADOOP-7264](https://issues.apache.org/jira/browse/HADOOP-7264) | Bump avro 
version to at least 1.4.1 |  Major | io | Luke Lu | Luke Lu |
-| [HADOOP-7227](https://issues.apache.org/jira/browse/HADOOP-7227) | Remove 
protocol version check at proxy creation in Hadoop RPC. |  Major | ipc | 
Jitendra Nath Pandey | Jitendra Nath Pandey |
-| [HADOOP-7153](https://issues.apache.org/jira/browse/HADOOP-7153) | 
MapWritable violates contract of Map interface for equals() and hashCode() |  
Minor | io | Nicholas Telford | Nicholas Telford |
-| [HADOOP-7136](https://issues.apache.org/jira/browse/HADOOP-7136) | Remove 
failmon contrib |  Major | . | Nigel Daley | Nigel Daley |
-| [HADOOP-6949](https://issues.apache.org/jira/browse/HADOOP-6949) | Reduces 
RPC packet size for primitive arrays, especially long[], which is used at block 
reporting |  Major | io | Navis | Matt Foley |
-| [HADOOP-6921](https://issues.apache.org/jira/browse/HADOOP-6921) | metrics2: 
metrics plugins |  Major | . | Luke Lu | Luke Lu |
-| [HADOOP-6920](https://issues.apache.org/jira/browse/HADOOP-6920) | Metrics2: 
metrics instrumentation |  Major | . | Luke Lu | Luke Lu |
-| [HADOOP-6904](https://issues.apache.org/jira/browse/HADOOP-6904) | A baby 
step towards inter-version RPC communications |  Major | ipc | Hairong Kuang | 
Hairong Kuang |
+| [HDFS-1526](https://issues.apache.org/jira/browse/HDFS-1526) | Dfs client 
name for a map/reduce task should have some randomness |  Major | hdfs-client | 
Hairong Kuang | Hairong Kuang |
+| [HDFS-1560](https://issues.apache.org/jira/browse/HDFS-1560) | dfs.data.dir 
permissions should default to 700 |  Minor | datanode | Todd Lipcon | Todd 
Lipcon |
+| [HDFS-1536](https://issues.apache.org/jira/browse/HDFS-1536) | Improve HDFS 
WebUI |  Major | . | Hairong Kuang | Hairong Kuang |
 | [HADOOP-6864](https://issues.apache.org/jira/browse/HADOOP-6864) | Provide a 
JNI-based implementation of ShellBasedUnixGroupsNetgroupMapping (implementation 
of GroupMappingServiceProvider) |  Major | security | Erik Steffl | Boris 
Shkolnik |
+| [HADOOP-6904](https://issues.apache.org/jira/browse/HADOOP-6904) | A baby 
step towards inter-version RPC communications |  Major | ipc | Hairong Kuang | 
Hairong Kuang |
 | [HADOOP-6432](https://issues.apache.org/jira/browse/HADOOP-6432) | 
Statistics support in FileContext |  Major | . | Jitendra Nath Pandey | 
Jitendra Nath Pandey |
-| [HADOOP-6255](https://issues.apache.org/jira/browse/HADOOP-6255) | Create an 
rpm integration project |  Major | . | Owen O'Malley | Eric Yang |
-| [HADOOP-2081](https://issues.apache.org/jira/browse/HADOOP-2081) | 
Configuration getInt, getLong, and getFloat replace invalid numbers with the 
default value |  Major | conf | Owen O'Malley | Harsh J |
-| [HDFS-2210](https://issues.apache.org/jira/browse/HDFS-2210) | Remove 
hdfsproxy |  Major | contrib/hdfsproxy | Eli Collins | Eli Collins |
-| [HDFS-2202](https://issues.apache.org/jira/browse/HDFS-2202) | Changes to 
balancer bandwidth should not require datanode restart. |  Major | balancer & 
mover, datanode | Eric Payne | Eric Payne |
-| [HDFS-2107](https://issues.apache.org/jira/browse/HDFS-2107) | Move block 
management code to a package |  Major | namenode | Tsz Wo Nicholas Sze | 

[27/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/CHANGES.0.23.1.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/CHANGES.0.23.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/CHANGES.0.23.1.md
index dd31769..c9172b5 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/CHANGES.0.23.1.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/CHANGES.0.23.1.md
@@ -24,467 +24,461 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8013](https://issues.apache.org/jira/browse/HADOOP-8013) | 
ViewFileSystem does not honor setVerifyChecksum |  Major | fs | Daryn Sharp | 
Daryn Sharp |
-| [HADOOP-7470](https://issues.apache.org/jira/browse/HADOOP-7470) | move up 
to Jackson 1.8.8 |  Minor | util | Steve Loughran | Enis Soztutar |
 | [HADOOP-7348](https://issues.apache.org/jira/browse/HADOOP-7348) | Modify 
the option of FsShell getmerge from [addnl] to [-nl] for consistency |  Major | 
fs | XieXianshan | XieXianshan |
 | [MAPREDUCE-3720](https://issues.apache.org/jira/browse/MAPREDUCE-3720) | 
Command line listJobs should not visit each AM |  Major | client, mrv2 | Vinod 
Kumar Vavilapalli | Vinod Kumar Vavilapalli |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-7470](https://issues.apache.org/jira/browse/HADOOP-7470) | move up 
to Jackson 1.8.8 |  Minor | util | Steve Loughran | Enis Soztutar |
+| [HADOOP-8013](https://issues.apache.org/jira/browse/HADOOP-8013) | 
ViewFileSystem does not honor setVerifyChecksum |  Major | fs | Daryn Sharp | 
Daryn Sharp |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
+| [MAPREDUCE-778](https://issues.apache.org/jira/browse/MAPREDUCE-778) | 
[Rumen] Need a standalone JobHistory log anonymizer |  Major | tools/rumen | 
Hong Tang | Amar Kamat |
 | [HADOOP-7808](https://issues.apache.org/jira/browse/HADOOP-7808) | Port 
token service changes from 205 |  Major | fs, security | Daryn Sharp | Daryn 
Sharp |
-| [HDFS-2316](https://issues.apache.org/jira/browse/HDFS-2316) | [umbrella] 
WebHDFS: a complete FileSystem implementation for accessing HDFS over HTTP |  
Major | webhdfs | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
 | [MAPREDUCE-2765](https://issues.apache.org/jira/browse/MAPREDUCE-2765) | 
DistCp Rewrite |  Major | distcp, mrv2 | Mithun Radhakrishnan | Mithun 
Radhakrishnan |
-| [MAPREDUCE-778](https://issues.apache.org/jira/browse/MAPREDUCE-778) | 
[Rumen] Need a standalone JobHistory log anonymizer |  Major | tools/rumen | 
Hong Tang | Amar Kamat |
+| [HDFS-2316](https://issues.apache.org/jira/browse/HDFS-2316) | [umbrella] 
WebHDFS: a complete FileSystem implementation for accessing HDFS over HTTP |  
Major | webhdfs | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8027](https://issues.apache.org/jira/browse/HADOOP-8027) | Visiting 
/jmx on the daemon web interfaces may print unnecessary error in logs |  Minor 
| metrics | Harsh J | Aaron T. Myers |
-| [HADOOP-8015](https://issues.apache.org/jira/browse/HADOOP-8015) | 
ChRootFileSystem should extend FilterFileSystem |  Major | fs | Daryn Sharp | 
Daryn Sharp |
-| [HADOOP-8009](https://issues.apache.org/jira/browse/HADOOP-8009) | Create 
hadoop-client and hadoop-minicluster artifacts for downstream projects |  
Critical | build | Alejandro Abdelnur | Alejandro Abdelnur |
-| [HADOOP-7987](https://issues.apache.org/jira/browse/HADOOP-7987) | Support 
setting the run-as user in unsecure mode |  Major | security | Devaraj Das | 
Jitendra Nath Pandey |
-| [HADOOP-7939](https://issues.apache.org/jira/browse/HADOOP-7939) | Improve 
Hadoop subcomponent integration in Hadoop 0.23 |  Major | build, conf, 
documentation, scripts | Roman Shaposhnik | Roman Shaposhnik |
-| [HADOOP-7934](https://issues.apache.org/jira/browse/HADOOP-7934) | Normalize 
dependencies versions across all modules |  Critical | build | Alejandro 
Abdelnur | Alejandro Abdelnur |
-| [HADOOP-7919](https://issues.apache.org/jira/browse/HADOOP-7919) | [Doc] 
Remove hadoop.logfile.\* properties. |  Trivial | documentation | Harsh J | 
Harsh J |
-| [HADOOP-7910](https://issues.apache.org/jira/browse/HADOOP-7910) | add 
configuration methods to handle human readable size values |  Minor | conf | 
Sho Shimauchi | Sho Shimauchi |
-| [HADOOP-7890](https://issues.apache.org/jira/browse/HADOOP-7890) | Redirect 
hadoop script's deprecation message to stderr |  Trivial | scripts | Koji 
Noguchi | Koji 

[57/73] [abbrv] hadoop git commit: HDFS-10882. Federation State Store Interface API. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-10882. Federation State Store Interface API. Contributed by Jason Kace and 
Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4319a10a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4319a10a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4319a10a

Branch: refs/heads/HDFS-10467
Commit: 4319a10a4743308541d4bee0eab55971720d9e5b
Parents: 7b24a8d
Author: Inigo 
Authored: Thu Apr 6 19:18:52 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:53 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  11 ++
 .../server/federation/store/RecordStore.java| 100 
 .../store/driver/StateStoreSerializer.java  | 119 +++
 .../driver/impl/StateStoreSerializerPBImpl.java | 115 ++
 .../store/records/impl/pb/PBRecord.java |  47 
 .../store/records/impl/pb/package-info.java |  29 +
 .../src/main/resources/hdfs-default.xml |   8 ++
 7 files changed, 429 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4319a10a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index ce0a17a..7623839 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant;
 import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker;
+import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;
 import org.apache.hadoop.http.HttpConfig;
 
 /** 
@@ -1123,6 +1124,16 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final String FEDERATION_NAMENODE_RESOLVER_CLIENT_CLASS_DEFAULT 
=
   "org.apache.hadoop.hdfs.server.federation.MockResolver";
 
+  // HDFS Router-based federation State Store
+  public static final String FEDERATION_STORE_PREFIX =
+  FEDERATION_ROUTER_PREFIX + "store.";
+
+  public static final String FEDERATION_STORE_SERIALIZER_CLASS =
+  DFSConfigKeys.FEDERATION_STORE_PREFIX + "serializer";
+  public static final Class<StateStoreSerializerPBImpl>
+  FEDERATION_STORE_SERIALIZER_CLASS_DEFAULT =
+  StateStoreSerializerPBImpl.class;
+
   // dfs.client.retry confs are moved to HdfsClientConfigKeys.Retry 
   @Deprecated
   public static final String  DFS_CLIENT_RETRY_POLICY_ENABLED_KEY
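
For readers following the State Store work, here is a minimal sketch of how the new
serializer key could be overridden from client code. It relies only on the constant and
class names visible in the diff above; the standalone main() wrapper and the assumption
that StateStoreSerializerPBImpl is assignable to StateStoreSerializer (as the naming
suggests) are illustrative, not taken from this commit.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;

public class SerializerConfigSketch {
  public static void main(String[] args) {
    // Start from the stock HDFS configuration (hdfs-default.xml + hdfs-site.xml).
    Configuration conf = new HdfsConfiguration();
    // Pin the State Store serializer explicitly; per the diff above this is
    // already the compiled-in default, so the call is only an illustration.
    conf.setClass(DFSConfigKeys.FEDERATION_STORE_SERIALIZER_CLASS,
        StateStoreSerializerPBImpl.class, StateStoreSerializer.class);
    System.out.println(conf.get(DFSConfigKeys.FEDERATION_STORE_SERIALIZER_CLASS));
  }
}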

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4319a10a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java
new file mode 100644
index 000..524f432
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store;
+
+import java.lang.reflect.Constructor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import 

[73/73] [abbrv] hadoop git commit: HDFS-10646. Federation admin tool. Contributed by Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-10646. Federation admin tool. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/485c7b93
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/485c7b93
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/485c7b93

Branch: refs/heads/HDFS-10467
Commit: 485c7b931573d8ba127b978249d2722a11504d78
Parents: f2f761d
Author: Inigo Goiri 
Authored: Tue Aug 8 14:44:43 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:56 2017 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   1 +
 .../hadoop-hdfs/src/main/bin/hdfs   |   5 +
 .../hadoop-hdfs/src/main/bin/hdfs.cmd   |   7 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  19 ++
 .../hdfs/protocolPB/RouterAdminProtocolPB.java  |  44 +++
 ...uterAdminProtocolServerSideTranslatorPB.java | 151 
 .../RouterAdminProtocolTranslatorPB.java| 150 
 .../resolver/MembershipNamenodeResolver.java|  34 +-
 .../hdfs/server/federation/router/Router.java   |  52 +++
 .../federation/router/RouterAdminServer.java| 183 ++
 .../server/federation/router/RouterClient.java  |  76 +
 .../hdfs/tools/federation/RouterAdmin.java  | 341 +++
 .../hdfs/tools/federation/package-info.java |  28 ++
 .../src/main/proto/RouterProtocol.proto |  47 +++
 .../src/main/resources/hdfs-default.xml |  46 +++
 .../server/federation/RouterConfigBuilder.java  |  26 ++
 .../server/federation/RouterDFSCluster.java |  43 ++-
 .../server/federation/StateStoreDFSCluster.java | 148 
 .../federation/router/TestRouterAdmin.java  | 261 ++
 19 files changed, 1644 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/485c7b93/hadoop-hdfs-project/hadoop-hdfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index 81e5fdf..360aeae 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -332,6 +332,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
   editlog.proto
   fsimage.proto
   FederationProtocol.proto
+  RouterProtocol.proto
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/485c7b93/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
index b1f44a4..d51a8e2 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
@@ -31,6 +31,7 @@ function hadoop_usage
   hadoop_add_option "--hosts filename" "list of hosts to use in worker mode"
   hadoop_add_option "--workers" "turn on worker mode"
 
+<<< HEAD
   hadoop_add_subcommand "balancer" daemon "run a cluster balancing utility"
   hadoop_add_subcommand "cacheadmin" admin "configure the HDFS cache"
   hadoop_add_subcommand "classpath" client "prints the class path needed to 
get the hadoop jar and the required libraries"
@@ -42,6 +43,7 @@ function hadoop_usage
   hadoop_add_subcommand "diskbalancer" daemon "Distributes data evenly among 
disks on a given node"
   hadoop_add_subcommand "envvars" client "display computed Hadoop environment 
variables"
   hadoop_add_subcommand "ec" admin "run a HDFS ErasureCoding CLI"
+  hadoop_add_subcommand "federation" admin "manage Router-based federation"
   hadoop_add_subcommand "fetchdt" client "fetch a delegation token from the 
NameNode"
   hadoop_add_subcommand "fsck" admin "run a DFS filesystem checking utility"
   hadoop_add_subcommand "getconf" client "get config values from configuration"
@@ -181,6 +183,9 @@ function hdfscmd_case
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   HADOOP_CLASSNAME='org.apache.hadoop.hdfs.server.federation.router.Router'
 ;;
+federation)
+  HADOOP_CLASSNAME='org.apache.hadoop.hdfs.tools.federation.RouterAdmin'
+;;
 secondarynamenode)
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   
HADOOP_CLASSNAME='org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'
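
For orientation, a hedged sketch of driving the same RouterAdmin class that the new
"federation" subcommand dispatches to. The Tool/ToolRunner wiring, the Configuration
constructor, and the -ls/-add flags shown in the comments are assumptions based on the
usual pattern for HDFS admin tools, not confirmed by this diff.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.tools.federation.RouterAdmin;
import org.apache.hadoop.util.ToolRunner;

public class FederationAdminSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Assumes RouterAdmin implements Tool and accepts a Configuration.
    // Roughly what the new shell entry point does, e.g. (hypothetical flags):
    //   hdfs federation -add /data ns1 /data
    //   hdfs federation -ls /
    int rc = ToolRunner.run(conf, new RouterAdmin(conf), new String[] {"-ls", "/"});
    System.exit(rc);
  }
}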

http://git-wip-us.apache.org/repos/asf/hadoop/blob/485c7b93/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
index b9853d6..53bdf70 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
+++ 

[49/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
index c432600..7470dc8 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
@@ -20,211 +20,195 @@
 
 ## Release 0.14.0 - 2007-08-20
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-1597](https://issues.apache.org/jira/browse/HADOOP-1597) | 
Distributed upgrade status reporting and post upgrade features. |  Blocker | . 
| Konstantin Shvachko | Konstantin Shvachko |
-| [HADOOP-1570](https://issues.apache.org/jira/browse/HADOOP-1570) | Add a 
per-job configuration knob to control loading of native hadoop libraries |  
Major | io | Arun C Murthy | Arun C Murthy |
-| [HADOOP-1568](https://issues.apache.org/jira/browse/HADOOP-1568) | NameNode 
Schema for HttpFileSystem |  Major | fs | Chris Douglas | Chris Douglas |
-| [HADOOP-1562](https://issues.apache.org/jira/browse/HADOOP-1562) | Report 
Java VM metrics |  Major | metrics | David Bowen | David Bowen |
+| [HADOOP-234](https://issues.apache.org/jira/browse/HADOOP-234) | Hadoop 
Pipes for writing map/reduce jobs in C++ and python |  Major | . | Sanjay 
Dahiya | Owen O'Malley |
+| [HADOOP-1379](https://issues.apache.org/jira/browse/HADOOP-1379) | Integrate 
Findbugs into nightly build process |  Major | test | Nigel Daley | Nigel Daley 
|
+| [HADOOP-1447](https://issues.apache.org/jira/browse/HADOOP-1447) | Support 
for textInputFormat in contrib/data\_join |  Minor | . | Senthil Subramanian | 
Senthil Subramanian |
+| [HADOOP-1469](https://issues.apache.org/jira/browse/HADOOP-1469) | 
Asynchronous table creation |  Minor | . | James Kennedy | stack |
+| [HADOOP-1377](https://issues.apache.org/jira/browse/HADOOP-1377) | Creation 
time and modification time for hadoop files and directories |  Major | . | 
dhruba borthakur | dhruba borthakur |
 | [HADOOP-1515](https://issues.apache.org/jira/browse/HADOOP-1515) | 
MultiFileSplit, MultiFileInputFormat |  Major | . | Enis Soztutar | Enis 
Soztutar |
 | [HADOOP-1508](https://issues.apache.org/jira/browse/HADOOP-1508) | ant Task 
for FsShell operations |  Minor | build, fs | Chris Douglas | Chris Douglas |
-| [HADOOP-1469](https://issues.apache.org/jira/browse/HADOOP-1469) | 
Asynchronous table creation |  Minor | . | James Kennedy | stack |
-| [HADOOP-1447](https://issues.apache.org/jira/browse/HADOOP-1447) | Support 
for textInputFormat in contrib/data\_join |  Minor | . | Senthil Subramanian | 
Senthil Subramanian |
-| [HADOOP-1437](https://issues.apache.org/jira/browse/HADOOP-1437) | Eclipse 
plugin for developing and executing MapReduce programs on Hadoop |  Major | . | 
Eugene Hung | Christophe Taton |
+| [HADOOP-1570](https://issues.apache.org/jira/browse/HADOOP-1570) | Add a 
per-job configuration knob to control loading of native hadoop libraries |  
Major | io | Arun C Murthy | Arun C Murthy |
 | [HADOOP-1433](https://issues.apache.org/jira/browse/HADOOP-1433) | Add job 
priority |  Minor | . | Johan Oskarsson | Johan Oskarsson |
-| [HADOOP-1379](https://issues.apache.org/jira/browse/HADOOP-1379) | Integrate 
Findbugs into nightly build process |  Major | test | Nigel Daley | Nigel Daley 
|
-| [HADOOP-1377](https://issues.apache.org/jira/browse/HADOOP-1377) | Creation 
time and modification time for hadoop files and directories |  Major | . | 
dhruba borthakur | dhruba borthakur |
+| [HADOOP-1597](https://issues.apache.org/jira/browse/HADOOP-1597) | 
Distributed upgrade status reporting and post upgrade features. |  Blocker | . 
| Konstantin Shvachko | Konstantin Shvachko |
+| [HADOOP-1562](https://issues.apache.org/jira/browse/HADOOP-1562) | Report 
Java VM metrics |  Major | metrics | David Bowen | David Bowen |
 | [HADOOP-1134](https://issues.apache.org/jira/browse/HADOOP-1134) | Block 
level CRCs in HDFS |  Major | . | Raghu Angadi | Raghu Angadi |
-| [HADOOP-234](https://issues.apache.org/jira/browse/HADOOP-234) | Hadoop 
Pipes for writing map/reduce jobs in C++ and python |  Major | . | Sanjay 
Dahiya | Owen O'Malley |
+| [HADOOP-1568](https://issues.apache.org/jira/browse/HADOOP-1568) | NameNode 
Schema for HttpFileSystem |  Major | fs | Chris Douglas | Chris Douglas |
+| 

[24/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/CHANGES.0.23.3.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/CHANGES.0.23.3.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/CHANGES.0.23.3.md
index 9b50eaf..4c8912e 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/CHANGES.0.23.3.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/CHANGES.0.23.3.md
@@ -24,17 +24,11 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8551](https://issues.apache.org/jira/browse/HADOOP-8551) | fs -mkdir 
creates parent directories without the -p option |  Major | fs | Robert Joseph 
Evans | John George |
-| [HDFS-3318](https://issues.apache.org/jira/browse/HDFS-3318) | Hftp hangs on 
transfers \>2GB |  Blocker | hdfs-client | Daryn Sharp | Daryn Sharp |
-| [MAPREDUCE-4311](https://issues.apache.org/jira/browse/MAPREDUCE-4311) | 
Capacity scheduler.xml does not accept decimal values for capacity and 
maximum-capacity settings |  Major | capacity-sched, mrv2 | Thomas Graves | 
Karthik Kambatla |
 | [MAPREDUCE-4072](https://issues.apache.org/jira/browse/MAPREDUCE-4072) | 
User set java.library.path seems to overwrite default creating problems native 
lib loading |  Major | mrv2 | Anupam Seth | Anupam Seth |
 | [MAPREDUCE-3812](https://issues.apache.org/jira/browse/MAPREDUCE-3812) | 
Lower default allocation sizes, fix allocation configurations and document them 
|  Major | mrv2, performance | Vinod Kumar Vavilapalli | Harsh J |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HDFS-3318](https://issues.apache.org/jira/browse/HDFS-3318) | Hftp hangs on 
transfers \>2GB |  Blocker | hdfs-client | Daryn Sharp | Daryn Sharp |
+| [MAPREDUCE-4311](https://issues.apache.org/jira/browse/MAPREDUCE-4311) | 
Capacity scheduler.xml does not accept decimal values for capacity and 
maximum-capacity settings |  Major | capacity-sched, mrv2 | Thomas Graves | 
Karthik Kambatla |
+| [HADOOP-8551](https://issues.apache.org/jira/browse/HADOOP-8551) | fs -mkdir 
creates parent directories without the -p option |  Major | fs | Robert Joseph 
Evans | John George |
 
 
 ### NEW FEATURES:
@@ -49,268 +43,268 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8700](https://issues.apache.org/jira/browse/HADOOP-8700) | Move the 
checksum type constants to an enum |  Minor | util | Tsz Wo Nicholas Sze | Tsz 
Wo Nicholas Sze |
-| [HADOOP-8635](https://issues.apache.org/jira/browse/HADOOP-8635) | Cannot 
cancel paths registered deleteOnExit |  Critical | fs | Daryn Sharp | Daryn 
Sharp |
-| [HADOOP-8535](https://issues.apache.org/jira/browse/HADOOP-8535) | Cut 
hadoop build times in half (upgrade maven-compiler-plugin to 2.5.1) |  Major | 
build | Jonathan Eagles | Jonathan Eagles |
-| [HADOOP-8525](https://issues.apache.org/jira/browse/HADOOP-8525) | Provide 
Improved Traceability for Configuration |  Trivial | . | Robert Joseph Evans | 
Robert Joseph Evans |
-| [HADOOP-8373](https://issues.apache.org/jira/browse/HADOOP-8373) | Port 
RPC.getServerAddress to 0.23 |  Major | ipc | Daryn Sharp | Daryn Sharp |
-| [HADOOP-8335](https://issues.apache.org/jira/browse/HADOOP-8335) | Improve 
Configuration's address handling |  Major | util | Daryn Sharp | Daryn Sharp |
-| [HADOOP-8286](https://issues.apache.org/jira/browse/HADOOP-8286) | Simplify 
getting a socket address from conf |  Major | conf | Daryn Sharp | Daryn Sharp |
-| [HADOOP-8242](https://issues.apache.org/jira/browse/HADOOP-8242) | 
AbstractDelegationTokenIdentifier: add getter methods for owner and realuser |  
Minor | . | Colin Patrick McCabe | Colin Patrick McCabe |
-| [HADOOP-8240](https://issues.apache.org/jira/browse/HADOOP-8240) | Allow 
users to specify a checksum type on create() |  Major | fs | Kihwal Lee | 
Kihwal Lee |
-| [HADOOP-8239](https://issues.apache.org/jira/browse/HADOOP-8239) | Extend 
MD5MD5CRC32FileChecksum to show the actual checksum type being used |  Major | 
fs | Kihwal Lee | Kihwal Lee |
-| [HADOOP-8227](https://issues.apache.org/jira/browse/HADOOP-8227) | Allow RPC 
to limit ephemeral port range. |  Blocker | . | Robert Joseph Evans | Robert 
Joseph Evans |
+| [HDFS-208](https://issues.apache.org/jira/browse/HDFS-208) | name node 
should warn if only one dir is listed in dfs.name.dir |  Minor | namenode | 
Allen Wittenauer | Uma Maheswara Rao G |
+| [MAPREDUCE-3935](https://issues.apache.org/jira/browse/MAPREDUCE-3935) | 
Annotate Counters.Counter and Counters.Group as @Public |  Major | client | Tom 
White | Tom White |
 | 

[50/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
index 40e402c..125ec55 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
@@ -20,98 +20,88 @@
 
 ## Release 0.12.0 - 2007-03-02
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-1032](https://issues.apache.org/jira/browse/HADOOP-1032) | Support 
for caching Job JARs |  Minor | . | Gautam Kowshik | Gautam Kowshik |
-| [HADOOP-492](https://issues.apache.org/jira/browse/HADOOP-492) | Global 
counters |  Major | . | arkady borkovsky | David Bowen |
 | [HADOOP-491](https://issues.apache.org/jira/browse/HADOOP-491) | streaming 
jobs should allow programs that don't do any IO for a long time |  Major | . | 
arkady borkovsky | Arun C Murthy |
+| [HADOOP-492](https://issues.apache.org/jira/browse/HADOOP-492) | Global 
counters |  Major | . | arkady borkovsky | David Bowen |
+| [HADOOP-1032](https://issues.apache.org/jira/browse/HADOOP-1032) | Support 
for caching Job JARs |  Minor | . | Gautam Kowshik | Gautam Kowshik |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-1043](https://issues.apache.org/jira/browse/HADOOP-1043) | Optimize 
the shuffle phase (increase the parallelism) |  Major | . | Devaraj Das | 
Devaraj Das |
-| [HADOOP-1042](https://issues.apache.org/jira/browse/HADOOP-1042) | Improve 
the handling of failed map output fetches |  Major | . | Devaraj Das | Devaraj 
Das |
-| [HADOOP-1041](https://issues.apache.org/jira/browse/HADOOP-1041) | Counter 
names are ugly |  Major | . | Owen O'Malley | David Bowen |
-| [HADOOP-1040](https://issues.apache.org/jira/browse/HADOOP-1040) | 
Improvement of RandomWriter example to use custom InputFormat, OutputFormat, 
and Counters |  Major | . | Owen O'Malley | Owen O'Malley |
-| [HADOOP-1033](https://issues.apache.org/jira/browse/HADOOP-1033) | Rewrite 
AmazonEC2 wiki page |  Minor | scripts | Tom White | Tom White |
-| [HADOOP-1030](https://issues.apache.org/jira/browse/HADOOP-1030) | in unit 
tests, set ipc timeout in one place |  Minor | test | Doug Cutting | Doug 
Cutting |
-| [HADOOP-1025](https://issues.apache.org/jira/browse/HADOOP-1025) | remove 
dead code in Server.java |  Minor | ipc | Doug Cutting | Doug Cutting |
-| [HADOOP-1017](https://issues.apache.org/jira/browse/HADOOP-1017) | 
Optimization: Reduce Overhead from ReflectionUtils.newInstance |  Major | util 
| Ron Bodkin |  |
+| [HADOOP-975](https://issues.apache.org/jira/browse/HADOOP-975) | Separation 
of user tasks' stdout and stderr streams |  Major | . | Arun C Murthy | Arun C 
Murthy |
+| [HADOOP-982](https://issues.apache.org/jira/browse/HADOOP-982) | A couple 
setter functions and toString method for BytesWritable. |  Major | io | Owen 
O'Malley | Owen O'Malley |
+| [HADOOP-858](https://issues.apache.org/jira/browse/HADOOP-858) | clean up 
smallJobsBenchmark and move to src/test/org/apache/hadoop/mapred |  Minor | 
build | Nigel Daley | Nigel Daley |
+| [HADOOP-954](https://issues.apache.org/jira/browse/HADOOP-954) | Metrics 
should offer complete set of static report methods or none at all |  Minor | 
metrics | Nigel Daley | David Bowen |
+| [HADOOP-882](https://issues.apache.org/jira/browse/HADOOP-882) | 
S3FileSystem should retry if there is a communication problem with S3 |  Major 
| fs | Tom White | Tom White |
+| [HADOOP-977](https://issues.apache.org/jira/browse/HADOOP-977) | The output 
from the user's task should be tagged and sent to the resepective console 
streams. |  Major | . | Owen O'Malley | Arun C Murthy |
 | [HADOOP-1007](https://issues.apache.org/jira/browse/HADOOP-1007) | Names 
used for map, reduce, and shuffle metrics should be unique |  Trivial | metrics 
| Nigel Daley | Nigel Daley |
+| [HADOOP-889](https://issues.apache.org/jira/browse/HADOOP-889) | DFS unit 
tests have duplicate code |  Minor | test | Doug Cutting | Milind Bhandarkar |
+| [HADOOP-943](https://issues.apache.org/jira/browse/HADOOP-943) | fsck to 
show the filename of the corrupted file |  Trivial | . | Koji Noguchi | dhruba 
borthakur |
+| 

[08/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
index 1e7747e..1042346 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
@@ -24,15 +24,9 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9944](https://issues.apache.org/jira/browse/HADOOP-9944) | 
RpcRequestHeaderProto defines callId as uint32 while 
ipc.Client.CONNECTION\_CONTEXT\_CALL\_ID is signed (-3) |  Blocker | . | Arun C 
Murthy | Arun C Murthy |
-| [YARN-1170](https://issues.apache.org/jira/browse/YARN-1170) | yarn proto 
definitions should specify package as 'hadoop.yarn' |  Blocker | . | Arun C 
Murthy | Binglin Chang |
 | [YARN-707](https://issues.apache.org/jira/browse/YARN-707) | Add user info 
in the YARN ClientToken |  Blocker | . | Bikas Saha | Jason Lowe |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [YARN-1170](https://issues.apache.org/jira/browse/YARN-1170) | yarn proto 
definitions should specify package as 'hadoop.yarn' |  Blocker | . | Arun C 
Murthy | Binglin Chang |
+| [HADOOP-9944](https://issues.apache.org/jira/browse/HADOOP-9944) | 
RpcRequestHeaderProto defines callId as uint32 while 
ipc.Client.CONNECTION\_CONTEXT\_CALL\_ID is signed (-3) |  Blocker | . | Arun C 
Murthy | Arun C Murthy |
 
 
 ### NEW FEATURES:
@@ -40,199 +34,193 @@
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
 | [HADOOP-9789](https://issues.apache.org/jira/browse/HADOOP-9789) | Support 
server advertised kerberos principals |  Critical | ipc, security | Daryn Sharp 
| Daryn Sharp |
-| [HDFS-5118](https://issues.apache.org/jira/browse/HDFS-5118) | Provide 
testing support for DFSClient to drop RPC responses |  Major | . | Jing Zhao | 
Jing Zhao |
 | [HDFS-5076](https://issues.apache.org/jira/browse/HDFS-5076) | Add MXBean 
methods to query NN's transaction information and JournalNode's journal status 
|  Minor | . | Jing Zhao | Jing Zhao |
+| [HDFS-5118](https://issues.apache.org/jira/browse/HDFS-5118) | Provide 
testing support for DFSClient to drop RPC responses |  Major | . | Jing Zhao | 
Jing Zhao |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9962](https://issues.apache.org/jira/browse/HADOOP-9962) | in order 
to avoid dependency divergence within Hadoop itself lets enable 
DependencyConvergence |  Major | build | Roman Shaposhnik | Roman Shaposhnik |
-| [HADOOP-9945](https://issues.apache.org/jira/browse/HADOOP-9945) | 
HAServiceState should have a state for stopped services |  Minor | ha | Karthik 
Kambatla | Karthik Kambatla |
-| [HADOOP-9918](https://issues.apache.org/jira/browse/HADOOP-9918) | Add 
addIfService() to CompositeService |  Minor | . | Karthik Kambatla | Karthik 
Kambatla |
-| [HADOOP-9886](https://issues.apache.org/jira/browse/HADOOP-9886) | Turn 
warning message in RetryInvocationHandler to debug |  Minor | . | Arpit Gupta | 
Arpit Gupta |
-| [HADOOP-9879](https://issues.apache.org/jira/browse/HADOOP-9879) | Move the 
version info of zookeeper dependencies to hadoop-project/pom |  Minor | build | 
Karthik Kambatla | Karthik Kambatla |
+| [HADOOP-8814](https://issues.apache.org/jira/browse/HADOOP-8814) | 
Inefficient comparison with the empty string. Use isEmpty() instead |  Minor | 
conf, fs, fs/s3, ha, io, metrics, performance, record, security, util | Brandon 
Li | Brandon Li |
+| [MAPREDUCE-1981](https://issues.apache.org/jira/browse/MAPREDUCE-1981) | 
Improve getSplits performance by using listLocatedStatus |  Major | job 
submission | Hairong Kuang | Hairong Kuang |
+| [HADOOP-9803](https://issues.apache.org/jira/browse/HADOOP-9803) | Add 
generic type parameter to RetryInvocationHandler |  Minor | ipc | Tsz Wo 
Nicholas Sze | Tsz Wo Nicholas Sze |
+| [YARN-758](https://issues.apache.org/jira/browse/YARN-758) | Augment MockNM 
to use multiple cores |  Minor | . | Bikas Saha | Karthik Kambatla |
+| [MAPREDUCE-5367](https://issues.apache.org/jira/browse/MAPREDUCE-5367) | 
Local jobs all use same local working directory |  Major | . | Sandy Ryza | 
Sandy Ryza |
+| [HDFS-5061](https://issues.apache.org/jira/browse/HDFS-5061) | Make 
FSNameSystem#auditLoggers an unmodifiable list |  Major | namenode | Arpit 
Agarwal | Arpit Agarwal |
+| 

[38/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/CHANGES.0.20.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/CHANGES.0.20.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/CHANGES.0.20.0.md
index 4c1dd51..9e1b766 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/CHANGES.0.20.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/CHANGES.0.20.0.md
@@ -24,120 +24,114 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-5531](https://issues.apache.org/jira/browse/HADOOP-5531) | Remove 
Chukwa on branch-0.20 |  Blocker | . | Nigel Daley | Nigel Daley |
-| [HADOOP-4970](https://issues.apache.org/jira/browse/HADOOP-4970) | Use the 
full path when move files to .Trash/Current |  Major | . | Prasad Chakka | 
Prasad Chakka |
-| [HADOOP-4826](https://issues.apache.org/jira/browse/HADOOP-4826) | Admin 
command saveNamespace. |  Major | . | Konstantin Shvachko | Konstantin Shvachko 
|
-| [HADOOP-4789](https://issues.apache.org/jira/browse/HADOOP-4789) | Change 
fair scheduler to share between pools by default, not between invidual jobs |  
Minor | . | Matei Zaharia | Matei Zaharia |
-| [HADOOP-4783](https://issues.apache.org/jira/browse/HADOOP-4783) | History 
files are given world readable permissions. |  Blocker | . | Hemanth Yamijala | 
Amareshwari Sriramadasu |
-| [HADOOP-4631](https://issues.apache.org/jira/browse/HADOOP-4631) | Split the 
default configurations into 3 parts |  Major | conf | Owen O'Malley | Sharad 
Agarwal |
-| [HADOOP-4618](https://issues.apache.org/jira/browse/HADOOP-4618) | Move http 
server from FSNamesystem into NameNode. |  Major | . | Konstantin Shvachko | 
Konstantin Shvachko |
-| [HADOOP-4576](https://issues.apache.org/jira/browse/HADOOP-4576) | Modify 
pending tasks count in the UI to pending jobs count in the UI |  Major | . | 
Hemanth Yamijala | Sreekanth Ramakrishnan |
+| [HADOOP-4210](https://issues.apache.org/jira/browse/HADOOP-4210) | Findbugs 
warnings are printed related to equals implementation of several classes |  
Major | . | Suresh Srinivas | Suresh Srinivas |
+| [HADOOP-4253](https://issues.apache.org/jira/browse/HADOOP-4253) | Fix 
warnings generated by FindBugs |  Major | conf, fs, record | Suresh Srinivas | 
Suresh Srinivas |
 | [HADOOP-4572](https://issues.apache.org/jira/browse/HADOOP-4572) | INode and 
its sub-classes should be package private |  Major | . | Tsz Wo Nicholas Sze | 
Tsz Wo Nicholas Sze |
+| [HADOOP-4618](https://issues.apache.org/jira/browse/HADOOP-4618) | Move http 
server from FSNamesystem into NameNode. |  Major | . | Konstantin Shvachko | 
Konstantin Shvachko |
 | [HADOOP-4567](https://issues.apache.org/jira/browse/HADOOP-4567) | 
GetFileBlockLocations should return the NetworkTopology information of the 
machines that hosts those blocks |  Major | . | dhruba borthakur | dhruba 
borthakur |
-| [HADOOP-4445](https://issues.apache.org/jira/browse/HADOOP-4445) | Wrong 
number of running map/reduce tasks are displayed in queue information. |  Major 
| . | Karam Singh | Sreekanth Ramakrishnan |
 | [HADOOP-4435](https://issues.apache.org/jira/browse/HADOOP-4435) | The 
JobTracker should display the amount of heap memory used |  Minor | . | dhruba 
borthakur | dhruba borthakur |
-| [HADOOP-4422](https://issues.apache.org/jira/browse/HADOOP-4422) | S3 file 
systems should not create bucket |  Major | fs/s3 | David Phillips | David 
Phillips |
-| [HADOOP-4253](https://issues.apache.org/jira/browse/HADOOP-4253) | Fix 
warnings generated by FindBugs |  Major | conf, fs, record | Suresh Srinivas | 
Suresh Srinivas |
-| [HADOOP-4210](https://issues.apache.org/jira/browse/HADOOP-4210) | Findbugs 
warnings are printed related to equals implementation of several classes |  
Major | . | Suresh Srinivas | Suresh Srinivas |
+| [HADOOP-3923](https://issues.apache.org/jira/browse/HADOOP-3923) | Deprecate 
org.apache.hadoop.mapred.StatusHttpServer |  Minor | . | Tsz Wo Nicholas Sze | 
Tsz Wo Nicholas Sze |
 | [HADOOP-4188](https://issues.apache.org/jira/browse/HADOOP-4188) | Remove 
Task's dependency on concrete file systems |  Major | . | Tom White | Sharad 
Agarwal |
-| [HADOOP-4103](https://issues.apache.org/jira/browse/HADOOP-4103) | Alert for 
missing blocks |  Major | . | Christian Kunz | Raghu Angadi |
-| [HADOOP-4035](https://issues.apache.org/jira/browse/HADOOP-4035) | Modify 
the capacity scheduler (HADOOP-3445) to schedule tasks based on memory 
requirements and task trackers free memory |  Blocker | . | Hemanth Yamijala | 
Vinod Kumar Vavilapalli |
-| [HADOOP-4029](https://issues.apache.org/jira/browse/HADOOP-4029) | NameNode 
should report status and performance for each replica of image and log |  Major 
| 

[41/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/CHANGES.0.19.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/CHANGES.0.19.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/CHANGES.0.19.0.md
index 29e9ab9..a27ce33 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/CHANGES.0.19.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/CHANGES.0.19.0.md
@@ -24,395 +24,389 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-4430](https://issues.apache.org/jira/browse/HADOOP-4430) | Namenode 
Web UI capacity report is inconsistent with Balancer |  Blocker | . | Suresh 
Srinivas | Suresh Srinivas |
-| [HADOOP-4293](https://issues.apache.org/jira/browse/HADOOP-4293) | Remove 
WritableJobConf |  Major | . | Owen O'Malley | Owen O'Malley |
-| [HADOOP-4281](https://issues.apache.org/jira/browse/HADOOP-4281) | Capacity 
reported in some of the commands is not consistent with the Web UI reported 
data |  Blocker | . | Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-4227](https://issues.apache.org/jira/browse/HADOOP-4227) | Remove 
the deprecated, unused class ShellCommand. |  Minor | fs | Tsz Wo Nicholas Sze 
| Tsz Wo Nicholas Sze |
-| [HADOOP-4190](https://issues.apache.org/jira/browse/HADOOP-4190) | Changes 
to JobHistory makes it backward incompatible |  Blocker | . | Amar Kamat | Amar 
Kamat |
-| [HADOOP-4116](https://issues.apache.org/jira/browse/HADOOP-4116) | Balancer 
should provide better resource management |  Blocker | . | Raghu Angadi | 
Hairong Kuang |
-| [HADOOP-3981](https://issues.apache.org/jira/browse/HADOOP-3981) | Need a 
distributed file checksum algorithm for HDFS |  Major | . | Tsz Wo Nicholas Sze 
| Tsz Wo Nicholas Sze |
-| [HADOOP-3963](https://issues.apache.org/jira/browse/HADOOP-3963) | libhdfs 
should never exit on its own but rather return errors to the calling 
application |  Minor | . | Pete Wyckoff | Pete Wyckoff |
-| [HADOOP-3938](https://issues.apache.org/jira/browse/HADOOP-3938) | Quotas 
for disk space management |  Major | . | Robert Chansler | Raghu Angadi |
-| [HADOOP-3911](https://issues.apache.org/jira/browse/HADOOP-3911) | ' -blocks 
' option not being recognized |  Minor | fs, util | Koji Noguchi | Lohit 
Vijayarenu |
-| [HADOOP-3889](https://issues.apache.org/jira/browse/HADOOP-3889) | distcp: 
Better Error Message should be thrown when accessing source files/directory 
with no read permission |  Minor | . | Peeyush Bishnoi | Tsz Wo Nicholas Sze |
-| [HADOOP-3837](https://issues.apache.org/jira/browse/HADOOP-3837) | hadop 
streaming does not use progress reporting to detect hung tasks |  Major | . | 
dhruba borthakur | dhruba borthakur |
-| [HADOOP-3796](https://issues.apache.org/jira/browse/HADOOP-3796) | fuse-dfs 
should take rw,ro,trashon,trashoff,protected=blah mount arguments rather than 
them being compiled in |  Major | . | Pete Wyckoff | Pete Wyckoff |
-| [HADOOP-3792](https://issues.apache.org/jira/browse/HADOOP-3792) | exit code 
from "hadoop dfs -test ..." is wrong for Unix shell |  Minor | fs | Ben Slusky 
| Ben Slusky |
-| [HADOOP-3722](https://issues.apache.org/jira/browse/HADOOP-3722) | Provide a 
unified way to pass jobconf options from bin/hadoop |  Minor | conf | Matei 
Zaharia | Enis Soztutar |
+| [HADOOP-3595](https://issues.apache.org/jira/browse/HADOOP-3595) | Remove 
deprecated mapred.combine.once functionality |  Major | . | Chris Douglas | 
Chris Douglas |
 | [HADOOP-3667](https://issues.apache.org/jira/browse/HADOOP-3667) | Remove 
deprecated methods in JobConf |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
 | [HADOOP-3652](https://issues.apache.org/jira/browse/HADOOP-3652) | Remove 
deprecated class OutputFormatBase |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
-| [HADOOP-3595](https://issues.apache.org/jira/browse/HADOOP-3595) | Remove 
deprecated mapred.combine.once functionality |  Major | . | Chris Douglas | 
Chris Douglas |
-| [HADOOP-3245](https://issues.apache.org/jira/browse/HADOOP-3245) | Provide 
ability to persist running jobs (extend HADOOP-1876) |  Major | . | Devaraj Das 
| Amar Kamat |
-| [HADOOP-3150](https://issues.apache.org/jira/browse/HADOOP-3150) | Move task 
file promotion into the task |  Major | . | Owen O'Malley | Amareshwari 
Sriramadasu |
-| [HADOOP-3062](https://issues.apache.org/jira/browse/HADOOP-3062) | Need to 
capture the metrics for the network ios generate by dfs reads/writes and 
map/reduce shuffling  and break them down by racks |  Major | metrics | Runping 
Qi | Chris Douglas |
-| [HADOOP-2816](https://issues.apache.org/jira/browse/HADOOP-2816) | Cluster 
summary at name node web has confusing report for space utilization | 

[26/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/RELEASENOTES.0.23.1.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/RELEASENOTES.0.23.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/RELEASENOTES.0.23.1.md
index 569e1ed..19364d3 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/RELEASENOTES.0.23.1.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.1/RELEASENOTES.0.23.1.md
@@ -23,285 +23,280 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-8013](https://issues.apache.org/jira/browse/HADOOP-8013) | *Major* | 
**ViewFileSystem does not honor setVerifyChecksum**
+* [MAPREDUCE-2784](https://issues.apache.org/jira/browse/MAPREDUCE-2784) | 
*Major* | **[Gridmix] TestGridmixSummary fails with NPE when run in DEBUG 
mode.**
 
-**WARNING: No release note provided for this incompatible change.**
+Fixed bugs in ExecutionSummarizer and ResourceUsageMatcher.
 
 
 ---
 
-* [HADOOP-8009](https://issues.apache.org/jira/browse/HADOOP-8009) | 
*Critical* | **Create hadoop-client and hadoop-minicluster artifacts for 
downstream projects**
+* [MAPREDUCE-2950](https://issues.apache.org/jira/browse/MAPREDUCE-2950) | 
*Major* | **[Gridmix] TestUserResolve fails in trunk**
 
-Generate integration artifacts "org.apache.hadoop:hadoop-client" and 
"org.apache.hadoop:hadoop-minicluster" containing all the jars needed to use 
Hadoop client APIs, and to run Hadoop MiniClusters, respectively.  Push these 
artifacts to the maven repository when mvn-deploy, along with existing 
artifacts.
+Fixes bug in TestUserResolve.
 
 
 ---
 
-* [HADOOP-7986](https://issues.apache.org/jira/browse/HADOOP-7986) | *Major* | 
**Add config for History Server protocol in hadoop-policy for service level 
authorization.**
+* [HDFS-2130](https://issues.apache.org/jira/browse/HDFS-2130) | *Major* | 
**Switch default checksum to CRC32C**
 
-Adding config for MapReduce History Server protocol in hadoop-policy.xml for 
service level authorization.
+The default checksum algorithm used on HDFS is now CRC32C. Data from previous 
versions of Hadoop can still be read backwards-compatibly.
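
A minimal sketch of how a client could explicitly pin the checksum type if the old
behaviour is required; the dfs.checksum.type key name is an assumption drawn from later
Hadoop configuration files, not from this release note.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ChecksumTypeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Assumed key name: clients that must keep the pre-CRC32C behaviour can
    // override the checksum type; new files keep CRC32C otherwise.
    conf.set("dfs.checksum.type", "CRC32");
    try (FileSystem fs = FileSystem.get(conf)) {
      System.out.println("Requested checksum type: " + conf.get("dfs.checksum.type"));
    }
  }
}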
 
 
 ---
 
-* [HADOOP-7963](https://issues.apache.org/jira/browse/HADOOP-7963) | *Blocker* 
| **test failures: TestViewFileSystemWithAuthorityLocalFileSystem and 
TestViewFileSystemLocalFileSystem**
+* [HDFS-2129](https://issues.apache.org/jira/browse/HDFS-2129) | *Major* | 
**Simplify BlockReader to not inherit from FSInputChecker**
 
-Fix ViewFS to catch a null canonical service-name and pass tests 
TestViewFileSystem\*
+BlockReader has been reimplemented to use direct byte buffers. If you use a 
custom socket factory, it must generate sockets that have associated Channels.
 
 
 ---
 
-* [HADOOP-7851](https://issues.apache.org/jira/browse/HADOOP-7851) | *Major* | 
**Configuration.getClasses() never returns the default value.**
+* [MAPREDUCE-3297](https://issues.apache.org/jira/browse/MAPREDUCE-3297) | 
*Major* | **Move Log Related components from yarn-server-nodemanager to 
yarn-common**
 
-Fixed Configuration.getClasses() API to return the default value if the key is 
not set.
+Moved log related components into yarn-common so that HistoryServer and 
clients can use them without depending on the yarn-server-nodemanager module.
 
 
 ---
 
-* [HADOOP-7802](https://issues.apache.org/jira/browse/HADOOP-7802) | *Major* | 
**Hadoop scripts unconditionally source "$bin"/../libexec/hadoop-config.sh.**
+* [MAPREDUCE-3221](https://issues.apache.org/jira/browse/MAPREDUCE-3221) | 
*Minor* | **ant test TestSubmitJob failing on trunk**
 
-Here is a patch to enable this behavior
+Fixed a bug in TestSubmitJob.
 
 
 ---
 
-* [HADOOP-7470](https://issues.apache.org/jira/browse/HADOOP-7470) | *Minor* | 
**move up to Jackson 1.8.8**
+* [MAPREDUCE-3215](https://issues.apache.org/jira/browse/MAPREDUCE-3215) | 
*Minor* | **org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk**
 
-**WARNING: No release note provided for this incompatible change.**
+Reenabled and fixed bugs in the failing test TestNoJobSetupCleanup.
 
 
 ---
 
-* [HADOOP-7348](https://issues.apache.org/jira/browse/HADOOP-7348) | *Major* | 
**Modify the option of FsShell getmerge from [addnl] to [-nl] for consistency**
+* [MAPREDUCE-3219](https://issues.apache.org/jira/browse/MAPREDUCE-3219) | 
*Minor* | **ant test TestDelegationToken failing on trunk**
 
-The 'fs -getmerge' tool now uses a -nl flag to determine if adding a newline 
at end of each file is required, in favor of the 'addnl' boolean flag that was 
used earlier.
+Reenabled and fixed bugs in the failing test TestDelegationToken.
 
 
 ---
 
-* [HDFS-2316](https://issues.apache.org/jira/browse/HDFS-2316) | *Major* | 
**[umbrella] WebHDFS: a 

[18/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
index ceb86d0..5420e8e 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
@@ -25,244 +25,238 @@
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
 | [HADOOP-8164](https://issues.apache.org/jira/browse/HADOOP-8164) | Handle 
paths using back slash as path separator for windows only |  Major | fs | 
Suresh Srinivas | Daryn Sharp |
-| [HDFS-4350](https://issues.apache.org/jira/browse/HDFS-4350) | Make enabling 
of stale marking on read and write paths independent |  Major | . | Andrew Wang 
| Andrew Wang |
+| [MAPREDUCE-4629](https://issues.apache.org/jira/browse/MAPREDUCE-4629) | 
Remove JobHistory.DEBUG\_MODE |  Major | . | Karthik Kambatla | Karthik 
Kambatla |
 | [HDFS-4122](https://issues.apache.org/jira/browse/HDFS-4122) | Cleanup HDFS 
logs and reduce the size of logged messages |  Major | datanode, hdfs-client, 
namenode | Suresh Srinivas | Suresh Srinivas |
+| [HDFS-4350](https://issues.apache.org/jira/browse/HDFS-4350) | Make enabling 
of stale marking on read and write paths independent |  Major | . | Andrew Wang 
| Andrew Wang |
 | [MAPREDUCE-4737](https://issues.apache.org/jira/browse/MAPREDUCE-4737) |  
Hadoop does not close output file / does not call Mapper.cleanup if exception 
in map |  Major | . | Daniel Dai | Arun C Murthy |
-| [MAPREDUCE-4629](https://issues.apache.org/jira/browse/MAPREDUCE-4629) | 
Remove JobHistory.DEBUG\_MODE |  Major | . | Karthik Kambatla | Karthik 
Kambatla |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9090](https://issues.apache.org/jira/browse/HADOOP-9090) | Support 
on-demand publish of metrics |  Minor | metrics | Mostafa Elhemali | Mostafa 
Elhemali |
+| [MAPREDUCE-461](https://issues.apache.org/jira/browse/MAPREDUCE-461) | 
Enable ServicePlugins for the JobTracker |  Minor | . | Fredrik Hedberg | 
Fredrik Hedberg |
+| [HDFS-3515](https://issues.apache.org/jira/browse/HDFS-3515) | Port 
HDFS-1457 to branch-1 |  Major | namenode | Eli Collins | Eli Collins |
+| [HADOOP-8023](https://issues.apache.org/jira/browse/HADOOP-8023) | Add 
unset() method to Configuration |  Critical | conf | Alejandro Abdelnur | 
Alejandro Abdelnur |
+| [MAPREDUCE-4355](https://issues.apache.org/jira/browse/MAPREDUCE-4355) | Add 
RunningJob.getJobStatus() |  Major | mrv1, mrv2 | Karthik Kambatla | Karthik 
Kambatla |
+| [MAPREDUCE-987](https://issues.apache.org/jira/browse/MAPREDUCE-987) | 
Exposing MiniDFS and MiniMR clusters as a single process command-line |  Minor 
| build, test | Philip Zeyliger | Ahmed Radwan |
+| [MAPREDUCE-3678](https://issues.apache.org/jira/browse/MAPREDUCE-3678) | The 
Map tasks logs should have the value of input split it processed |  Major | 
mrv1, mrv2 | Bejoy KS | Harsh J |
 | [HADOOP-8988](https://issues.apache.org/jira/browse/HADOOP-8988) | Backport 
HADOOP-8343 to branch-1 |  Major | conf | Jing Zhao | Jing Zhao |
 | [HADOOP-8820](https://issues.apache.org/jira/browse/HADOOP-8820) | Backport 
HADOOP-8469 and HADOOP-8470: add "NodeGroup" layer in new NetworkTopology (also 
known as NetworkTopologyWithNodeGroup) |  Major | net | Junping Du | Junping Du 
|
-| [HADOOP-8023](https://issues.apache.org/jira/browse/HADOOP-8023) | Add 
unset() method to Configuration |  Critical | conf | Alejandro Abdelnur | 
Alejandro Abdelnur |
-| [HDFS-4776](https://issues.apache.org/jira/browse/HDFS-4776) | Backport 
SecondaryNameNode web ui to branch-1 |  Minor | namenode | Tsz Wo Nicholas Sze 
| Tsz Wo Nicholas Sze |
-| [HDFS-4774](https://issues.apache.org/jira/browse/HDFS-4774) | Backport 
HDFS-4525 'Provide an API for knowing whether file is closed or not' to 
branch-1 |  Major | hdfs-client, namenode | Ted Yu | Ted Yu |
-| [HDFS-4597](https://issues.apache.org/jira/browse/HDFS-4597) | Backport 
WebHDFS concat to branch-1 |  Major | webhdfs | Tsz Wo Nicholas Sze | Tsz Wo 
Nicholas Sze |
+| [HDFS-3941](https://issues.apache.org/jira/browse/HDFS-3941) | Backport 
HDFS-3498 and HDFS3601: update replica placement policy for new added 
"NodeGroup" layer topology |  Major | namenode | Junping Du | Junping Du |
 | [HDFS-4219](https://issues.apache.org/jira/browse/HDFS-4219) | Port slive to 
branch-1 |  Major | . | Arpit Gupta | Arpit 

[33/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
index 9f341c1..8a8bef3 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
@@ -23,298 +23,298 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-6813](https://issues.apache.org/jira/browse/HADOOP-6813) | *Blocker* 
| **Add a new newInstance method in FileSystem that takes a "user" as argument**
+* [HADOOP-4895](https://issues.apache.org/jira/browse/HADOOP-4895) | *Major* | 
**Remove deprecated methods in DFSClient**
 
-I've just committed this to 0.21.
+Removed deprecated methods DFSClient.getHints() and DFSClient.isDirectory().
 
 
 ---
 
-* [HADOOP-6748](https://issues.apache.org/jira/browse/HADOOP-6748) | *Major* | 
**Remove hadoop.cluster.administrators**
+* [HADOOP-4941](https://issues.apache.org/jira/browse/HADOOP-4941) | *Major* | 
**Remove getBlockSize(Path f), getLength(Path f) and getReplication(Path src)**
 
-Removed configuration property "hadoop.cluster.administrators". Added 
constructor public HttpServer(String name, String bindAddress, int port, 
boolean findPort, Configuration conf, AccessControlList adminsAcl) in 
HttpServer, which takes cluster administrators acl as a parameter.
+Removed deprecated FileSystem methods getBlockSize(Path f), getLength(Path f), 
and getReplication(Path src).
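
A minimal replacement sketch, assuming an already-configured FileSystem: the per-file
values the removed methods used to return are all available from a single FileStatus
lookup. The path below is a hypothetical example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileStatusLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/example.txt");   // hypothetical path

    // One getFileStatus call returns the attributes the removed methods exposed.
    FileStatus status = fs.getFileStatus(file);
    System.out.println("block size:  " + status.getBlockSize());
    System.out.println("length:      " + status.getLen());
    System.out.println("replication: " + status.getReplication());
  }
}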
 
 
 ---
 
-* [HADOOP-6701](https://issues.apache.org/jira/browse/HADOOP-6701) | *Minor* | 
** Incorrect exit codes for "dfs -chown", "dfs -chgrp"**
+* [HADOOP-4268](https://issues.apache.org/jira/browse/HADOOP-4268) | *Major* | 
**Permission checking in fsck**
 
-Commands chmod, chown and chgrp now returns non zero exit code and an error 
message on failure instead of returning zero.
+Fsck now checks permissions as directories are traversed. Any user can now use 
fsck, but information is provided only for directories the user has permission 
to read.
 
 
 ---
 
-* [HADOOP-6692](https://issues.apache.org/jira/browse/HADOOP-6692) | *Major* | 
**Add FileContext#listStatus that returns an iterator**
+* [HADOOP-4648](https://issues.apache.org/jira/browse/HADOOP-4648) | *Major* | 
**Remove ChecksumDistriubtedFileSystem and InMemoryFileSystem**
 
-This issue adds Iterator\<FileStatus\> listStatus(Path) to FileContext, moves 
FileStatus[] listStatus(Path) to FileContext#Util, and adds 
Iterator\<FileStatus\> listStatusItor(Path) to AbstractFileSystem which 
provides a default implementation by using FileStatus[] listStatus(Path).
+Removed obsolete, deprecated subclasses of ChecksumFileSystem 
(InMemoryFileSystem, ChecksumDistributedFileSystem).
 
 
 ---
 
-* [HADOOP-6686](https://issues.apache.org/jira/browse/HADOOP-6686) | *Major* | 
**Remove redundant exception class name in unwrapped exceptions thrown at the 
RPC client**
+* [HADOOP-4940](https://issues.apache.org/jira/browse/HADOOP-4940) | *Major* | 
**Remove delete(Path f)**
 
-The exceptions thrown by the RPC client no longer carries a redundant 
exception class name in exception message.
+Removed deprecated method FileSystem.delete(Path).
 
 
 ---
 
-* [HADOOP-6577](https://issues.apache.org/jira/browse/HADOOP-6577) | *Major* | 
**IPC server response buffer reset threshold should be configurable**
+* [HADOOP-3953](https://issues.apache.org/jira/browse/HADOOP-3953) | *Major* | 
**Sticky bit for directories**
 
-Add hidden configuration option "ipc.server.max.response.size" to change the 
default 1 MB, the maximum size when large IPC handler response buffer is reset.
+UNIX-style sticky bit implemented for HDFS directories. When  the  sticky  bit 
 is set on a directory, files in that directory may be deleted or renamed only 
by a superuser or the file's owner.
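
A minimal sketch of setting the sticky bit programmatically, assuming a running HDFS
client; the directory below is a hypothetical example, and the fs shell chmod command
can achieve the same effect.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class StickyBitExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path sharedDir = new Path("/tmp/shared");   // hypothetical directory

    // rwxrwxrwx plus the sticky bit: everyone may create files, but only the
    // file owner (or a superuser) may delete or rename them.
    FsPermission rwxAllSticky =
        new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL, true);
    fs.mkdirs(sharedDir);
    fs.setPermission(sharedDir, rwxAllSticky);
  }
}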
 
 
 ---
 
-* [HADOOP-6569](https://issues.apache.org/jira/browse/HADOOP-6569) | *Major* | 
**FsShell#cat should avoid calling unecessary getFileStatus before opening a 
file to read**
+* [HADOOP-5022](https://issues.apache.org/jira/browse/HADOOP-5022) | *Blocker* 
| **[HOD] logcondense should delete all hod logs for a user, including 
jobtracker logs**
 
-**WARNING: No release note provided for this incompatible change.**
+New logcondense option retain-master-logs indicates whether the script should 
delete master logs as part of its cleanup process. By default this option is 
false; master logs are deleted. Earlier versions of logcondense did not delete 
master logs.
 
 
 ---
 
-* [HADOOP-6568](https://issues.apache.org/jira/browse/HADOOP-6568) | *Major* | 
**Authorization for 

[72/73] [abbrv] hadoop git commit: HDFS-10646. Federation admin tool. Contributed by Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/485c7b93/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
new file mode 100644
index 000..170247f
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
@@ -0,0 +1,261 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.synchronizeRecords;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.RouterDFSCluster.RouterContext;
+import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.hdfs.server.federation.store.impl.MountTableStoreImpl;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.util.Time;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * The administrator interface of the {@link Router} implemented by
+ * {@link RouterAdminServer}.
+ */
+public class TestRouterAdmin {
+
+  private static StateStoreDFSCluster cluster;
+  private static RouterContext routerContext;
+  public static final String RPC_BEAN =
+  "Hadoop:service=Router,name=FederationRPC";
+  private static List<MountTable> mockMountTable;
+  private static StateStoreService stateStore;
+
+  @BeforeClass
+  public static void globalSetUp() throws Exception {
+cluster = new StateStoreDFSCluster(false, 1);
+// Build and start a router with State Store + admin + RPC
+Configuration conf = new RouterConfigBuilder()
+.stateStore()
+.admin()
+.rpc()
+.build();
+cluster.addRouterOverrides(conf);
+cluster.startRouters();
+routerContext = cluster.getRandomRouter();
+mockMountTable = cluster.generateMockMountTable();
+Router router = routerContext.getRouter();
+stateStore = router.getStateStore();
+  }
+
+  @AfterClass
+  public static void tearDown() {
+cluster.stopRouter(routerContext);
+  }
+
+  @Before
+  public void testSetup() throws Exception {
+assertTrue(
+synchronizeRecords(stateStore, mockMountTable, MountTable.class));
+  }
+
+  @Test
+  public void testAddMountTable() throws IOException {
+MountTable newEntry = MountTable.newInstance(
+"/testpath", Collections.singletonMap("ns0", "/testdir"),
+Time.now(), Time.now());
+
+RouterClient client = routerContext.getAdminClient();
+MountTableManager mountTable = client.getMountTableManager();
+
+// Existing mount table size
+List<MountTable> records = 

[25/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.2/CHANGES.0.23.2.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.2/CHANGES.0.23.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.2/CHANGES.0.23.2.md
index 37b85a5..5f1ac09 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.2/CHANGES.0.23.2.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.2/CHANGES.0.23.2.md
@@ -18,21 +18,15 @@
 -->
 # Apache Hadoop Changelog
 
-## Release 0.23.2 - Unreleased (as of 2016-03-04)
+## Release 0.23.2 - Unreleased (as of 2017-08-28)
 
 ### INCOMPATIBLE CHANGES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8164](https://issues.apache.org/jira/browse/HADOOP-8164) | Handle 
paths using back slash as path separator for windows only |  Major | fs | 
Suresh Srinivas | Daryn Sharp |
-| [HADOOP-8131](https://issues.apache.org/jira/browse/HADOOP-8131) | FsShell 
put doesn't correctly handle a non-existent dir |  Critical | . | Daryn Sharp | 
Daryn Sharp |
 | [HDFS-2887](https://issues.apache.org/jira/browse/HDFS-2887) | Define a 
FSVolume interface |  Major | datanode | Tsz Wo Nicholas Sze | Tsz Wo Nicholas 
Sze |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-8131](https://issues.apache.org/jira/browse/HADOOP-8131) | FsShell 
put doesn't correctly handle a non-existent dir |  Critical | . | Daryn Sharp | 
Daryn Sharp |
+| [HADOOP-8164](https://issues.apache.org/jira/browse/HADOOP-8164) | Handle 
paths using back slash as path separator for windows only |  Major | fs | 
Suresh Srinivas | Daryn Sharp |
 
 
 ### NEW FEATURES:
@@ -46,131 +40,131 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8071](https://issues.apache.org/jira/browse/HADOOP-8071) | Avoid an 
extra packet in client code when nagling is disabled |  Minor | ipc | Todd 
Lipcon | Todd Lipcon |
+| [HDFS-1217](https://issues.apache.org/jira/browse/HDFS-1217) | Some methods 
in the NameNdoe should not be public |  Major | namenode | Tsz Wo Nicholas Sze 
| Laxman |
 | [HADOOP-8048](https://issues.apache.org/jira/browse/HADOOP-8048) | Allow 
merging of Credentials |  Major | util | Daryn Sharp | Daryn Sharp |
-| [HDFS-3066](https://issues.apache.org/jira/browse/HDFS-3066) | cap space 
usage of default log4j rolling policy (hdfs specific changes) |  Major | 
scripts | Patrick Hunt | Patrick Hunt |
-| [HDFS-3024](https://issues.apache.org/jira/browse/HDFS-3024) | Improve 
performance of stringification in addStoredBlock |  Minor | namenode | Todd 
Lipcon | Todd Lipcon |
-| [HDFS-2985](https://issues.apache.org/jira/browse/HDFS-2985) | Improve 
logging when replicas are marked as corrupt |  Minor | namenode | Todd Lipcon | 
Todd Lipcon |
-| [HDFS-2981](https://issues.apache.org/jira/browse/HDFS-2981) | The default 
value of dfs.client.block.write.replace-datanode-on-failure.enable should be 
true |  Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HDFS-2907](https://issues.apache.org/jira/browse/HDFS-2907) | Make 
FSDataset in Datanode Pluggable |  Minor | . | Sanjay Radia | Tsz Wo Nicholas 
Sze |
-| [HDFS-2655](https://issues.apache.org/jira/browse/HDFS-2655) | 
BlockReaderLocal#skip performs unnecessary IO |  Major | datanode | Eli Collins 
| Brandon Li |
 | [HDFS-2506](https://issues.apache.org/jira/browse/HDFS-2506) | Umbrella jira 
for tracking separation of wire protocol datatypes from the implementation 
types |  Major | datanode, namenode | Suresh Srinivas | Suresh Srinivas |
-| [HDFS-1217](https://issues.apache.org/jira/browse/HDFS-1217) | Some methods 
in the NameNdoe should not be public |  Major | namenode | Tsz Wo Nicholas Sze 
| Laxman |
-| [MAPREDUCE-3989](https://issues.apache.org/jira/browse/MAPREDUCE-3989) | cap 
space usage of default log4j rolling policy (mr specific changes) |  Major | . 
| Patrick Hunt | Patrick Hunt |
-| [MAPREDUCE-3922](https://issues.apache.org/jira/browse/MAPREDUCE-3922) | Fix 
the potential problem compiling 32 bit binaries on a x86\_64 host. |  Minor | 
build, mrv2 | Eugene Koontz | Hitesh Shah |
-| [MAPREDUCE-3901](https://issues.apache.org/jira/browse/MAPREDUCE-3901) | 
lazy load JobHistory Task and TaskAttempt details |  Major | jobhistoryserver, 
mrv2 | Siddharth Seth | Siddharth Seth |
+| [HADOOP-8071](https://issues.apache.org/jira/browse/HADOOP-8071) | Avoid an 
extra packet in client code when nagling is disabled |  Minor | ipc | Todd 
Lipcon | Todd Lipcon |
 | [MAPREDUCE-3864](https://issues.apache.org/jira/browse/MAPREDUCE-3864) | Fix 
cluster setup docs for correct SNN HTTPS parameters |  

[01/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build [Forced Update!]

2017-08-31 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-10467 fc2c25472 -> da654226d (forced update)


http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/RELEASENOTES.2.5.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/RELEASENOTES.2.5.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/RELEASENOTES.2.5.0.md
index 43fb3fa..4534a7f 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/RELEASENOTES.2.5.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/RELEASENOTES.2.5.0.md
@@ -23,14 +23,23 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-10568](https://issues.apache.org/jira/browse/HADOOP-10568) | *Major* 
| **Add s3 server-side encryption**
+* [HADOOP-10342](https://issues.apache.org/jira/browse/HADOOP-10342) | *Major* 
| **Extend UserGroupInformation to return a UGI given a preauthenticated 
kerberos Subject**
 
-s3 server-side encryption is now supported.
+Add getUGIFromSubject to leverage an external kerberos authentication
 
-To enable this feature, specify the following in your client-side 
configuration:
 
-name: fs.s3n.server-side-encryption-algorithm
-value: AES256
+---
+
+* [HDFS-6164](https://issues.apache.org/jira/browse/HDFS-6164) | *Major* | 
**Remove lsr in OfflineImageViewer**
+
+The offlineimageviewer no longer generates lsr-style outputs. The 
functionality has been superseded by a tool that takes the fsimage and exposes 
WebHDFS-like API for user queries.
+
+
+---
+
+* [HDFS-6168](https://issues.apache.org/jira/browse/HDFS-6168) | *Major* | 
**Remove deprecated methods in DistributedFileSystem**
+
+**WARNING: No release note provided for this change.**
 
 
 ---
@@ -45,9 +54,9 @@ Map\ sasl\_props = 
saslPropsResolver.getDefaultProperties();
 
 ---
 
-* [HADOOP-10342](https://issues.apache.org/jira/browse/HADOOP-10342) | *Major* 
| **Extend UserGroupInformation to return a UGI given a preauthenticated 
kerberos Subject**
+* [HDFS-6153](https://issues.apache.org/jira/browse/HDFS-6153) | *Minor* | 
**Document "fileId" and "childrenNum" fields in the FileStatus Json schema**
 
-Add getUGIFromSubject to leverage an external kerberos authentication
+**WARNING: No release note provided for this change.**
 
 
 ---
@@ -59,13 +68,6 @@ Remove MRv1 settings from hadoop-metrics2.properties, add 
YARN settings instead.
 
 ---
 
-* [HDFS-6293](https://issues.apache.org/jira/browse/HDFS-6293) | *Blocker* | 
**Issues with OIV processing PB-based fsimages**
-
-Set "dfs.namenode.legacy-oiv-image.dir" to an appropriate directory to make 
standby name node or secondary name node save its file system state in the old 
fsimage format during checkpointing. This image can be used for offline 
analysis using the OfflineImageViewer.  Use the "hdfs oiv\_legacy" command to 
process the old fsimage format.
-
-

-
 * [HDFS-6273](https://issues.apache.org/jira/browse/HDFS-6273) | *Major* | 
**Config options to allow wildcard endpoints for namenode HTTP and HTTPS 
servers**
 
 HDFS-6273 introduces two new HDFS configuration keys: 
@@ -83,23 +85,21 @@ These keys complement the existing NameNode options:
 
 ---
 
-* [HDFS-6168](https://issues.apache.org/jira/browse/HDFS-6168) | *Major* | 
**Remove deprecated methods in DistributedFileSystem**
-
-**WARNING: No release note provided for this incompatible change.**
-
+* [HADOOP-10568](https://issues.apache.org/jira/browse/HADOOP-10568) | *Major* 
| **Add s3 server-side encryption**
 

+s3 server-side encryption is now supported.
 
-* [HDFS-6164](https://issues.apache.org/jira/browse/HDFS-6164) | *Major* | 
**Remove lsr in OfflineImageViewer**
+To enable this feature, specify the following in your client-side 
configuration:
 
-The offlineimageviewer no longer generates lsr-style outputs. The 
functionality has been superseded by a tool that takes the fsimage and exposes 
WebHDFS-like API for user queries.
+name: fs.s3n.server-side-encryption-algorithm
+value: AES256
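
A minimal client-side sketch of the same setting applied through a Configuration
object rather than a configuration file; this simply restates the property named
above, and credentials are assumed to be configured elsewhere.

import org.apache.hadoop.conf.Configuration;

public class S3nEncryptionConfig {
  public static Configuration withServerSideEncryption() {
    Configuration conf = new Configuration();
    // Ask S3 to encrypt objects at rest as they are written through the s3n connector.
    conf.set("fs.s3n.server-side-encryption-algorithm", "AES256");
    return conf;
  }
}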
 
 
 ---
 
-* [HDFS-6153](https://issues.apache.org/jira/browse/HDFS-6153) | *Minor* | 
**Document "fileId" and "childrenNum" fields in the FileStatus Json schema**
+* [HDFS-6293](https://issues.apache.org/jira/browse/HDFS-6293) | *Blocker* | 
**Issues with OIV processing PB-based fsimages**
 
-**WARNING: No release note provided for this incompatible change.**
+Set "dfs.namenode.legacy-oiv-image.dir" to an appropriate directory to make 
standby name node or secondary name node save its file system state in the old 
fsimage format during checkpointing. This image can be used for offline 
analysis using the OfflineImageViewer.  Use the "hdfs oiv\_legacy" command to 
process the old fsimage format.
 
 
 ---
@@ -114,16 +114,16 @@ dfs.datanode.slow.io.warning.threshold.ms 

[45/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
index c8c2794..4e2fee2 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
@@ -23,269 +23,265 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-3382](https://issues.apache.org/jira/browse/HADOOP-3382) | *Blocker* 
| **Memory leak when files are not cleanly closed**
+* [HADOOP-1593](https://issues.apache.org/jira/browse/HADOOP-1593) | *Major* | 
**FsShell should work with paths in non-default FileSystem**
 
-Fixed a memory leak associated with 'abandoned' files (i.e. not cleanly 
closed). This held up significant amounts of memory depending on activity and 
how long NameNode has been running.
+This fix allows a non-default path to be specified in fsshell commands.
+
+So, you can now specify hadoop dfs -ls hdfs://remotehost1:port/path
+  and  hadoop dfs -ls hdfs://remotehost2:port/path without changing the config.
 
 
 ---
 
-* [HADOOP-3280](https://issues.apache.org/jira/browse/HADOOP-3280) | *Blocker* 
| **virtual address space limits break streaming apps**
+* [HADOOP-2345](https://issues.apache.org/jira/browse/HADOOP-2345) | *Major* | 
**new transactions to support HDFS Appends**
 
-This patch adds the mapred.child.ulimit to limit the virtual memory for 
children processes to the given value.
+Introduce new namenode transactions to support appending to HDFS files.
 
 
 ---
 
-* [HADOOP-3266](https://issues.apache.org/jira/browse/HADOOP-3266) | *Major* | 
**Remove HOD changes from CHANGES.txt, as they are now inside src/contrib/hod**
+* [HADOOP-2178](https://issues.apache.org/jira/browse/HADOOP-2178) | *Major* | 
**Job history on HDFS**
 
-Moved HOD change items from CHANGES.txt to a new file 
src/contrib/hod/CHANGES.txt.
+This feature provides a facility to store job history on DFS. The cluster 
admin can now provide either a localFS location or a DFS location using the 
configuration property "mapred.job.history.location" to store job history. 
History will also be logged in the user-specified location. Users can specify 
the history location using the configuration property 
"mapred.job.history.user.location".
+The classes org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndex and 
org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndexParseListener, and 
public method org.apache.hadoop.mapred.DefaultJobHistoryParser.parseMasterIndex 
are not available.
+The signature of public method 
org.apache.hadoop.mapred.DefaultJobHistoryParser.parseJobTasks(File 
jobHistoryFile, JobHistory.JobInfo job) is changed to 
DefaultJobHistoryParser.parseJobTasks(String jobHistoryFile, JobHistory.JobInfo 
job, FileSystem fs).
+The signature of public method 
org.apache.hadoop.mapred.JobHistory.parseHistory(File path, Listener l) is 
changed to JobHistory.parseHistoryFromFS(String path, Listener l, FileSystem fs)
 
 
 ---
 
-* [HADOOP-3239](https://issues.apache.org/jira/browse/HADOOP-3239) | *Major* | 
**exists() calls logs FileNotFoundException in namenode log**
+* [HADOOP-2192](https://issues.apache.org/jira/browse/HADOOP-2192) | *Major* | 
**dfs mv command differs from POSIX standards**
 
-getFileInfo returns null for File not found instead of throwing 
FileNotFoundException
+This patch makes dfs -mv behave more like the Linux mv command, getting rid of 
unnecessary output in dfs -mv and returning an error message when moving 
non-existent files/directories --- mv: cannot stat "filename": No such file or directory.
 
 
 ---
 
-* [HADOOP-3223](https://issues.apache.org/jira/browse/HADOOP-3223) | *Blocker* 
| **Hadoop dfs -help for permissions contains a typo**
+* [HADOOP-2873](https://issues.apache.org/jira/browse/HADOOP-2873) | *Major* | 
**Namenode fails to re-start after cluster shutdown - DFSClient: Could not 
obtain blocks even all datanodes were up & live**
 
-Minor typo fix in help message for chmod. impact : none.
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [HADOOP-3204](https://issues.apache.org/jira/browse/HADOOP-3204) | *Blocker* 
| **LocalFSMerger needs to catch throwable**
+* [HADOOP-2063](https://issues.apache.org/jira/browse/HADOOP-2063) | *Blocker* 
| **Command to pull corrupted files**
 
-Fixes LocalFSMerger in ReduceTask.java to handle errors/exceptions better. 
Prior to this all exceptions except IOException would be silently ignored.
+Added a new option -ignoreCrc to fs -get, or equivalently, fs -copyToLocal, 
such that crc checksum will be ignored for the command.  The use of this 

[37/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
index 4e13959..55f65c0 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
@@ -23,325 +23,325 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-5565](https://issues.apache.org/jira/browse/HADOOP-5565) | *Major* | 
**The job instrumentation API needs to have a method for finalizeJob,**
+* [HADOOP-4234](https://issues.apache.org/jira/browse/HADOOP-4234) | *Minor* | 
**KFS: Allow KFS layer to interface with multiple KFS namenodes**
 
-Add finalizeJob & terminateJob methods to JobTrackerInstrumentation class
+Changed KFS glue layer to allow applications to interface with multiple KFS 
metaservers.
 
 
 ---
 
-* [HADOOP-5548](https://issues.apache.org/jira/browse/HADOOP-5548) | *Blocker* 
| **Observed negative running maps on the job tracker**
+* [HADOOP-4210](https://issues.apache.org/jira/browse/HADOOP-4210) | *Major* | 
**Findbugs warnings are printed related to equals implementation of several 
classes**
 
-Adds synchronization for JobTracker methods in RecoveryManager.
+Changed public class org.apache.hadoop.mapreduce.ID to be an abstract class. 
Removed from class org.apache.hadoop.mapreduce.ID the methods  public static ID 
read(DataInput in) and public static ID forName(String str).
 
 
 ---
 
-* [HADOOP-5531](https://issues.apache.org/jira/browse/HADOOP-5531) | *Blocker* 
| **Remove Chukwa on branch-0.20**
+* [HADOOP-4253](https://issues.apache.org/jira/browse/HADOOP-4253) | *Major* | 
**Fix warnings generated by FindBugs**
 
-Disabled Chukwa unit tests for 0.20 branch only.
+Removed  from class org.apache.hadoop.fs.RawLocalFileSystem deprecated methods 
public String getName(), public void lock(Path p, boolean shared) and public 
void release(Path p).
 
 
 ---
 
-* [HADOOP-5521](https://issues.apache.org/jira/browse/HADOOP-5521) | *Major* | 
**Remove dependency of testcases on RESTART\_COUNT**
+* [HADOOP-4284](https://issues.apache.org/jira/browse/HADOOP-4284) | *Major* | 
**Support for user configurable global filters on HttpServer**
 
-This patch makes TestJobHistory and its dependent testcases independent of 
RESTART\_COUNT.
+Introduced HttpServer method to support global filters.
 
 
 ---
 
-* [HADOOP-5468](https://issues.apache.org/jira/browse/HADOOP-5468) | *Major* | 
**Change Hadoop doc menu to sub-menus**
+* [HADOOP-4454](https://issues.apache.org/jira/browse/HADOOP-4454) | *Minor* | 
**Support comments in 'slaves'  file**
 
-Reformatted HTML documentation for Hadoop to use submenus at the left column.
+Changed processing of conf/slaves file to allow # to begin a comment.
 
 
 ---
 
-* [HADOOP-5030](https://issues.apache.org/jira/browse/HADOOP-5030) | *Major* | 
**Chukwa RPM build improvements**
+* [HADOOP-4572](https://issues.apache.org/jira/browse/HADOOP-4572) | *Major* | 
**INode and its sub-classes should be package private**
 
-Changed RPM install location to the value specified by build.properties file.
+Moved org.apache.hadoop.hdfs.{CreateEditsLog, NNThroughputBenchmark} to 
org.apache.hadoop.hdfs.server.namenode.
 
 
 ---
 
-* [HADOOP-4970](https://issues.apache.org/jira/browse/HADOOP-4970) | *Major* | 
**Use the full path when move files to .Trash/Current**
+* [HADOOP-4575](https://issues.apache.org/jira/browse/HADOOP-4575) | *Major* | 
**An independent HTTPS proxy for HDFS**
 
-Changed trash facility to use absolute path of the deleted file.
+Introduced independent HSFTP proxy server for authenticated access to clusters.
 
 
 ---
 
-* [HADOOP-4873](https://issues.apache.org/jira/browse/HADOOP-4873) | *Major* | 
**display minMaps/Reduces on advanced scheduler page**
+* [HADOOP-4618](https://issues.apache.org/jira/browse/HADOOP-4618) | *Major* | 
**Move http server from FSNamesystem into NameNode.**
 
-Changed fair scheduler UI to display minMaps and minReduces variables.
+Moved HTTP server from FSNameSystem to NameNode. Removed 
FSNamesystem.getNameNodeInfoPort(). Replaced 
FSNamesystem.getDFSNameNodeMachine() and FSNamesystem.getDFSNameNodePort() with 
new method  FSNamesystem.getDFSNameNodeAddress(). Removed constructor 
NameNode(bindAddress, conf).
 
 
 ---
 
-* [HADOOP-4843](https://issues.apache.org/jira/browse/HADOOP-4843) | *Major* | 
**Collect Job History log file and Job Conf file into Chukwa**
+* [HADOOP-4567](https://issues.apache.org/jira/browse/HADOOP-4567) | *Major* | 
**GetFileBlockLocations should return the NetworkTopology information of the 

[42/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.4/CHANGES.0.18.4.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.4/CHANGES.0.18.4.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.4/CHANGES.0.18.4.md
index da4f4c1..6139716 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.4/CHANGES.0.18.4.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.18.4/CHANGES.0.18.4.md
@@ -18,44 +18,22 @@
 -->
 # Apache Hadoop Changelog
 
-## Release 0.18.4 - Unreleased (as of 2016-03-04)
+## Release 0.18.4 - Unreleased (as of 2017-08-28)
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPROVEMENTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-6017](https://issues.apache.org/jira/browse/HADOOP-6017) | NameNode 
and SecondaryNameNode fail to restart because of abnormal filenames. |  Blocker 
| . | Raghu Angadi | Tsz Wo Nicholas Sze |
-| [HADOOP-5644](https://issues.apache.org/jira/browse/HADOOP-5644) | Namnode 
is stuck in safe mode |  Major | . | Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-5557](https://issues.apache.org/jira/browse/HADOOP-5557) | Two minor 
problems in TestOverReplicatedBlocks |  Minor | test | Tsz Wo Nicholas Sze | 
Tsz Wo Nicholas Sze |
-| [HADOOP-5465](https://issues.apache.org/jira/browse/HADOOP-5465) | Blocks 
remain under-replicated |  Blocker | . | Hairong Kuang | Hairong Kuang |
-| [HADOOP-5412](https://issues.apache.org/jira/browse/HADOOP-5412) | 
TestInjectionForSimulatedStorage occasionally fails on timeout |  Major | . | 
Hairong Kuang | Hairong Kuang |
-| [HADOOP-5311](https://issues.apache.org/jira/browse/HADOOP-5311) | Write 
pipeline recovery fails |  Blocker | . | Hairong Kuang | dhruba borthakur |
 | [HADOOP-5192](https://issues.apache.org/jira/browse/HADOOP-5192) | Block 
reciever should not remove a finalized block when block replication fails |  
Blocker | . | Hairong Kuang | Hairong Kuang |
 | [HADOOP-5134](https://issues.apache.org/jira/browse/HADOOP-5134) | 
FSNamesystem#commitBlockSynchronization adds under-construction block locations 
to blocksMap |  Blocker | . | Hairong Kuang | dhruba borthakur |
+| [HADOOP-5412](https://issues.apache.org/jira/browse/HADOOP-5412) | 
TestInjectionForSimulatedStorage occasionally fails on timeout |  Major | . | 
Hairong Kuang | Hairong Kuang |
+| [HADOOP-5311](https://issues.apache.org/jira/browse/HADOOP-5311) | Write 
pipeline recovery fails |  Blocker | . | Hairong Kuang | dhruba borthakur |
+| [HADOOP-5465](https://issues.apache.org/jira/browse/HADOOP-5465) | Blocks 
remain under-replicated |  Blocker | . | Hairong Kuang | Hairong Kuang |
+| [HADOOP-5557](https://issues.apache.org/jira/browse/HADOOP-5557) | Two minor 
problems in TestOverReplicatedBlocks |  Minor | test | Tsz Wo Nicholas Sze | 
Tsz Wo Nicholas Sze |
+| [HADOOP-5644](https://issues.apache.org/jira/browse/HADOOP-5644) | Namnode 
is stuck in safe mode |  Major | . | Suresh Srinivas | Suresh Srinivas |
+| [HADOOP-6017](https://issues.apache.org/jira/browse/HADOOP-6017) | NameNode 
and SecondaryNameNode fail to restart because of abnormal filenames. |  Blocker 
| . | Raghu Angadi | Tsz Wo Nicholas Sze |
 
 
 ### TESTS:
@@ -65,15 +43,3 @@
 | [HADOOP-5114](https://issues.apache.org/jira/browse/HADOOP-5114) | A bunch 
of mapred unit tests are failing on Windows |  Minor | test | Ramya Sunil | 
Raghu Angadi |
 
 
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-





[52/73] [abbrv] hadoop git commit: HDFS-10881. Federation State Store Driver API. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-10881. Federation State Store Driver API. Contributed by Jason Kace and 
Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7b24a8da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7b24a8da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7b24a8da

Branch: refs/heads/HDFS-10467
Commit: 7b24a8da06014536013e421a4c93e4d3dae7095c
Parents: 25a1cad
Author: Inigo 
Authored: Wed Mar 29 19:35:06 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:53 2017 -0700

--
 .../store/StateStoreUnavailableException.java   |  33 
 .../federation/store/StateStoreUtils.java   |  72 +++
 .../store/driver/StateStoreDriver.java  | 172 +
 .../driver/StateStoreRecordOperations.java  | 164 
 .../store/driver/impl/StateStoreBaseImpl.java   |  69 +++
 .../store/driver/impl/package-info.java |  39 
 .../federation/store/driver/package-info.java   |  37 
 .../federation/store/protocol/package-info.java |  31 +++
 .../federation/store/records/BaseRecord.java| 189 +++
 .../federation/store/records/QueryResult.java   |  56 ++
 .../federation/store/records/package-info.java  |  36 
 11 files changed, 898 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b24a8da/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUnavailableException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUnavailableException.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUnavailableException.java
new file mode 100644
index 000..4e6f8c8
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUnavailableException.java
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store;
+
+import java.io.IOException;
+
+/**
+ * Thrown when the state store is not reachable or available. Cached APIs and
+ * queries may succeed. Client should retry again later.
+ */
+public class StateStoreUnavailableException extends IOException {
+
+  private static final long serialVersionUID = 1L;
+
+  public StateStoreUnavailableException(String msg) {
+super(msg);
+  }
+}
\ No newline at end of file
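
A rough caller-side sketch of the retry behaviour the class javadoc suggests;
RecordQuery and fetchRecords() are hypothetical placeholders for any operation that
may hit the state store, not part of this patch.

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException;

public class StateStoreRetryExample {

  /** Hypothetical query; stands in for any call that may reach the state store. */
  interface RecordQuery {
    List<String> fetchRecords() throws IOException;
  }

  static List<String> queryWithRetry(RecordQuery query, int maxAttempts, long waitMs)
      throws IOException, InterruptedException {
    for (int attempt = 1; ; attempt++) {
      try {
        return query.fetchRecords();
      } catch (StateStoreUnavailableException e) {
        // Driver is down or not yet initialized; back off and retry later,
        // as suggested by the exception's javadoc.
        if (attempt >= maxAttempts) {
          throw e;
        }
        Thread.sleep(waitMs);
      }
    }
  }
}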

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b24a8da/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
new file mode 100644
index 000..8c681df
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language 

[58/73] [abbrv] hadoop git commit: HDFS-12223. Rebasing HDFS-10467. Contributed by Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-12223. Rebasing HDFS-10467. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/904138c2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/904138c2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/904138c2

Branch: refs/heads/HDFS-10467
Commit: 904138c28e2e965cd773d05f809fcb6dde9d242b
Parents: 43a1a5f
Author: Inigo Goiri 
Authored: Fri Jul 28 15:55:10 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:54 2017 -0700

--
 .../federation/router/RouterRpcServer.java  | 59 +---
 1 file changed, 51 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/904138c2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 4bae71e..eaaab39 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -64,8 +64,9 @@ import org.apache.hadoop.hdfs.AddBlockFlag;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.inotify.EventBatchList;
-import org.apache.hadoop.hdfs.protocol.AddingECPolicyResponse;
+import org.apache.hadoop.hdfs.protocol.AddECPolicyResponse;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.BlocksStats;
 import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
 import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
 import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
@@ -75,6 +76,7 @@ import org.apache.hadoop.hdfs.protocol.CorruptFileBlocks;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.DirectoryListing;
+import org.apache.hadoop.hdfs.protocol.ECBlockGroupsStats;
 import org.apache.hadoop.hdfs.protocol.EncryptionZone;
 import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
@@ -85,6 +87,7 @@ import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.LastBlockWithStatus;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
 import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
@@ -1736,13 +1739,6 @@ public class RouterRpcServer extends AbstractService 
implements ClientProtocol {
   }
 
   @Override // ClientProtocol
-  public AddingECPolicyResponse[] addErasureCodingPolicies(
-  ErasureCodingPolicy[] policies) throws IOException {
-checkOperation(OperationCategory.WRITE, false);
-return null;
-  }
-
-  @Override // ClientProtocol
   public void unsetErasureCodingPolicy(String src) throws IOException {
 checkOperation(OperationCategory.WRITE, false);
   }
@@ -1808,6 +1804,53 @@ public class RouterRpcServer extends AbstractService 
implements ClientProtocol {
 return null;
   }
 
+  @Override
+  public AddECPolicyResponse[] addErasureCodingPolicies(
+  ErasureCodingPolicy[] arg0) throws IOException {
+checkOperation(OperationCategory.WRITE, false);
+return null;
+  }
+
+  @Override
+  public void removeErasureCodingPolicy(String arg0) throws IOException {
+checkOperation(OperationCategory.WRITE, false);
+  }
+
+  @Override
+  public void disableErasureCodingPolicy(String arg0) throws IOException {
+checkOperation(OperationCategory.WRITE, false);
+  }
+
+  @Override
+  public void enableErasureCodingPolicy(String arg0) throws IOException {
+checkOperation(OperationCategory.WRITE, false);
+  }
+
+  @Override
+  public ECBlockGroupsStats getECBlockGroupsStats() throws IOException {
+checkOperation(OperationCategory.READ, false);
+return null;
+  }
+
+  @Override
+  public HashMap<String, String> getErasureCodingCodecs() throws IOException {
+checkOperation(OperationCategory.READ, false);
+return null;
+  }
+
+  @Override
+  public BlocksStats getBlocksStats() throws IOException {
+checkOperation(OperationCategory.READ, false);
+return null;
+  }
+
+  @Override
+  public 

[23/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/RELEASENOTES.0.23.3.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/RELEASENOTES.0.23.3.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/RELEASENOTES.0.23.3.md
index 48e4473..23d60eb 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/RELEASENOTES.0.23.3.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.3/RELEASENOTES.0.23.3.md
@@ -23,93 +23,93 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-8703](https://issues.apache.org/jira/browse/HADOOP-8703) | *Major* | 
**distcpV2: turn CRC checking off for 0 byte size**
+* [MAPREDUCE-3348](https://issues.apache.org/jira/browse/MAPREDUCE-3348) | 
*Major* | **mapred job -status fails to give info even if the job is present in 
History**
 
-distcp skips CRC on 0 byte files.
+Fixed a bug in MR client to redirect to JobHistoryServer correctly when RM 
forgets the app.
 
 
 ---
 
-* [HADOOP-8551](https://issues.apache.org/jira/browse/HADOOP-8551) | *Major* | 
**fs -mkdir creates parent directories without the -p option**
+* [MAPREDUCE-4072](https://issues.apache.org/jira/browse/MAPREDUCE-4072) | 
*Major* | **User set java.library.path seems to overwrite default creating 
problems native lib loading**
 
-FsShell's "mkdir" no longer implicitly creates all non-existent parent 
directories.  The command adopts the posix compliant behavior of requiring the 
"-p" flag to auto-create parent directories.
+-Djava.library.path in mapred.child.java.opts can cause issues with native 
libraries.  LD\_LIBRARY\_PATH through mapred.child.env should be used instead.
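
A minimal sketch of the recommended settings using the old JobConf API; the heap size
and native library directory below are hypothetical examples.

import org.apache.hadoop.mapred.JobConf;

public class NativeLibJobConf {
  public static JobConf configure() {
    JobConf job = new JobConf();
    // Keep the task JVM options free of -Djava.library.path so the framework's
    // own native-library settings are not overwritten.
    job.set("mapred.child.java.opts", "-Xmx512m");
    // Expose extra native libraries to task JVMs through the environment instead;
    // the directory below is a hypothetical example.
    job.set("mapred.child.env", "LD_LIBRARY_PATH=/opt/native/lib");
    return job;
  }
}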
 
 
 ---
 
-* [HADOOP-8327](https://issues.apache.org/jira/browse/HADOOP-8327) | *Major* | 
**distcpv2 and distcpv1 jars should not coexist**
+* [MAPREDUCE-4017](https://issues.apache.org/jira/browse/MAPREDUCE-4017) | 
*Trivial* | **Add jobname to jobsummary log**
 
-Resolve sporadic distcp issue due to having two DistCp classes (v1 & v2) in 
the classpath.
+The Job Summary log may contain commas in values that are escaped by a '\\' 
character.  This was true before, but is more likely to be exposed now.
 
 
 ---
 
-* [HDFS-3318](https://issues.apache.org/jira/browse/HDFS-3318) | *Blocker* | 
**Hftp hangs on transfers \>2GB**
+* [MAPREDUCE-3812](https://issues.apache.org/jira/browse/MAPREDUCE-3812) | 
*Major* | **Lower default allocation sizes, fix allocation configurations and 
document them**
+
+Removes two sets of previously available config properties:
 
-**WARNING: No release note provided for this incompatible change.**
+1. ( yarn.scheduler.fifo.minimum-allocation-mb and 
yarn.scheduler.fifo.maximum-allocation-mb ) and,
+2. ( yarn.scheduler.capacity.minimum-allocation-mb and 
yarn.scheduler.capacity.maximum-allocation-mb )
 
+In favor of two new, generically named properties:
 

+1. yarn.scheduler.minimum-allocation-mb - This acts as the floor value of 
memory resource requests for containers.
+2. yarn.scheduler.maximum-allocation-mb - This acts as the ceiling value of 
memory resource requests for containers.
 
-* [MAPREDUCE-4311](https://issues.apache.org/jira/browse/MAPREDUCE-4311) | 
*Major* | **Capacity scheduler.xml does not accept decimal values for capacity 
and maximum-capacity settings**
+Both these properties need to be set at the ResourceManager (RM) to take 
effect, as the RM is where the scheduler resides.
 
-**WARNING: No release note provided for this incompatible change.**
+Also changes the default minimum and maximums to 128 MB and 10 GB respectively.
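
A minimal sketch of the two new properties; in practice they belong in the
ResourceManager's yarn-site.xml, and the values below simply restate the new defaults.

import org.apache.hadoop.conf.Configuration;

public class RmAllocationLimits {
  public static Configuration resourceManagerConf() {
    // These values take effect only in the ResourceManager's configuration;
    // shown here in code purely for illustration.
    Configuration conf = new Configuration();
    conf.setInt("yarn.scheduler.minimum-allocation-mb", 128);    // floor per container
    conf.setInt("yarn.scheduler.maximum-allocation-mb", 10240);  // ceiling per container
    return conf;
  }
}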
 
 
 ---
 
-* [MAPREDUCE-4072](https://issues.apache.org/jira/browse/MAPREDUCE-4072) | 
*Major* | **User set java.library.path seems to overwrite default creating 
problems native lib loading**
+* [HDFS-3318](https://issues.apache.org/jira/browse/HDFS-3318) | *Blocker* | 
**Hftp hangs on transfers \>2GB**
 
--Djava.library.path in mapred.child.java.opts can cause issues with native 
libraries.  LD\_LIBRARY\_PATH through mapred.child.env should be used instead.
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [MAPREDUCE-4017](https://issues.apache.org/jira/browse/MAPREDUCE-4017) | 
*Trivial* | **Add jobname to jobsummary log**
+* [HADOOP-8327](https://issues.apache.org/jira/browse/HADOOP-8327) | *Major* | 
**distcpv2 and distcpv1 jars should not coexist**
 
-The Job Summary log may contain commas in values that are escaped by a '\' 
character.  This was true before, but is more likely to be exposed now.
+Resolve sporadic distcp issue due to having two DistCp classes (v1 & v2) in 
the classpath.
 
 
 ---
 
-* [MAPREDUCE-3940](https://issues.apache.org/jira/browse/MAPREDUCE-3940) | 
*Major* | **ContainerTokens should have an expiry interval**
+* 

[13/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/RELEASENOTES.2.0.2-alpha.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/RELEASENOTES.2.0.2-alpha.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/RELEASENOTES.2.0.2-alpha.md
index 7073417..77aa1a0 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/RELEASENOTES.2.0.2-alpha.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/RELEASENOTES.2.0.2-alpha.md
@@ -23,65 +23,63 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-8794](https://issues.apache.org/jira/browse/HADOOP-8794) | *Major* | 
**Modifiy bin/hadoop to point to HADOOP\_YARN\_HOME**
+* [HADOOP-7703](https://issues.apache.org/jira/browse/HADOOP-7703) | *Major* | 
**WebAppContext should also be stopped and cleared**
 
-**WARNING: No release note provided for this incompatible change.**
+Improved exception handling of shutting down web server. (Devaraj K via Eric 
Yang)
 
 
 ---
 
-* [HADOOP-8710](https://issues.apache.org/jira/browse/HADOOP-8710) | *Major* | 
**Remove ability for users to easily run the trash emptier**
+* [MAPREDUCE-3348](https://issues.apache.org/jira/browse/MAPREDUCE-3348) | 
*Major* | **mapred job -status fails to give info even if the job is present in 
History**
 
-The trash emptier may no longer be run using "hadoop 
org.apache.hadoop.fs.Trash". The trash emptier runs on the NameNode (if 
configured). Old trash checkpoints may be deleted using "hadoop fs -expunge".
+Fixed a bug in MR client to redirect to JobHistoryServer correctly when RM 
forgets the app.
 
 
 ---
 
-* [HADOOP-8703](https://issues.apache.org/jira/browse/HADOOP-8703) | *Major* | 
**distcpV2: turn CRC checking off for 0 byte size**
+* [MAPREDUCE-4072](https://issues.apache.org/jira/browse/MAPREDUCE-4072) | 
*Major* | **User set java.library.path seems to overwrite default creating 
problems native lib loading**
 
-distcp skips CRC on 0 byte files.
+-Djava.library.path in mapred.child.java.opts can cause issues with native 
libraries.  LD\_LIBRARY\_PATH through mapred.child.env should be used instead.
 
 
 ---
 
-* [HADOOP-8689](https://issues.apache.org/jira/browse/HADOOP-8689) | *Major* | 
**Make trash a server side configuration option**
+* [HDFS-3110](https://issues.apache.org/jira/browse/HDFS-3110) | *Major* | 
**libhdfs implementation of direct read API**
 
-If fs.trash.interval is configured on the server then the client's value for 
this configuration is ignored.
+libhdfs is enhanced to read directly into user-supplied buffers when possible, 
reducing the number of memory copies.
 
 
 ---
 
-* [HADOOP-8551](https://issues.apache.org/jira/browse/HADOOP-8551) | *Major* | 
**fs -mkdir creates parent directories without the -p option**
+* [MAPREDUCE-4017](https://issues.apache.org/jira/browse/MAPREDUCE-4017) | 
*Trivial* | **Add jobname to jobsummary log**
 
-FsShell's "mkdir" no longer implicitly creates all non-existent parent 
directories.  The command adopts the posix compliant behavior of requiring the 
"-p" flag to auto-create parent directories.
+The Job Summary log may contain commas in values that are escaped by a '\\' 
character.  This was true before, but is more likely to be exposed now.
 
 
 ---
 
-* [HADOOP-8533](https://issues.apache.org/jira/browse/HADOOP-8533) | *Major* | 
**Remove Parallel Call in IPC**
-
-Merged the change to branch-2
-
-

+* [MAPREDUCE-3812](https://issues.apache.org/jira/browse/MAPREDUCE-3812) | 
*Major* | **Lower default allocation sizes, fix allocation configurations and 
document them**
 
-* [HADOOP-8458](https://issues.apache.org/jira/browse/HADOOP-8458) | *Major* | 
**Add management hook to AuthenticationHandler to enable delegation token 
operations support**
+Removes two sets of previously available config properties:
 
-**WARNING: No release note provided for this incompatible change.**
+1. ( yarn.scheduler.fifo.minimum-allocation-mb and 
yarn.scheduler.fifo.maximum-allocation-mb ) and,
+2. ( yarn.scheduler.capacity.minimum-allocation-mb and 
yarn.scheduler.capacity.maximum-allocation-mb )
 
+In favor of two new, generically named properties:
 

+1. yarn.scheduler.minimum-allocation-mb - This acts as the floor value of 
memory resource requests for containers.
+2. yarn.scheduler.maximum-allocation-mb - This acts as the ceiling value of 
memory resource requests for containers.
 
-* [HADOOP-8388](https://issues.apache.org/jira/browse/HADOOP-8388) | *Minor* | 
**Remove unused BlockLocation serialization**
+Both these properties need to be set at the ResourceManager (RM) to take 
effect, as the RM is where the scheduler resides.
 
-**WARNING: No release note provided for this incompatible change.**
+Also changes 

[65/73] [abbrv] hadoop git commit: HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo 
Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43a1a5fe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43a1a5fe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43a1a5fe

Branch: refs/heads/HDFS-10467
Commit: 43a1a5fedbf68f724ca8cdc6106d427c1a6da61b
Parents: ee74813
Author: Inigo Goiri 
Authored: Thu May 11 09:57:03 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:54 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   38 +
 .../resolver/FederationNamespaceInfo.java   |   46 +-
 .../federation/resolver/RemoteLocation.java |   46 +-
 .../federation/router/ConnectionContext.java|  104 +
 .../federation/router/ConnectionManager.java|  408 
 .../federation/router/ConnectionPool.java   |  314 +++
 .../federation/router/ConnectionPoolId.java |  117 ++
 .../router/RemoteLocationContext.java   |   38 +-
 .../server/federation/router/RemoteMethod.java  |  164 ++
 .../server/federation/router/RemoteParam.java   |   71 +
 .../hdfs/server/federation/router/Router.java   |   58 +-
 .../federation/router/RouterRpcClient.java  |  856 
 .../federation/router/RouterRpcServer.java  | 1867 +-
 .../src/main/resources/hdfs-default.xml |   95 +
 .../server/federation/FederationTestUtils.java  |   80 +-
 .../hdfs/server/federation/MockResolver.java|   90 +-
 .../server/federation/RouterConfigBuilder.java  |   20 +-
 .../server/federation/RouterDFSCluster.java |  535 +++--
 .../server/federation/router/TestRouter.java|   31 +-
 .../server/federation/router/TestRouterRpc.java |  869 
 .../router/TestRouterRpcMultiDestination.java   |  216 ++
 21 files changed, 5675 insertions(+), 388 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/43a1a5fe/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 8cdd450..8b39e88 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1117,6 +1117,44 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   // HDFS Router-based federation
   public static final String FEDERATION_ROUTER_PREFIX =
   "dfs.federation.router.";
+  public static final String DFS_ROUTER_DEFAULT_NAMESERVICE =
+  FEDERATION_ROUTER_PREFIX + "default.nameserviceId";
+  public static final String DFS_ROUTER_HANDLER_COUNT_KEY =
+  FEDERATION_ROUTER_PREFIX + "handler.count";
+  public static final int DFS_ROUTER_HANDLER_COUNT_DEFAULT = 10;
+  public static final String DFS_ROUTER_READER_QUEUE_SIZE_KEY =
+  FEDERATION_ROUTER_PREFIX + "reader.queue.size";
+  public static final int DFS_ROUTER_READER_QUEUE_SIZE_DEFAULT = 100;
+  public static final String DFS_ROUTER_READER_COUNT_KEY =
+  FEDERATION_ROUTER_PREFIX + "reader.count";
+  public static final int DFS_ROUTER_READER_COUNT_DEFAULT = 1;
+  public static final String DFS_ROUTER_HANDLER_QUEUE_SIZE_KEY =
+  FEDERATION_ROUTER_PREFIX + "handler.queue.size";
+  public static final int DFS_ROUTER_HANDLER_QUEUE_SIZE_DEFAULT = 100;
+  public static final String DFS_ROUTER_RPC_BIND_HOST_KEY =
+  FEDERATION_ROUTER_PREFIX + "rpc-bind-host";
+  public static final int DFS_ROUTER_RPC_PORT_DEFAULT = ;
+  public static final String DFS_ROUTER_RPC_ADDRESS_KEY =
+  FEDERATION_ROUTER_PREFIX + "rpc-address";
+  public static final String DFS_ROUTER_RPC_ADDRESS_DEFAULT =
+  "0.0.0.0:" + DFS_ROUTER_RPC_PORT_DEFAULT;
+  public static final String DFS_ROUTER_RPC_ENABLE =
+  FEDERATION_ROUTER_PREFIX + "rpc.enable";
+  public static final boolean DFS_ROUTER_RPC_ENABLE_DEFAULT = true;
+
+  // HDFS Router NN client
+  public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
+  FEDERATION_ROUTER_PREFIX + "connection.pool-size";
+  public static final int DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT =
+  64;
+  public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_CLEAN =
+  FEDERATION_ROUTER_PREFIX + "connection.pool.clean.ms";
+  public static final long DFS_ROUTER_NAMENODE_CONNECTION_POOL_CLEAN_DEFAULT =
+  TimeUnit.MINUTES.toMillis(1);
+  public static final String DFS_ROUTER_NAMENODE_CONNECTION_CLEAN_MS =
+  FEDERATION_ROUTER_PREFIX + "connection.clean.ms";
+  public 
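
The keys above all share the "dfs.federation.router." prefix, so they can be
set in hdfs-site.xml or programmatically. A minimal sketch, using only the
constants introduced in the hunk above (the numeric values and the port are
illustrative, not recommendations):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;

    public class RouterRpcConfSketch {
      public static Configuration routerConf() {
        Configuration conf = new Configuration();
        // Handler/reader sizing (defaults above: 10 handlers, handler queue
        // 100, 1 reader, reader queue 100).
        conf.setInt(DFSConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY, 20);
        conf.setInt(DFSConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_KEY, 200);
        conf.setInt(DFSConfigKeys.DFS_ROUTER_READER_COUNT_KEY, 2);
        // Bind address for the Router RPC server; port chosen arbitrarily.
        conf.set(DFSConfigKeys.DFS_ROUTER_RPC_ADDRESS_KEY, "0.0.0.0:8111");
        // Router -> NN connection pool (default 64 above).
        conf.setInt(DFSConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE, 64);
        return conf;
      }
    }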

[53/73] [abbrv] hadoop git commit: HDFS-10630. Federation State Store FS Implementation. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee748132/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
new file mode 100644
index 000..7f0b36a
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
@@ -0,0 +1,483 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.driver;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.conf.Configuration;
+import 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.hdfs.server.federation.store.records.BaseRecord;
+import org.apache.hadoop.hdfs.server.federation.store.records.Query;
+import org.apache.hadoop.hdfs.server.federation.store.records.QueryResult;
+import org.junit.AfterClass;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Base tests for the driver. The particular implementations will use this to
+ * test their functionality.
+ */
+public class TestStateStoreDriverBase {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestStateStoreDriverBase.class);
+
+  private static StateStoreService stateStore;
+  private static Configuration conf;
+
+
+  /**
+   * Get the State Store driver.
+   * @return State Store driver.
+   */
+  protected StateStoreDriver getStateStoreDriver() {
+return stateStore.getDriver();
+  }
+
+  @AfterClass
+  public static void tearDownCluster() {
+if (stateStore != null) {
+  stateStore.stop();
+}
+  }
+
+  /**
+   * Get a new State Store using this configuration.
+   *
+   * @param config Configuration for the State Store.
+   * @throws Exception If we cannot get the State Store.
+   */
+  public static void getStateStore(Configuration config) throws Exception {
+conf = config;
+stateStore = FederationStateStoreTestUtils.getStateStore(conf);
+  }
+
+  private <T extends BaseRecord> T generateFakeRecord(Class<T> recordClass)
+  throws IllegalArgumentException, IllegalAccessException, IOException {
+
+// TODO add record
+return null;
+  }
+
+  /**
+   * Validate that a committed record matches the original one.
+   *
+   * @param original Original record as it was created locally.
+   * @param committed Record as fetched back from the State Store.
+   * @param assertEquals Assert if the records are equal or just return.
+   * @return True if the two records match, false otherwise.
+   * @throws IllegalArgumentException If a field cannot be compared.
+   * @throws IllegalAccessException If a field cannot be accessed.
+   */
+  private boolean validateRecord(
+  BaseRecord original, BaseRecord committed, boolean assertEquals)
+  throws IllegalArgumentException, IllegalAccessException {
+
+boolean ret = true;
+
+Map fields = getFields(original);
+for (String key : fields.keySet()) {
+  if (key.equals("dateModified") ||
+  key.equals("dateCreated") ||
+  key.equals("proto")) {
+// Fields are updated/set on commit and fetch and may not match
+// the fields that are initialized in a non-committed object.
+continue;
+  }
+  Object data1 = getField(original, key);
+  Object data2 = getField(committed, key);
+  if (assertEquals) {
+assertEquals("Field " + key + " does not match", data1, data2);
+  } else if (!data1.equals(data2)) {
+ret = false;
+  }
+}
+
+long now = 
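
Concrete driver tests extend this base class and only have to provide the
State Store setup. A minimal sketch of that pattern, with a made-up subclass
name and setup (this is not the code of the actual tests in this patch):

    import static org.junit.Assert.assertNotNull;

    import org.apache.hadoop.conf.Configuration;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class TestMyStateStoreDriver extends TestStateStoreDriverBase {

      @BeforeClass
      public static void setupStateStore() throws Exception {
        Configuration conf = new Configuration();
        // Select the StateStoreDriver implementation under test here before
        // handing the configuration to the shared base class.
        getStateStore(conf);
      }

      @Test
      public void testDriverIsAvailable() {
        assertNotNull(getStateStoreDriver());
      }
    }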

[68/73] [abbrv] hadoop git commit: HDFS-10880. Federation Mount Table State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-10880. Federation Mount Table State Store internal API. Contributed by 
Jason Kace and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f2f761d3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f2f761d3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f2f761d3

Branch: refs/heads/HDFS-10467
Commit: f2f761d357882e414c4402b63ff6099d8ff3ee37
Parents: 2a9235b
Author: Inigo Goiri 
Authored: Fri Aug 4 18:00:12 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:55 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   7 +-
 .../federation/resolver/MountTableManager.java  |  80 +++
 .../federation/resolver/MountTableResolver.java | 544 +++
 .../federation/resolver/PathLocation.java   | 124 -
 .../resolver/order/DestinationOrder.java|  29 +
 .../federation/resolver/order/package-info.java |  29 +
 .../federation/router/FederationUtil.java   |  56 +-
 .../hdfs/server/federation/router/Router.java   |   3 +-
 .../federation/store/MountTableStore.java   |  49 ++
 .../federation/store/StateStoreService.java |   2 +
 .../store/impl/MountTableStoreImpl.java | 116 
 .../protocol/AddMountTableEntryRequest.java |  47 ++
 .../protocol/AddMountTableEntryResponse.java|  42 ++
 .../protocol/GetMountTableEntriesRequest.java   |  49 ++
 .../protocol/GetMountTableEntriesResponse.java  |  53 ++
 .../protocol/RemoveMountTableEntryRequest.java  |  49 ++
 .../protocol/RemoveMountTableEntryResponse.java |  42 ++
 .../protocol/UpdateMountTableEntryRequest.java  |  51 ++
 .../protocol/UpdateMountTableEntryResponse.java |  43 ++
 .../pb/AddMountTableEntryRequestPBImpl.java |  84 +++
 .../pb/AddMountTableEntryResponsePBImpl.java|  76 +++
 .../pb/GetMountTableEntriesRequestPBImpl.java   |  76 +++
 .../pb/GetMountTableEntriesResponsePBImpl.java  | 104 
 .../pb/RemoveMountTableEntryRequestPBImpl.java  |  76 +++
 .../pb/RemoveMountTableEntryResponsePBImpl.java |  76 +++
 .../pb/UpdateMountTableEntryRequestPBImpl.java  |  96 
 .../pb/UpdateMountTableEntryResponsePBImpl.java |  76 +++
 .../federation/store/records/MountTable.java| 301 ++
 .../store/records/impl/pb/MountTablePBImpl.java | 213 
 .../src/main/proto/FederationProtocol.proto |  61 ++-
 .../hdfs/server/federation/MockResolver.java|   9 +-
 .../resolver/TestMountTableResolver.java| 396 ++
 .../store/FederationStateStoreTestUtils.java|  16 +
 .../store/TestStateStoreMountTable.java | 250 +
 .../store/driver/TestStateStoreDriverBase.java  |  12 +
 .../store/records/TestMountTable.java   | 176 ++
 36 files changed, 3437 insertions(+), 76 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2f761d3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index d1c2b41..5433df3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -27,6 +27,8 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant;
 import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
@@ -1175,8 +1177,9 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   // HDFS Router State Store connection
   public static final String FEDERATION_FILE_RESOLVER_CLIENT_CLASS =
   FEDERATION_ROUTER_PREFIX + "file.resolver.client.class";
-  public static final String FEDERATION_FILE_RESOLVER_CLIENT_CLASS_DEFAULT =
-  "org.apache.hadoop.hdfs.server.federation.MockResolver";
+  public static final Class<? extends FileSubclusterResolver>
+  FEDERATION_FILE_RESOLVER_CLIENT_CLASS_DEFAULT =
+  MountTableResolver.class;
   public static final String FEDERATION_NAMENODE_RESOLVER_CLIENT_CLASS =
   

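The hunk above also flips the default file resolver from the test MockResolver
to the new MountTableResolver. A minimal sketch of pinning it explicitly,
assuming MountTableResolver implements the FileSubclusterResolver interface
imported at the top of the same file diff (the constants and classes come from
the diff; the wrapper class is made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;
    import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
    import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;

    public class MountTableResolverConfSketch {
      public static Configuration resolverConf() {
        Configuration conf = new Configuration();
        // Pin the mount table resolver explicitly; it is also the new default.
        conf.setClass(DFSConfigKeys.FEDERATION_FILE_RESOLVER_CLIENT_CLASS,
            MountTableResolver.class, FileSubclusterResolver.class);
        return conf;
      }
    }
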
[70/73] [abbrv] hadoop git commit: HDFS-10631. Federation State Store ZooKeeper implementation. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-10631. Federation State Store ZooKeeper implementation. Contributed by 
Jason Kace and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/da654226
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/da654226
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/da654226

Branch: refs/heads/HDFS-10467
Commit: da654226d16f9336a9841cd5b4017826667eb2af
Parents: 5d906b9
Author: Inigo Goiri 
Authored: Mon Aug 21 11:40:41 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:56 2017 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   9 +
 .../driver/impl/StateStoreSerializableImpl.java |  19 ++
 .../driver/impl/StateStoreZooKeeperImpl.java| 298 +++
 .../store/driver/TestStateStoreDriverBase.java  |   2 +-
 .../store/driver/TestStateStoreZK.java  | 105 +++
 5 files changed, 432 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/da654226/hadoop-hdfs-project/hadoop-hdfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index 360aeae..27807ea 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -203,6 +203,15 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <groupId>com.fasterxml.jackson.core</groupId>
       <artifactId>jackson-databind</artifactId>
     </dependency>
+    <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-framework</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-test</artifactId>
+      <scope>test</scope>
+    </dependency>
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/da654226/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreSerializableImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreSerializableImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreSerializableImpl.java
index e9b3fdf..e2038fa 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreSerializableImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreSerializableImpl.java
@@ -30,6 +30,11 @@ import 
org.apache.hadoop.hdfs.server.federation.store.records.BaseRecord;
  */
 public abstract class StateStoreSerializableImpl extends StateStoreBaseImpl {
 
+  /** Mark for slashes in path names. */
+  protected static final String SLASH_MARK = "0SLASH0";
+  /** Mark for colon in path names. */
+  protected static final String COLON_MARK = "_";
+
   /** Default serializer for this driver. */
   private StateStoreSerializer serializer;
 
@@ -74,4 +79,18 @@ public abstract class StateStoreSerializableImpl extends 
StateStoreBaseImpl {
   String data, Class clazz, boolean includeDates) throws IOException {
 return serializer.deserialize(data, clazz);
   }
+
+  /**
+   * Get the primary key for a record. Slashes (and colons) are escaped so the
+   * key is not interpreted as a nested folder or znode path by the back ends.
+   *
+   * @param record Record to get the primary key for.
+   * @return Primary key for the record.
+   */
+  protected static String getPrimaryKey(BaseRecord record) {
+String primaryKey = record.getPrimaryKey();
+primaryKey = primaryKey.replaceAll("/", SLASH_MARK);
+primaryKey = primaryKey.replaceAll(":", COLON_MARK);
+return primaryKey;
+  }
 }
\ No newline at end of file
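
A standalone illustration of the escaping that getPrimaryKey() above performs:
'/' would otherwise be treated as a folder (or znode) separator by the file and
ZooKeeper based stores, and ':' is rewritten with COLON_MARK as well. The
sample key below is invented for the example:

    public class PrimaryKeyEscapeDemo {
      public static void main(String[] args) {
        String primaryKey = "ns0:/path/to/mount";
        // Same substitutions as StateStoreSerializableImpl.getPrimaryKey().
        String escaped = primaryKey.replaceAll("/", "0SLASH0")
            .replaceAll(":", "_");
        // Prints ns0_0SLASH0path0SLASH0to0SLASH0mount
        System.out.println(escaped);
      }
    }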

http://git-wip-us.apache.org/repos/asf/hadoop/blob/da654226/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreZooKeeperImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreZooKeeperImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreZooKeeperImpl.java
new file mode 100644
index 000..ddcd537
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreZooKeeperImpl.java
@@ -0,0 +1,298 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you 

[64/73] [abbrv] hadoop git commit: HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43a1a5fe/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
new file mode 100644
index 000..3a32be1
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -0,0 +1,856 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetSocketAddress;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.ThreadFactory;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.io.retry.RetryPolicy.RetryAction.RetryDecision;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.StandbyException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+/**
+ * A client proxy for Router -> NN communication using the NN ClientProtocol.
+ * 
+ * Provides routers to invoke remote ClientProtocol methods and handle
+ * retries/failover.
+ * 
+ * invokeSingle Make a single request to a single namespace
+ * invokeSequential Make a sequential series of requests to multiple
+ * ordered namespaces until a condition is met.
+ * invokeConcurrent Make concurrent requests to multiple namespaces and
+ * return all of the results.
+ * 
+ * Also maintains a cached pool of connections to NNs. Connections are managed
+ * by the ConnectionManager and are unique to each user + NN. The size of the
+ * connection pool can be configured. Larger pools allow for more simultaneous
+ * requests to a single NN from a single user.
+ */
+public class RouterRpcClient {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(RouterRpcClient.class);
+
+
+  /** Router identifier. */
+  private final String routerId;
+
+  /** Interface to identify the active NN for a nameservice or blockpool ID. */
+  private final ActiveNamenodeResolver namenodeResolver;
+
+  /** Connection pool to the Namenodes per user for performance. */
+  private final ConnectionManager connectionManager;
+  /** Service to run asynchronous calls. */
+  private final ExecutorService executorService;
+  /** Retry policy for router -> NN communication. */
+  private final RetryPolicy retryPolicy;
+
+  /** Pattern to parse a stack trace line. */
+  private static final Pattern STACK_TRACE_PATTERN =
+  Pattern.compile("\\tat (.*)\\.(.*)\\((.*):(\\d*)\\)");
+
+
+  /**
+   * Create a router RPC 
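
The class comment above describes three invocation styles. A stripped-down,
hypothetical sketch of the invokeSequential idea only: try an ordered list of
namespaces and stop at the first acceptable answer. The method name, parameters
and error handling below are illustrative and not the actual RouterRpcClient
API, which dispatches through RemoteMethod/RemoteLocation with retry policies:

    import java.io.IOException;
    import java.util.List;
    import java.util.function.Function;
    import java.util.function.Predicate;

    public class SequentialInvokeSketch {

      static <T> T invokeSequential(List<String> orderedNamespaces,
          Function<String, T> call, Predicate<T> accept) throws IOException {
        IOException lastError = null;
        for (String ns : orderedNamespaces) {
          try {
            T result = call.apply(ns);
            if (accept.test(result)) {
              return result;              // condition met, stop here
            }
          } catch (RuntimeException e) {
            // Remember the failure and try the next namespace in order.
            lastError = new IOException("Failed to invoke on " + ns, e);
          }
        }
        if (lastError != null) {
          throw lastError;
        }
        return null;                      // no namespace met the condition
      }
    }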

[15/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/RELEASENOTES.2.0.0-alpha.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/RELEASENOTES.2.0.0-alpha.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/RELEASENOTES.2.0.0-alpha.md
index 06edd6d..674a8eb 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/RELEASENOTES.2.0.0-alpha.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/RELEASENOTES.2.0.0-alpha.md
@@ -23,100 +23,100 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-8314](https://issues.apache.org/jira/browse/HADOOP-8314) | *Major* | 
**HttpServer#hasAdminAccess should return false if authorization is enabled but 
user is not authenticated**
+* [HDFS-395](https://issues.apache.org/jira/browse/HDFS-395) | *Major* | **DFS 
Scalability: Incremental block reports**
 
-**WARNING: No release note provided for this incompatible change.**
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [HADOOP-8270](https://issues.apache.org/jira/browse/HADOOP-8270) | *Minor* | 
**hadoop-daemon.sh stop action should return 0 for an already stopped service**
+* [HADOOP-7524](https://issues.apache.org/jira/browse/HADOOP-7524) | *Major* | 
**Change RPC to allow multiple protocols including multiple versions of the 
same protocol**
 
-The daemon stop action no longer returns failure when stopping an already 
stopped service.
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [HADOOP-8184](https://issues.apache.org/jira/browse/HADOOP-8184) | *Major* | 
**ProtoBuf RPC engine does not need it own reply packet - it can use the IPC 
layer reply packet.**
+* [HADOOP-7704](https://issues.apache.org/jira/browse/HADOOP-7704) | *Minor* | 
**JsonFactory can be created only once and used for every next request to 
create JsonGenerator inside JMXJsonServlet**
 
-This change will affect the output of errors for some Hadoop CLI commands. 
Specifically, the name of the exception class will no longer appear, and 
instead only the text of the exception message will appear.
+Reduce number of object created by JMXJsonServlet. (Devaraj K via Eric Yang)
 
 
 ---
 
-* [HADOOP-8154](https://issues.apache.org/jira/browse/HADOOP-8154) | *Major* | 
**DNS#getIPs shouldn't silently return the local host IP for bogus interface 
names**
+* [MAPREDUCE-3818](https://issues.apache.org/jira/browse/MAPREDUCE-3818) | 
*Blocker* | **Trunk MRV1 compilation is broken.**
 
-**WARNING: No release note provided for this incompatible change.**
+Fixed broken compilation in TestSubmitJob after the patch for HDFS-2895.
 
 
 ---
 
-* [HADOOP-8149](https://issues.apache.org/jira/browse/HADOOP-8149) | *Major* | 
**cap space usage of default log4j rolling policy**
+* [HDFS-2731](https://issues.apache.org/jira/browse/HDFS-2731) | *Major* | 
**HA: Autopopulate standby name dirs if they're empty**
 
-Hadoop log files are now rolled by size instead of date (daily) by default. 
Tools that depend on the log file name format will need to be updated. Users 
who would like to maintain the previous settings of hadoop.root.logger and 
hadoop.security.logger can use their current log4j.properties files and update 
the HADOOP\_ROOT\_LOGGER and HADOOP\_SECURITY\_LOGGER environment variables to 
use DRFA and DRFAS respectively.
+The HA NameNode may now be started with the "-bootstrapStandby" flag. This 
causes it to copy the namespace information and most recent checkpoint from its 
HA pair, and save it to local storage, allowing an HA setup to be bootstrapped 
without use of rsync or external tools.
 
 
 ---
 
-* [HADOOP-7704](https://issues.apache.org/jira/browse/HADOOP-7704) | *Minor* | 
**JsonFactory can be created only once and used for every next request to 
create JsonGenerator inside JMXJsonServlet**
+* [HDFS-2303](https://issues.apache.org/jira/browse/HDFS-2303) | *Major* | 
**Unbundle jsvc**
 
-Reduce number of object created by JMXJsonServlet. (Devaraj K via Eric Yang)
+To run secure Datanodes users must install jsvc for their platform and set 
JSVC\_HOME to point to the location of jsvc in their environment.
 
 
 ---
 
-* [HADOOP-7524](https://issues.apache.org/jira/browse/HADOOP-7524) | *Major* | 
**Change RPC to allow multiple protocols including multiple versions of the 
same protocol**
+* [HDFS-3044](https://issues.apache.org/jira/browse/HDFS-3044) | *Major* | 
**fsck move should be non-destructive by default**
 
-**WARNING: No release note provided for this incompatible change.**
+The fsck "move" option is no longer destructive. It copies the accessible 
blocks of corrupt files to lost and found as before, but no longer deletes the 
corrupt files after copying the blocks. The original, 

[62/73] [abbrv] hadoop git commit: HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43a1a5fe/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
index ee6f57d..2875750 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
@@ -43,7 +43,7 @@ import org.apache.hadoop.util.Time;
 
 /**
  * In-memory cache/mock of a namenode and file resolver. Stores the most
- * recently updated NN information for each nameservice and block pool. Also
+ * recently updated NN information for each nameservice and block pool. It also
  * stores a virtual mount table for resolving global namespace paths to local 
NN
  * paths.
  */
@@ -51,82 +51,93 @@ public class MockResolver
 implements ActiveNamenodeResolver, FileSubclusterResolver {
 
   private Map resolver =
-  new HashMap();
-  private Map locations =
-  new HashMap();
-  private Set namespaces =
-  new HashSet();
+  new HashMap<>();
+  private Map locations = new HashMap<>();
+  private Set namespaces = new HashSet<>();
   private String defaultNamespace = null;
 
+
   public MockResolver(Configuration conf, StateStoreService store) {
 this.cleanRegistrations();
   }
 
-  public void addLocation(String mount, String nameservice, String location) {
-RemoteLocation remoteLocation = new RemoteLocation(nameservice, location);
-List locationsList = locations.get(mount);
+  public void addLocation(String mount, String nsId, String location) {
+List locationsList = this.locations.get(mount);
 if (locationsList == null) {
-  locationsList = new LinkedList();
-  locations.put(mount, locationsList);
+  locationsList = new LinkedList<>();
+  this.locations.put(mount, locationsList);
 }
+
+final RemoteLocation remoteLocation = new RemoteLocation(nsId, location);
 if (!locationsList.contains(remoteLocation)) {
   locationsList.add(remoteLocation);
 }
 
 if (this.defaultNamespace == null) {
-  this.defaultNamespace = nameservice;
+  this.defaultNamespace = nsId;
 }
   }
 
   public synchronized void cleanRegistrations() {
-this.resolver =
-new HashMap();
-this.namespaces = new HashSet();
+this.resolver = new HashMap<>();
+this.namespaces = new HashSet<>();
   }
 
   @Override
   public void updateActiveNamenode(
-  String ns, InetSocketAddress successfulAddress) {
+  String nsId, InetSocketAddress successfulAddress) {
 
 String address = successfulAddress.getHostName() + ":" +
 successfulAddress.getPort();
-String key = ns;
+String key = nsId;
 if (key != null) {
   // Update the active entry
   @SuppressWarnings("unchecked")
-  List iterator =
-  (List) resolver.get(key);
-  for (FederationNamenodeContext namenode : iterator) {
+  List namenodes =
+  (List) this.resolver.get(key);
+  for (FederationNamenodeContext namenode : namenodes) {
 if (namenode.getRpcAddress().equals(address)) {
   MockNamenodeContext nn = (MockNamenodeContext) namenode;
   nn.setState(FederationNamenodeServiceState.ACTIVE);
   break;
 }
   }
-  Collections.sort(iterator, new NamenodePriorityComparator());
+  // This operation modifies the list so we need to be careful
+  synchronized(namenodes) {
+Collections.sort(namenodes, new NamenodePriorityComparator());
+  }
 }
   }
 
   @Override
   public List
   getNamenodesForNameserviceId(String nameserviceId) {
-return resolver.get(nameserviceId);
+// Return a copy of the list because it is updated periodically
+List namenodes =
+this.resolver.get(nameserviceId);
+return Collections.unmodifiableList(new ArrayList<>(namenodes));
   }
 
   @Override
   public List getNamenodesForBlockPoolId(
   String blockPoolId) {
-return resolver.get(blockPoolId);
+// Return a copy of the list because it is updated periodically
+List namenodes =
+this.resolver.get(blockPoolId);
+return Collections.unmodifiableList(new ArrayList<>(namenodes));
   }
 
   private static class MockNamenodeContext
   implements FederationNamenodeContext {
+
+private String namenodeId;
+private String nameserviceId;
+
 private String webAddress;
 private String rpcAddress;
 private String serviceAddress;
 private String lifelineAddress;
-private String namenodeId;
-private String 
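
The change above hands callers an unmodifiable copy of the namenode list
instead of the live, periodically re-sorted one. A small standalone example of
why that defensive copy matters (the list contents are made up):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class DefensiveCopyDemo {
      public static void main(String[] args) {
        List<String> live = new ArrayList<>(Arrays.asList("nn1", "nn2"));

        // Snapshot handed to the caller: later updates to 'live' are not
        // visible here and the caller cannot mutate the resolver's state.
        List<String> snapshot =
            Collections.unmodifiableList(new ArrayList<>(live));

        live.add("nn3");                     // resolver updates its own list
        System.out.println(snapshot.size()); // still prints 2
        // snapshot.add("nn4");              // would throw
                                             // UnsupportedOperationException
      }
    }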

[06/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
index 71d6e77..5bcfe1d 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
@@ -24,648 +24,642 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HDFS-4997](https://issues.apache.org/jira/browse/HDFS-4997) | libhdfs 
doesn't return correct error codes in most cases |  Major | libhdfs | Colin 
Patrick McCabe | Colin Patrick McCabe |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HDFS-4997](https://issues.apache.org/jira/browse/HDFS-4997) | libhdfs 
doesn't return correct error codes in most cases |  Major | libhdfs | Colin P. 
McCabe | Colin P. McCabe |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10047](https://issues.apache.org/jira/browse/HADOOP-10047) | Add a 
directbuffer Decompressor API to hadoop |  Major | io | Gopal V | Gopal V |
-| [HADOOP-9848](https://issues.apache.org/jira/browse/HADOOP-9848) | Create a 
MiniKDC for use with security testing |  Major | security, test | Wei Yan | Wei 
Yan |
-| [HADOOP-9618](https://issues.apache.org/jira/browse/HADOOP-9618) | Add 
thread which detects JVM pauses |  Major | util | Todd Lipcon | Todd Lipcon |
 | [HADOOP-9432](https://issues.apache.org/jira/browse/HADOOP-9432) | Add 
support for markdown .md files in site documentation |  Minor | build, 
documentation | Steve Loughran | Steve Loughran |
+| [HADOOP-9618](https://issues.apache.org/jira/browse/HADOOP-9618) | Add 
thread which detects JVM pauses |  Major | util | Todd Lipcon | Todd Lipcon |
+| [MAPREDUCE-5265](https://issues.apache.org/jira/browse/MAPREDUCE-5265) | 
History server admin service to refresh user and superuser group mappings |  
Major | jobhistoryserver | Jason Lowe | Ashwin Shankar |
+| [MAPREDUCE-5266](https://issues.apache.org/jira/browse/MAPREDUCE-5266) | 
Ability to refresh retention settings on history server |  Major | 
jobhistoryserver | Jason Lowe | Ashwin Shankar |
+| [HADOOP-9848](https://issues.apache.org/jira/browse/HADOOP-9848) | Create a 
MiniKDC for use with security testing |  Major | security, test | Wei Yan | Wei 
Yan |
 | [HADOOP-8545](https://issues.apache.org/jira/browse/HADOOP-8545) | 
Filesystem Implementation for OpenStack Swift |  Major | fs | Tim Miller | 
Dmitry Mezhensky |
-| [HDFS-5703](https://issues.apache.org/jira/browse/HDFS-5703) | Add support 
for HTTPS and swebhdfs to HttpFS |  Major | webhdfs | Alejandro Abdelnur | 
Alejandro Abdelnur |
-| [HDFS-5260](https://issues.apache.org/jira/browse/HDFS-5260) | Merge 
zero-copy memory-mapped HDFS client reads to trunk and branch-2. |  Major | 
hdfs-client, libhdfs | Chris Nauroth | Chris Nauroth |
-| [HDFS-4949](https://issues.apache.org/jira/browse/HDFS-4949) | Centralized 
cache management in HDFS |  Major | datanode, namenode | Andrew Wang | Andrew 
Wang |
-| [HDFS-2832](https://issues.apache.org/jira/browse/HDFS-2832) | Enable 
support for heterogeneous storages in HDFS - DN as a collection of storages |  
Major | datanode, namenode | Suresh Srinivas | Arpit Agarwal |
 | [MAPREDUCE-5332](https://issues.apache.org/jira/browse/MAPREDUCE-5332) | 
Support token-preserving restart of history server |  Major | jobhistoryserver 
| Jason Lowe | Jason Lowe |
-| [MAPREDUCE-5266](https://issues.apache.org/jira/browse/MAPREDUCE-5266) | 
Ability to refresh retention settings on history server |  Major | 
jobhistoryserver | Jason Lowe | Ashwin Shankar |
-| [MAPREDUCE-5265](https://issues.apache.org/jira/browse/MAPREDUCE-5265) | 
History server admin service to refresh user and superuser group mappings |  
Major | jobhistoryserver | Jason Lowe | Ashwin Shankar |
+| [YARN-1021](https://issues.apache.org/jira/browse/YARN-1021) | Yarn 
Scheduler Load Simulator |  Major | scheduler | Wei Yan | Wei Yan |
+| [HDFS-5260](https://issues.apache.org/jira/browse/HDFS-5260) | Merge 
zero-copy memory-mapped HDFS client reads to trunk and branch-2. |  Major | 
hdfs-client, libhdfs | Chris Nauroth | Chris Nauroth |
+| [YARN-1253](https://issues.apache.org/jira/browse/YARN-1253) | Changes to 
LinuxContainerExecutor to run containers as a single dedicated user in 
non-secure mode |  Blocker | nodemanager | Alejandro Abdelnur | Roman 
Shaposhnik |
 | [MAPREDUCE-1176](https://issues.apache.org/jira/browse/MAPREDUCE-1176) | 

[31/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
index 40de51c..aa5e8cf 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
@@ -24,745 +24,739 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-7229](https://issues.apache.org/jira/browse/HADOOP-7229) | Absolute 
path to kinit in auto-renewal thread |  Major | security | Aaron T. Myers | 
Aaron T. Myers |
-| [HADOOP-7137](https://issues.apache.org/jira/browse/HADOOP-7137) | Remove 
hod contrib |  Major | . | Nigel Daley | Nigel Daley |
-| [HADOOP-7013](https://issues.apache.org/jira/browse/HADOOP-7013) | Add 
boolean field isCorrupt to BlockLocation |  Major | . | Patrick Kling | Patrick 
Kling |
-| [HADOOP-6949](https://issues.apache.org/jira/browse/HADOOP-6949) | Reduces 
RPC packet size for primitive arrays, especially long[], which is used at block 
reporting |  Major | io | Navis | Matt Foley |
-| [HADOOP-6905](https://issues.apache.org/jira/browse/HADOOP-6905) | Better 
logging messages when a delegation token is invalid |  Major | security | Kan 
Zhang | Kan Zhang |
-| [HADOOP-6835](https://issues.apache.org/jira/browse/HADOOP-6835) | Support 
concatenated gzip files |  Major | io | Tom White | Greg Roelofs |
-| [HADOOP-6787](https://issues.apache.org/jira/browse/HADOOP-6787) | Factor 
out glob pattern code from FileContext and Filesystem |  Major | fs | Luke Lu | 
Luke Lu |
 | [HADOOP-6730](https://issues.apache.org/jira/browse/HADOOP-6730) | Bug in 
FileContext#copy and provide base class for FileContext tests |  Major | fs, 
test | Eli Collins | Ravi Phulari |
-| [HDFS-1825](https://issues.apache.org/jira/browse/HDFS-1825) | Remove 
thriftfs contrib |  Major | . | Nigel Daley | Nigel Daley |
-| [HDFS-1560](https://issues.apache.org/jira/browse/HDFS-1560) | dfs.data.dir 
permissions should default to 700 |  Minor | datanode | Todd Lipcon | Todd 
Lipcon |
-| [HDFS-1435](https://issues.apache.org/jira/browse/HDFS-1435) | Provide an 
option to store fsimage compressed |  Major | namenode | Hairong Kuang | 
Hairong Kuang |
-| [HDFS-1315](https://issues.apache.org/jira/browse/HDFS-1315) | Add fsck 
event to audit log and remove other audit log events corresponding to FSCK 
listStatus and open calls |  Major | namenode, tools | Suresh Srinivas | Suresh 
Srinivas |
 | [HDFS-1109](https://issues.apache.org/jira/browse/HDFS-1109) | HFTP and URL 
Encoding |  Major | contrib/hdfsproxy, datanode | Dmytro Molkov | Dmytro Molkov 
|
-| [HDFS-1080](https://issues.apache.org/jira/browse/HDFS-1080) | 
SecondaryNameNode image transfer should use the defined http address rather 
than local ip address |  Major | namenode | Jakob Homan | Jakob Homan |
 | [HDFS-1061](https://issues.apache.org/jira/browse/HDFS-1061) | Memory 
footprint optimization for INodeFile object. |  Minor | namenode | Bharath 
Mundlapudi | Bharath Mundlapudi |
-| [HDFS-903](https://issues.apache.org/jira/browse/HDFS-903) | NN should 
verify images and edit logs on startup |  Critical | namenode | Eli Collins | 
Hairong Kuang |
+| [MAPREDUCE-1683](https://issues.apache.org/jira/browse/MAPREDUCE-1683) | 
Remove JNI calls from ClusterStatus cstr |  Major | jobtracker | Chris Douglas 
| Luke Lu |
+| [HADOOP-6787](https://issues.apache.org/jira/browse/HADOOP-6787) | Factor 
out glob pattern code from FileContext and Filesystem |  Major | fs | Luke Lu | 
Luke Lu |
+| [HDFS-1080](https://issues.apache.org/jira/browse/HDFS-1080) | 
SecondaryNameNode image transfer should use the defined http address rather 
than local ip address |  Major | namenode | Jakob Homan | Jakob Homan |
+| [HADOOP-6835](https://issues.apache.org/jira/browse/HADOOP-6835) | Support 
concatenated gzip files |  Major | io | Tom White | Greg Roelofs |
+| [MAPREDUCE-1733](https://issues.apache.org/jira/browse/MAPREDUCE-1733) | 
Authentication between pipes processes and java counterparts. |  Major | . | 
Jitendra Nath Pandey | Jitendra Nath Pandey |
+| [HDFS-1315](https://issues.apache.org/jira/browse/HDFS-1315) | Add fsck 
event to audit log and remove other audit log events corresponding to FSCK 
listStatus and open calls |  Major | namenode, tools | Suresh Srinivas | Suresh 
Srinivas |
+| [MAPREDUCE-1866](https://issues.apache.org/jira/browse/MAPREDUCE-1866) | 
Remove deprecated class org.apache.hadoop.streaming.UTF8ByteArrayUtils |  Minor 
| contrib/streaming | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
 | 

[20/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
index 2b0e48c..ea968d6 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
@@ -20,16 +20,6 @@
 
 ## Release 0.9.0 - 2006-12-01
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
@@ -43,61 +33,61 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-725](https://issues.apache.org/jira/browse/HADOOP-725) | 
chooseTargets method in FSNamesystem is very inefficient |  Major | . | Milind 
Bhandarkar | Milind Bhandarkar |
-| [HADOOP-721](https://issues.apache.org/jira/browse/HADOOP-721) | jobconf.jsp 
shouldn't find the jobconf.xsl via http |  Major | . | Owen O'Malley | Arun C 
Murthy |
-| [HADOOP-689](https://issues.apache.org/jira/browse/HADOOP-689) | hadoop 
should provide a common way to wrap instances with different types into one 
type |  Major | io | Feng Jiang |  |
-| [HADOOP-688](https://issues.apache.org/jira/browse/HADOOP-688) | move dfs 
administrative interfaces to a separate command |  Major | . | dhruba borthakur 
| dhruba borthakur |
-| [HADOOP-677](https://issues.apache.org/jira/browse/HADOOP-677) | RPC should 
send a fixed header and version at the start of connection |  Major | ipc | 
Owen O'Malley | Owen O'Malley |
-| [HADOOP-668](https://issues.apache.org/jira/browse/HADOOP-668) | improvement 
to DFS browsing WI |  Minor | . | Yoram Arnon | Hairong Kuang |
-| [HADOOP-661](https://issues.apache.org/jira/browse/HADOOP-661) | JobConf for 
a job should be viewable from the web/ui |  Major | . | Owen O'Malley | Arun C 
Murthy |
 | [HADOOP-655](https://issues.apache.org/jira/browse/HADOOP-655) | remove 
deprecations |  Minor | . | Doug Cutting | Doug Cutting |
-| [HADOOP-613](https://issues.apache.org/jira/browse/HADOOP-613) | The final 
merge on the reduces should feed the reduce directly |  Major | . | Owen 
O'Malley | Devaraj Das |
 | [HADOOP-565](https://issues.apache.org/jira/browse/HADOOP-565) | Upgrade 
Jetty to 6.x |  Major | . | Owen O'Malley | Sanjay Dahiya |
-| [HADOOP-538](https://issues.apache.org/jira/browse/HADOOP-538) | Implement a 
nio's 'direct buffer' based wrapper over zlib to improve performance of 
java.util.zip.{De\|In}flater as a 'custom codec' |  Major | io | Arun C Murthy 
| Arun C Murthy |
+| [HADOOP-688](https://issues.apache.org/jira/browse/HADOOP-688) | move dfs 
administrative interfaces to a separate command |  Major | . | dhruba borthakur 
| dhruba borthakur |
+| [HADOOP-613](https://issues.apache.org/jira/browse/HADOOP-613) | The final 
merge on the reduces should feed the reduce directly |  Major | . | Owen 
O'Malley | Devaraj Das |
+| [HADOOP-661](https://issues.apache.org/jira/browse/HADOOP-661) | JobConf for 
a job should be viewable from the web/ui |  Major | . | Owen O'Malley | Arun C 
Murthy |
 | [HADOOP-489](https://issues.apache.org/jira/browse/HADOOP-489) | Seperating 
user logs from system logs in map reduce |  Minor | . | Mahadev konar | Arun C 
Murthy |
+| [HADOOP-668](https://issues.apache.org/jira/browse/HADOOP-668) | improvement 
to DFS browsing WI |  Minor | . | Yoram Arnon | Hairong Kuang |
+| [HADOOP-538](https://issues.apache.org/jira/browse/HADOOP-538) | Implement a 
nio's 'direct buffer' based wrapper over zlib to improve performance of 
java.util.zip.{De\|In}flater as a 'custom codec' |  Major | io | Arun C Murthy 
| Arun C Murthy |
+| [HADOOP-721](https://issues.apache.org/jira/browse/HADOOP-721) | jobconf.jsp 
shouldn't find the jobconf.xsl via http |  Major | . | Owen O'Malley | Arun C 
Murthy |
+| [HADOOP-725](https://issues.apache.org/jira/browse/HADOOP-725) | 
chooseTargets method in FSNamesystem is very inefficient |  Major | . | Milind 
Bhandarkar | Milind Bhandarkar |
+| [HADOOP-677](https://issues.apache.org/jira/browse/HADOOP-677) | RPC should 
send a fixed header and version at the start of connection |  Major | ipc | 
Owen O'Malley | Owen O'Malley |
 | [HADOOP-76](https://issues.apache.org/jira/browse/HADOOP-76) | Implement 
speculative re-execution of reduces |  Minor | . | Doug Cutting | Sanjay Dahiya 
|
+| [HADOOP-689](https://issues.apache.org/jira/browse/HADOOP-689) | hadoop 
should provide a common way to wrap instances with different types into one 
type |  

[09/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/RELEASENOTES.2.1.0-beta.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/RELEASENOTES.2.1.0-beta.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/RELEASENOTES.2.1.0-beta.md
index 639afd9..5a17ea6 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/RELEASENOTES.2.1.0-beta.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/RELEASENOTES.2.1.0-beta.md
@@ -23,735 +23,735 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-9832](https://issues.apache.org/jira/browse/HADOOP-9832) | *Blocker* 
| **Add RPC header to client ping**
+* [MAPREDUCE-3787](https://issues.apache.org/jira/browse/MAPREDUCE-3787) | 
*Major* | **[Gridmix] Improve STRESS mode**
 
-Client ping will be sent as a RPC header with a reserved callId instead of as 
a sentinel RPC packet length.
+JobMonitor can now deploy multiple threads for faster job-status polling. Use 
'gridmix.job-monitor.thread-count' to set the number of threads. Stress mode 
now relies on the updates from the job monitor instead of polling for job 
status. Failures in job submission now get reported to the statistics module 
and ultimately reported to the user via summary.
 
 
 ---
 
-* [HADOOP-9820](https://issues.apache.org/jira/browse/HADOOP-9820) | *Blocker* 
| **RPCv9 wire protocol is insufficient to support multiplexing**
+* [MAPREDUCE-2722](https://issues.apache.org/jira/browse/MAPREDUCE-2722) | 
*Major* | **Gridmix simulated job's map's hdfsBytesRead counter is wrong when 
compressed input is used**
 
-**WARNING: No release note provided for this incompatible change.**
+Makes Gridmix use the uncompressed input data size while simulating map tasks 
in the case where compressed input data was used in original job.
 
 
 ---
 
-* [HADOOP-9698](https://issues.apache.org/jira/browse/HADOOP-9698) | *Blocker* 
| **RPCv9 client must honor server's SASL negotiate response**
+* [MAPREDUCE-3829](https://issues.apache.org/jira/browse/MAPREDUCE-3829) | 
*Major* | **[Gridmix] Gridmix should give better error message when input-data 
directory already exists and -generate option is given**
 
-The RPC client now waits for the Server's SASL negotiate response before 
instantiating its SASL client.
+Makes Gridmix emit out correct error message when the input data directory 
already exists and -generate option is used. Makes Gridmix exit with proper 
exit codes when Gridmix fails in args-processing, startup/setup.
 
 
 ---
 
-* [HADOOP-9683](https://issues.apache.org/jira/browse/HADOOP-9683) | *Blocker* 
| **Wrap IpcConnectionContext in RPC headers**
+* [MAPREDUCE-3953](https://issues.apache.org/jira/browse/MAPREDUCE-3953) | 
*Major* | **Gridmix throws NPE and does not simulate a job if the trace 
contains null taskStatus for a task**
 
-Connection context is now sent as a rpc header wrapped protobuf.
+Fixes NPE and makes Gridmix simulate succeeded-jobs-with-failed-tasks. All 
tasks of such simulated jobs(including the failed ones of original job) will 
succeed.
 
 
 ---
 
-* [HADOOP-9649](https://issues.apache.org/jira/browse/HADOOP-9649) | *Blocker* 
| **Promote YARN service life-cycle libraries into Hadoop Common**
+* [MAPREDUCE-3757](https://issues.apache.org/jira/browse/MAPREDUCE-3757) | 
*Major* | **Rumen Folder is not adjusting the shuffleFinished and sortFinished 
times of reduce task attempts**
 
-**WARNING: No release note provided for this incompatible change.**
+Fixed the sortFinishTime and shuffleFinishTime adjustments in Rumen Folder.
 
 
 ---
 
-* [HADOOP-9630](https://issues.apache.org/jira/browse/HADOOP-9630) | *Major* | 
**Remove IpcSerializationType**
+* [MAPREDUCE-4083](https://issues.apache.org/jira/browse/MAPREDUCE-4083) | 
*Major* | **GridMix emulated job tasks.resource-usage emulator for CPU usage 
throws NPE when Trace contains cumulativeCpuUsage value of 0 at attempt level**
 
-**WARNING: No release note provided for this incompatible change.**
+Fixes NPE in cpu emulation in Gridmix
 
 
 ---
 
-* [HADOOP-9425](https://issues.apache.org/jira/browse/HADOOP-9425) | *Major* | 
**Add error codes to rpc-response**
+* [MAPREDUCE-4149](https://issues.apache.org/jira/browse/MAPREDUCE-4149) | 
*Major* | **Rumen fails to parse certain counter strings**
 
-**WARNING: No release note provided for this incompatible change.**
+Fixes Rumen to parse counter strings containing the special characters "{" and 
"}".
 
 
 ---
 
-* [HADOOP-9421](https://issues.apache.org/jira/browse/HADOOP-9421) | *Blocker* 
| **Convert SASL to use ProtoBuf and provide negotiation capabilities**
+* [MAPREDUCE-4100](https://issues.apache.org/jira/browse/MAPREDUCE-4100) | 
*Minor* | **Sometimes gridmix emulates 

[10/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/CHANGES.2.1.0-beta.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/CHANGES.2.1.0-beta.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/CHANGES.2.1.0-beta.md
index 192229f..3880304 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/CHANGES.2.1.0-beta.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/CHANGES.2.1.0-beta.md
@@ -24,895 +24,888 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9832](https://issues.apache.org/jira/browse/HADOOP-9832) | Add RPC 
header to client ping |  Blocker | ipc | Daryn Sharp | Daryn Sharp |
-| [HADOOP-9820](https://issues.apache.org/jira/browse/HADOOP-9820) | RPCv9 
wire protocol is insufficient to support multiplexing |  Blocker | ipc, 
security | Daryn Sharp | Daryn Sharp |
-| [HADOOP-9698](https://issues.apache.org/jira/browse/HADOOP-9698) | RPCv9 
client must honor server's SASL negotiate response |  Blocker | ipc | Daryn 
Sharp | Daryn Sharp |
-| [HADOOP-9683](https://issues.apache.org/jira/browse/HADOOP-9683) | Wrap 
IpcConnectionContext in RPC headers |  Blocker | ipc | Luke Lu | Daryn Sharp |
-| [HADOOP-9649](https://issues.apache.org/jira/browse/HADOOP-9649) | Promote 
YARN service life-cycle libraries into Hadoop Common |  Blocker | . | Zhijie 
Shen | Zhijie Shen |
-| [HADOOP-9630](https://issues.apache.org/jira/browse/HADOOP-9630) | Remove 
IpcSerializationType |  Major | ipc | Luke Lu | Junping Du |
-| [HADOOP-9425](https://issues.apache.org/jira/browse/HADOOP-9425) | Add error 
codes to rpc-response |  Major | ipc | Sanjay Radia | Sanjay Radia |
-| [HADOOP-9421](https://issues.apache.org/jira/browse/HADOOP-9421) | Convert 
SASL to use ProtoBuf and provide negotiation capabilities |  Blocker | . | 
Sanjay Radia | Daryn Sharp |
-| [HADOOP-9380](https://issues.apache.org/jira/browse/HADOOP-9380) | Add 
totalLength to rpc response |  Major | ipc | Sanjay Radia | Sanjay Radia |
-| [HADOOP-9194](https://issues.apache.org/jira/browse/HADOOP-9194) | RPC 
Support for QoS |  Major | ipc | Luke Lu | Junping Du |
+| [HADOOP-8886](https://issues.apache.org/jira/browse/HADOOP-8886) | Remove 
KFS support |  Major | fs | Eli Collins | Eli Collins |
 | [HADOOP-9163](https://issues.apache.org/jira/browse/HADOOP-9163) | The rpc 
msg in  ProtobufRpcEngine.proto should be moved out to avoid an extra copy |  
Major | ipc | Sanjay Radia | Sanjay Radia |
 | [HADOOP-9151](https://issues.apache.org/jira/browse/HADOOP-9151) | Include 
RPC error info in RpcResponseHeader instead of sending it separately |  Major | 
ipc | Sanjay Radia | Sanjay Radia |
-| [HADOOP-8886](https://issues.apache.org/jira/browse/HADOOP-8886) | Remove 
KFS support |  Major | fs | Eli Collins | Eli Collins |
-| [HDFS-5083](https://issues.apache.org/jira/browse/HDFS-5083) | Update the 
HDFS compatibility version range |  Blocker | . | Kihwal Lee | Kihwal Lee |
-| [HDFS-4866](https://issues.apache.org/jira/browse/HDFS-4866) | Protocol 
buffer support cannot compile under C |  Blocker | namenode | Ralph Castain | 
Arpit Agarwal |
+| [YARN-396](https://issues.apache.org/jira/browse/YARN-396) | Rationalize 
AllocateResponse in RM scheduler API |  Major | . | Bikas Saha | Zhijie Shen |
+| [HADOOP-9380](https://issues.apache.org/jira/browse/HADOOP-9380) | Add 
totalLength to rpc response |  Major | ipc | Sanjay Radia | Sanjay Radia |
+| [YARN-439](https://issues.apache.org/jira/browse/YARN-439) | Flatten 
NodeHeartbeatResponse |  Major | . | Siddharth Seth | Xuan Gong |
+| [YARN-440](https://issues.apache.org/jira/browse/YARN-440) | Flatten 
RegisterNodeManagerResponse |  Major | . | Siddharth Seth | Xuan Gong |
+| [HADOOP-9194](https://issues.apache.org/jira/browse/HADOOP-9194) | RPC 
Support for QoS |  Major | ipc | Luke Lu | Junping Du |
+| [YARN-536](https://issues.apache.org/jira/browse/YARN-536) | Remove 
ContainerStatus, ContainerState from Container api interface as they will not 
be called by the container object |  Major | . | Xuan Gong | Xuan Gong |
+| [YARN-561](https://issues.apache.org/jira/browse/YARN-561) | Nodemanager 
should set some key information into the environment of every container that it 
launches. |  Major | . | Hitesh Shah | Xuan Gong |
+| [MAPREDUCE-4737](https://issues.apache.org/jira/browse/MAPREDUCE-4737) |  
Hadoop does not close output file / does not call Mapper.cleanup if exception 
in map |  Major | . | Daniel Dai | Arun C Murthy |
+| [HDFS-4305](https://issues.apache.org/jira/browse/HDFS-4305) | Add a 
configurable limit on number of blocks per file, and min block size |  Minor | 
namenode | Todd Lipcon | Andrew Wang |
+| 

[54/73] [abbrv] hadoop git commit: HDFS-10630. Federation State Store FS Implementation. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
HDFS-10630. Federation State Store FS Implementation. Contributed by Jason Kace 
and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ee748132
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ee748132
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ee748132

Branch: refs/heads/HDFS-10467
Commit: ee74813241fc0b0a0c19b3ada64426c7b2c29cf4
Parents: 4319a10
Author: Inigo Goiri 
Authored: Tue May 2 15:49:53 2017 -0700
Committer: Inigo Goiri 
Committed: Thu Aug 31 19:39:53 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  14 +
 .../federation/router/PeriodicService.java  | 198 
 .../StateStoreConnectionMonitorService.java |  67 +++
 .../federation/store/StateStoreService.java | 152 +-
 .../federation/store/StateStoreUtils.java   |  51 +-
 .../store/driver/StateStoreDriver.java  |  31 +-
 .../driver/StateStoreRecordOperations.java  |  17 +-
 .../store/driver/impl/StateStoreBaseImpl.java   |  31 +-
 .../driver/impl/StateStoreFileBaseImpl.java | 429 
 .../store/driver/impl/StateStoreFileImpl.java   | 161 +++
 .../driver/impl/StateStoreFileSystemImpl.java   | 178 +++
 .../driver/impl/StateStoreSerializableImpl.java |  77 +++
 .../federation/store/records/BaseRecord.java|  20 +-
 .../server/federation/store/records/Query.java  |  66 +++
 .../src/main/resources/hdfs-default.xml |  16 +
 .../store/FederationStateStoreTestUtils.java| 232 +
 .../store/driver/TestStateStoreDriverBase.java  | 483 +++
 .../store/driver/TestStateStoreFile.java|  64 +++
 .../store/driver/TestStateStoreFileSystem.java  |  88 
 19 files changed, 2329 insertions(+), 46 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee748132/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 7623839..8cdd450 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.hdfs;
 
+import java.util.concurrent.TimeUnit;
+
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -25,6 +27,8 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant;
 import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker;
+import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
+import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;
 import org.apache.hadoop.http.HttpConfig;
 
@@ -1134,6 +1138,16 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   FEDERATION_STORE_SERIALIZER_CLASS_DEFAULT =
   StateStoreSerializerPBImpl.class;
 
+  public static final String FEDERATION_STORE_DRIVER_CLASS =
+  FEDERATION_STORE_PREFIX + "driver.class";
+  public static final Class<? extends StateStoreDriver>
+  FEDERATION_STORE_DRIVER_CLASS_DEFAULT = StateStoreFileImpl.class;
+
+  public static final String FEDERATION_STORE_CONNECTION_TEST_MS =
+  FEDERATION_STORE_PREFIX + "connection.test";
+  public static final long FEDERATION_STORE_CONNECTION_TEST_MS_DEFAULT =
+  TimeUnit.MINUTES.toMillis(1);
+
   // dfs.client.retry confs are moved to HdfsClientConfigKeys.Retry 
   @Deprecated
   public static final String  DFS_CLIENT_RETRY_POLICY_ENABLED_KEY
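
For illustration, a minimal sketch of how a deployment might override the two State Store keys added above, assuming they are read back through Configuration#getClass and Configuration#getLong; the class name FederationStoreConfigExample and the choice of StateStoreFileSystemImpl as the driver are example values, not part of this commit.

```java
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileSystemImpl;

/** Illustrative sketch only: overriding the State Store keys introduced above. */
public class FederationStoreConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Select the HDFS-backed driver instead of the local-file default
    // (StateStoreFileImpl) declared in DFSConfigKeys above.
    conf.setClass(DFSConfigKeys.FEDERATION_STORE_DRIVER_CLASS,
        StateStoreFileSystemImpl.class, StateStoreDriver.class);
    // Probe the State Store connection every 30 seconds instead of the
    // one-minute default.
    conf.setLong(DFSConfigKeys.FEDERATION_STORE_CONNECTION_TEST_MS,
        TimeUnit.SECONDS.toMillis(30));
    System.out.println(conf.get(DFSConfigKeys.FEDERATION_STORE_DRIVER_CLASS));
  }
}
```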

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee748132/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PeriodicService.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PeriodicService.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PeriodicService.java
new file mode 100644
index 000..5e1
--- /dev/null
+++ 

[35/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.205.0/CHANGES.0.20.205.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.205.0/CHANGES.0.20.205.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.205.0/CHANGES.0.20.205.0.md
index 1e26cb4..a130003 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.205.0/CHANGES.0.20.205.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.205.0/CHANGES.0.20.205.0.md
@@ -24,26 +24,20 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HDFS-2202](https://issues.apache.org/jira/browse/HDFS-2202) | Changes to 
balancer bandwidth should not require datanode restart. |  Major | balancer & 
mover, datanode | Eric Payne | Eric Payne |
-| [HDFS-1554](https://issues.apache.org/jira/browse/HDFS-1554) | Append 0.20: 
New semantics for recoverLease |  Major | . | Hairong Kuang | Hairong Kuang |
 | [HDFS-630](https://issues.apache.org/jira/browse/HDFS-630) | In 
DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
datanodes when locating the next block. |  Major | hdfs-client, namenode | 
Ruyue Ma | Cosmin Lehene |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HDFS-1554](https://issues.apache.org/jira/browse/HDFS-1554) | Append 0.20: 
New semantics for recoverLease |  Major | . | Hairong Kuang | Hairong Kuang |
+| [HDFS-2202](https://issues.apache.org/jira/browse/HDFS-2202) | Changes to 
balancer bandwidth should not require datanode restart. |  Major | balancer & 
mover, datanode | Eric Payne | Eric Payne |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
+| [HDFS-200](https://issues.apache.org/jira/browse/HDFS-200) | In HDFS, sync() 
not yet guarantees data available to the new readers |  Blocker | . | Tsz Wo 
Nicholas Sze | dhruba borthakur |
+| [HDFS-1520](https://issues.apache.org/jira/browse/HDFS-1520) | HDFS 20 
append: Lightweight NameNode operation to trigger lease recovery |  Major | 
namenode | Hairong Kuang | Hairong Kuang |
 | [HADOOP-7594](https://issues.apache.org/jira/browse/HADOOP-7594) | Support 
HTTP REST in HttpServer |  Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas 
Sze |
 | [HADOOP-7119](https://issues.apache.org/jira/browse/HADOOP-7119) | add 
Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles 
|  Major | security | Alejandro Abdelnur | Alejandro Abdelnur |
 | [HADOOP-6889](https://issues.apache.org/jira/browse/HADOOP-6889) | Make RPC 
to have an option to timeout |  Major | ipc | Hairong Kuang | John George |
-| [HDFS-1520](https://issues.apache.org/jira/browse/HDFS-1520) | HDFS 20 
append: Lightweight NameNode operation to trigger lease recovery |  Major | 
namenode | Hairong Kuang | Hairong Kuang |
-| [HDFS-200](https://issues.apache.org/jira/browse/HDFS-200) | In HDFS, sync() 
not yet guarantees data available to the new readers |  Blocker | . | Tsz Wo 
Nicholas Sze | dhruba borthakur |
 | [MAPREDUCE-2777](https://issues.apache.org/jira/browse/MAPREDUCE-2777) | 
Backport MAPREDUCE-220 to Hadoop 20 security branch |  Major | . | Jonathan 
Eagles | Amar Kamat |
 
 
@@ -51,140 +45,140 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-7720](https://issues.apache.org/jira/browse/HADOOP-7720) | improve 
the hadoop-setup-conf.sh to read in the hbase user and setup the configs |  
Major | conf | Arpit Gupta | Arpit Gupta |
-| [HADOOP-7707](https://issues.apache.org/jira/browse/HADOOP-7707) | improve 
config generator to allow users to specify proxy user, turn append on or off, 
turn webhdfs on or off |  Major | conf | Arpit Gupta | Arpit Gupta |
-| [HADOOP-7655](https://issues.apache.org/jira/browse/HADOOP-7655) | provide a 
small validation script that smoke tests the installed cluster |  Major | . | 
Arpit Gupta | Arpit Gupta |
-| [HADOOP-7472](https://issues.apache.org/jira/browse/HADOOP-7472) | RPC 
client should deal with the IP address changes |  Minor | ipc | Kihwal Lee | 
Kihwal Lee |
-| [HADOOP-7432](https://issues.apache.org/jira/browse/HADOOP-7432) | Back-port 
HADOOP-7110 to 0.20-security |  Major | . | Sherry Chen | Sherry Chen |
-| [HADOOP-7343](https://issues.apache.org/jira/browse/HADOOP-7343) | backport 
HADOOP-7008 and HADOOP-7042 to branch-0.20-security |  Minor | test | Thomas 
Graves | Thomas Graves |
-| [HADOOP-7314](https://issues.apache.org/jira/browse/HADOOP-7314) | Add 
support for throwing UnknownHostException when a host doesn't resolve |  Major 
| . | Jeffrey Naisbitt | Jeffrey Naisbitt |
-| 

[67/73] [abbrv] hadoop git commit: HDFS-10880. Federation Mount Table State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2f761d3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryRequestPBImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryRequestPBImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryRequestPBImpl.java
new file mode 100644
index 000..7f7c998
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryRequestPBImpl.java
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
+
+import java.io.IOException;
+
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProtoOrBuilder;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
+
+import com.google.protobuf.Message;
+
+/**
+ * Protobuf implementation of the state store API object
+ * RemoveMountTableEntryRequest.
+ */
+public class RemoveMountTableEntryRequestPBImpl
+extends RemoveMountTableEntryRequest implements PBRecord {
+
+  private FederationProtocolPBTranslator translator =
+  new FederationProtocolPBTranslator(
+  RemoveMountTableEntryRequestProto.class);
+
+  public RemoveMountTableEntryRequestPBImpl() {
+  }
+
+  public RemoveMountTableEntryRequestPBImpl(
+  RemoveMountTableEntryRequestProto proto) {
+this.setProto(proto);
+  }
+
+  @Override
+  public RemoveMountTableEntryRequestProto getProto() {
+return this.translator.build();
+  }
+
+  @Override
+  public void setProto(Message proto) {
+this.translator.setProto(proto);
+  }
+
+  @Override
+  public void readInstance(String base64String) throws IOException {
+this.translator.readInstance(base64String);
+  }
+
+  @Override
+  public String getSrcPath() {
+return this.translator.getProtoOrBuilder().getSrcPath();
+  }
+
+  @Override
+  public void setSrcPath(String path) {
+this.translator.getBuilder().setSrcPath(path);
+  }
+}
\ No newline at end of file
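
A rough usage sketch of the record defined above; the mount path "/data" is an arbitrary example and the surrounding state store plumbing is omitted.

```java
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryRequestPBImpl;

/** Illustrative sketch only: populating the PB-backed request record. */
public class RemoveMountTableEntryRequestExample {
  public static void main(String[] args) {
    RemoveMountTableEntryRequestPBImpl request =
        new RemoveMountTableEntryRequestPBImpl();
    // setSrcPath() writes through the translator into the protobuf builder.
    request.setSrcPath("/data");
    // getProto() builds the underlying protobuf message that a state store
    // driver would serialize.
    RemoveMountTableEntryRequestProto proto = request.getProto();
    System.out.println(proto.getSrcPath());
  }
}
```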

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2f761d3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryResponsePBImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryResponsePBImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryResponsePBImpl.java
new file mode 100644
index 000..0c943ac
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryResponsePBImpl.java
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * 

[14/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
index 1d032c7..54f1663 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
@@ -24,697 +24,691 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8794](https://issues.apache.org/jira/browse/HADOOP-8794) | Modifiy 
bin/hadoop to point to HADOOP\_YARN\_HOME |  Major | . | Vinod Kumar 
Vavilapalli | Vinod Kumar Vavilapalli |
-| [HADOOP-8710](https://issues.apache.org/jira/browse/HADOOP-8710) | Remove 
ability for users to easily run the trash emptier |  Major | fs | Eli Collins | 
Eli Collins |
-| [HADOOP-8689](https://issues.apache.org/jira/browse/HADOOP-8689) | Make 
trash a server side configuration option |  Major | fs | Eli Collins | Eli 
Collins |
-| [HADOOP-8551](https://issues.apache.org/jira/browse/HADOOP-8551) | fs -mkdir 
creates parent directories without the -p option |  Major | fs | Robert Joseph 
Evans | John George |
-| [HADOOP-8458](https://issues.apache.org/jira/browse/HADOOP-8458) | Add 
management hook to AuthenticationHandler to enable delegation token operations 
support |  Major | security | Alejandro Abdelnur | Alejandro Abdelnur |
-| [HADOOP-8388](https://issues.apache.org/jira/browse/HADOOP-8388) | Remove 
unused BlockLocation serialization |  Minor | . | Colin Patrick McCabe | Colin 
Patrick McCabe |
-| [HADOOP-8368](https://issues.apache.org/jira/browse/HADOOP-8368) | Use CMake 
rather than autotools to build native code |  Minor | . | Colin Patrick McCabe 
| Colin Patrick McCabe |
-| [HDFS-3675](https://issues.apache.org/jira/browse/HDFS-3675) | libhdfs: 
follow documented return codes |  Minor | libhdfs | Colin Patrick McCabe | 
Colin Patrick McCabe |
+| [MAPREDUCE-4072](https://issues.apache.org/jira/browse/MAPREDUCE-4072) | 
User set java.library.path seems to overwrite default creating problems native 
lib loading |  Major | mrv2 | Anupam Seth | Anupam Seth |
+| [MAPREDUCE-3812](https://issues.apache.org/jira/browse/MAPREDUCE-3812) | 
Lower default allocation sizes, fix allocation configurations and document them 
|  Major | mrv2, performance | Vinod Kumar Vavilapalli | Harsh J |
+| [HDFS-3318](https://issues.apache.org/jira/browse/HDFS-3318) | Hftp hangs on 
transfers \>2GB |  Blocker | hdfs-client | Daryn Sharp | Daryn Sharp |
+| [HADOOP-8388](https://issues.apache.org/jira/browse/HADOOP-8388) | Remove 
unused BlockLocation serialization |  Minor | . | Colin P. McCabe | Colin P. 
McCabe |
+| [HADOOP-8368](https://issues.apache.org/jira/browse/HADOOP-8368) | Use CMake 
rather than autotools to build native code |  Minor | . | Colin P. McCabe | 
Colin P. McCabe |
 | [HDFS-3522](https://issues.apache.org/jira/browse/HDFS-3522) | If NN is in 
safemode, it should throw SafeModeException when getBlockLocations has zero 
locations |  Major | namenode | Brandon Li | Brandon Li |
+| [HADOOP-8458](https://issues.apache.org/jira/browse/HADOOP-8458) | Add 
management hook to AuthenticationHandler to enable delegation token operations 
support |  Major | security | Alejandro Abdelnur | Alejandro Abdelnur |
+| [MAPREDUCE-4311](https://issues.apache.org/jira/browse/MAPREDUCE-4311) | 
Capacity scheduler.xml does not accept decimal values for capacity and 
maximum-capacity settings |  Major | capacity-sched, mrv2 | Thomas Graves | 
Karthik Kambatla |
 | [HDFS-3446](https://issues.apache.org/jira/browse/HDFS-3446) | 
HostsFileReader silently ignores bad includes/excludes |  Major | namenode | 
Matthew Jacobs | Matthew Jacobs |
-| [HDFS-3318](https://issues.apache.org/jira/browse/HDFS-3318) | Hftp hangs on 
transfers \>2GB |  Blocker | hdfs-client | Daryn Sharp | Daryn Sharp |
-| [HDFS-2727](https://issues.apache.org/jira/browse/HDFS-2727) | libhdfs 
should get the default block size from the server |  Minor | libhdfs | Sho 
Shimauchi | Colin Patrick McCabe |
-| [HDFS-2686](https://issues.apache.org/jira/browse/HDFS-2686) | Remove 
DistributedUpgrade related code |  Major | datanode, namenode | Todd Lipcon | 
Suresh Srinivas |
+| [HDFS-3675](https://issues.apache.org/jira/browse/HDFS-3675) | libhdfs: 
follow documented return codes |  Minor | libhdfs | Colin P. McCabe | Colin P. 
McCabe |
 | [HDFS-2617](https://issues.apache.org/jira/browse/HDFS-2617) | Replaced 
Kerberized SSL for image transfer and fsck with SPNEGO-based solution |  Major 
| security | Jakob Homan | Jakob Homan |
+| 

[30/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
index 41ffd77..1678634 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
@@ -23,609 +23,609 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-7302](https://issues.apache.org/jira/browse/HADOOP-7302) | *Major* | 
**webinterface.private.actions should not be in common**
+* [MAPREDUCE-478](https://issues.apache.org/jira/browse/MAPREDUCE-478) | 
*Minor* | **separate jvm param for mapper and reducer**
 
-Option webinterface.private.actions has been renamed to 
mapreduce.jobtracker.webinterface.trusted and should be specified in 
mapred-site.xml instead of core-site.xml
+Allow map and reduce jvm parameters, environment variables and ulimit to be 
set separately.
+
+Configuration changes:
+  add mapred.map.child.java.opts
+  add mapred.reduce.child.java.opts
+  add mapred.map.child.env
+  add mapred.reduce.child.ulimit
+  add mapred.map.child.env
+  add mapred.reduce.child.ulimit
+  deprecated mapred.child.java.opts
+  deprecated mapred.child.env
+  deprecated mapred.child.ulimit
 
 
 ---
 
-* [HADOOP-7229](https://issues.apache.org/jira/browse/HADOOP-7229) | *Major* | 
**Absolute path to kinit in auto-renewal thread**
+* [HADOOP-6344](https://issues.apache.org/jira/browse/HADOOP-6344) | *Major* | 
**rm and rmr fail to correctly move the user's files to the trash prior to 
deleting when they are over quota.**
 
-When Hadoop's Kerberos integration is enabled, it is now required that either 
{{kinit}} be on the path for user accounts running the Hadoop client, or that 
the {{hadoop.kerberos.kinit.command}} configuration option be manually set to 
the absolute path to {{kinit}}.
+Trash feature notifies user of over-quota condition rather than silently 
deleting files/directories; deletion can be compelled with "rm -skiptrash".
 
 
 ---
 
-* [HADOOP-7193](https://issues.apache.org/jira/browse/HADOOP-7193) | *Minor* | 
**Help message is wrong for touchz command.**
+* [HADOOP-6599](https://issues.apache.org/jira/browse/HADOOP-6599) | *Major* | 
**Split RPC metrics into summary and detailed metrics**
 
-Updated the help for the touchz command.
+Split existing RpcMetrics into RpcMetrics and RpcDetailedMetrics. The new 
RpcDetailedMetrics has per method usage details and is available under context 
name "rpc" and record name "detailed-metrics"
 
 
 ---
 
-* [HADOOP-7192](https://issues.apache.org/jira/browse/HADOOP-7192) | *Trivial* 
| **fs -stat docs aren't updated to reflect the format features**
+* [MAPREDUCE-927](https://issues.apache.org/jira/browse/MAPREDUCE-927) | 
*Major* | **Cleanup of task-logs should happen in TaskTracker instead of the 
Child**
 
-Updated the web documentation to reflect the formatting abilities of 'fs 
-stat'.
+Moved Task log cleanup into a separate thread in TaskTracker.
+Added configuration "mapreduce.job.userlog.retain.hours" to specify the 
time(in hours) for which the user-logs are to be retained after the job 
completion.
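
For illustration, assuming the key is read as an integer number of hours from the job configuration, retention could be raised like this (48 is an arbitrary value):

```java
import org.apache.hadoop.conf.Configuration;

/** Illustrative sketch only: keeping task user-logs for two days. */
public class UserLogRetentionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hours to keep user-logs after job completion, per the note above.
    conf.setInt("mapreduce.job.userlog.retain.hours", 48);
    System.out.println(conf.getInt("mapreduce.job.userlog.retain.hours", 24));
  }
}
```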
 
 
 ---
 
-* [HADOOP-7156](https://issues.apache.org/jira/browse/HADOOP-7156) | 
*Critical* | **getpwuid\_r is not thread-safe on RHEL6**
+* [HADOOP-6730](https://issues.apache.org/jira/browse/HADOOP-6730) | *Major* | 
**Bug in FileContext#copy and provide base class for FileContext tests**
 
-Adds a new configuration hadoop.work.around.non.threadsafe.getpwuid which can 
be used to enable a mutex around this call to workaround thread-unsafe 
implementations of getpwuid\_r. Users should consult 
http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations for a list of 
such systems.
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [HADOOP-7137](https://issues.apache.org/jira/browse/HADOOP-7137) | *Major* | 
**Remove hod contrib**
+* [MAPREDUCE-1707](https://issues.apache.org/jira/browse/MAPREDUCE-1707) | 
*Major* | **TaskRunner can get NPE in getting ugi from TaskTracker**
 
-Removed contrib related build targets.
+Fixed a bug that causes TaskRunner to get NPE in getting ugi from TaskTracker 
and subsequently crashes it resulting in a failing task after task-timeout 
period.
 
 
 ---
 
-* [HADOOP-7134](https://issues.apache.org/jira/browse/HADOOP-7134) | *Major* | 
**configure files that are generated as part of the released tarball need to 
have executable bit set**
+* [MAPREDUCE-1680](https://issues.apache.org/jira/browse/MAPREDUCE-1680) | 
*Major* | **Add a metrics to track the number 

[48/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.15.0/CHANGES.0.15.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.15.0/CHANGES.0.15.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.15.0/CHANGES.0.15.0.md
index cb85716..f511ccf 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.15.0/CHANGES.0.15.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.15.0/CHANGES.0.15.0.md
@@ -20,31 +20,21 @@
 
 ## Release 0.15.0 - 2007-10-19
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-1963](https://issues.apache.org/jira/browse/HADOOP-1963) | Code 
contribution of Kosmos Filesystem implementation of Hadoop Filesystem interface 
|  Major | fs | Sriram Rao | Sriram Rao |
-| [HADOOP-1914](https://issues.apache.org/jira/browse/HADOOP-1914) | HDFS 
should have a NamenodeProtocol to allow  secondary namenodes and rebalancing 
processes to communicate with a primary namenode |  Major | . | Hairong Kuang | 
Hairong Kuang |
-| [HADOOP-1894](https://issues.apache.org/jira/browse/HADOOP-1894) | Add fancy 
graphs for mapred task statuses |  Major | . | Enis Soztutar | Enis Soztutar |
+| [HADOOP-1727](https://issues.apache.org/jira/browse/HADOOP-1727) | Make 
...hbase.io.MapWritable more generic so that it can be included in ...hadoop.io 
|  Minor | io | Jim Kellerman | Jim Kellerman |
+| [HADOOP-1351](https://issues.apache.org/jira/browse/HADOOP-1351) | Want to 
kill a particular task or attempt |  Major | . | Owen O'Malley | Enis Soztutar |
 | [HADOOP-1880](https://issues.apache.org/jira/browse/HADOOP-1880) | SleepJob 
|  Major | . | Enis Soztutar | Enis Soztutar |
+| [HADOOP-1809](https://issues.apache.org/jira/browse/HADOOP-1809) | Add link 
to irc channel #hadoop |  Major | . | Enis Soztutar | Enis Soztutar |
+| [HADOOP-1894](https://issues.apache.org/jira/browse/HADOOP-1894) | Add fancy 
graphs for mapred task statuses |  Major | . | Enis Soztutar | Enis Soztutar |
+| [HADOOP-1914](https://issues.apache.org/jira/browse/HADOOP-1914) | HDFS 
should have a NamenodeProtocol to allow  secondary namenodes and rebalancing 
processes to communicate with a primary namenode |  Major | . | Hairong Kuang | 
Hairong Kuang |
 | [HADOOP-1851](https://issues.apache.org/jira/browse/HADOOP-1851) | Map 
output compression codec cannot be set independently of job output compression 
codec |  Major | . | Riccardo Boscolo | Arun C Murthy |
+| [HADOOP-1963](https://issues.apache.org/jira/browse/HADOOP-1963) | Code 
contribution of Kosmos Filesystem implementation of Hadoop Filesystem interface 
|  Major | fs | Sriram Rao | Sriram Rao |
 | [HADOOP-1822](https://issues.apache.org/jira/browse/HADOOP-1822) | Allow 
SOCKS proxy configuration to remotely access the DFS and submit Jobs |  Minor | 
ipc | Christophe Taton | Christophe Taton |
-| [HADOOP-1809](https://issues.apache.org/jira/browse/HADOOP-1809) | Add link 
to irc channel #hadoop |  Major | . | Enis Soztutar | Enis Soztutar |
-| [HADOOP-1727](https://issues.apache.org/jira/browse/HADOOP-1727) | Make 
...hbase.io.MapWritable more generic so that it can be included in ...hadoop.io 
|  Minor | io | Jim Kellerman | Jim Kellerman |
-| [HADOOP-1351](https://issues.apache.org/jira/browse/HADOOP-1351) | Want to 
kill a particular task or attempt |  Major | . | Owen O'Malley | Enis Soztutar |
 | [HADOOP-789](https://issues.apache.org/jira/browse/HADOOP-789) | DFS shell 
should return a list of nodes for a file saying that where the blocks for these 
files are located. |  Minor | . | Mahadev konar | Mahadev konar |
 
 
@@ -52,155 +42,143 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-2046](https://issues.apache.org/jira/browse/HADOOP-2046) | 
Documentation: improve mapred javadocs |  Blocker | documentation | Arun C 
Murthy | Arun C Murthy |
-| [HADOOP-1971](https://issues.apache.org/jira/browse/HADOOP-1971) | 
Constructing a JobConf without a class leads to a very misleading error 
message. |  Minor | . | Ted Dunning | Enis Soztutar |
-| [HADOOP-1968](https://issues.apache.org/jira/browse/HADOOP-1968) | Wildcard 
input syntax (glob) should support {} |  Major | fs | eric baldeschwieler | 
Hairong Kuang |
-| [HADOOP-1942](https://issues.apache.org/jira/browse/HADOOP-1942) | Increase 
the concurrency of transaction logging to edits log |  Blocker | . | dhruba 
borthakur | dhruba borthakur |
-| 

[59/73] [abbrv] hadoop git commit: HDFS-10687. Federation Membership State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad7865e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java
new file mode 100644
index 000..2d74505
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java
@@ -0,0 +1,284 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.resolver;
+
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.NAMENODES;
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.NAMESERVICES;
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.ROUTERS;
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.createNamenodeReport;
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.verifyException;
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.clearRecords;
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.getStateStoreConfiguration;
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.newStateStore;
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.waitStateStore;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import 
org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException;
+import org.apache.hadoop.hdfs.server.federation.store.records.MembershipState;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Test the basic {@link ActiveNamenodeResolver} functionality.
+ */
+public class TestNamenodeResolver {
+
+  private static StateStoreService stateStore;
+  private static ActiveNamenodeResolver namenodeResolver;
+
+  @BeforeClass
+  public static void create() throws Exception {
+
+Configuration conf = getStateStoreConfiguration();
+
+// Reduce expirations to 5 seconds
+conf.setLong(
+DFSConfigKeys.FEDERATION_STORE_MEMBERSHIP_EXPIRATION_MS,
+TimeUnit.SECONDS.toMillis(5));
+
+stateStore = newStateStore(conf);
+assertNotNull(stateStore);
+
+namenodeResolver = new MembershipNamenodeResolver(conf, stateStore);
+namenodeResolver.setRouterId(ROUTERS[0]);
+  }
+
+  @AfterClass
+  public static void destroy() throws Exception {
+stateStore.stop();
+stateStore.close();
+  }
+
+  @Before
+  public void setup() throws IOException, InterruptedException {
+// Wait for state store to connect
+stateStore.loadDriver();
+waitStateStore(stateStore, 1);
+
+// Clear NN registrations
+boolean cleared = clearRecords(stateStore, MembershipState.class);
+assertTrue(cleared);
+  }
+
+  @Test
+  public void testStateStoreDisconnected() throws Exception {
+
+// Add an entry to the store
+NamenodeStatusReport report = createNamenodeReport(
+NAMESERVICES[0], NAMENODES[0], HAServiceState.ACTIVE);
+assertTrue(namenodeResolver.registerNamenode(report));
+
+// Close the data store driver
+stateStore.closeDriver();
+assertFalse(stateStore.isDriverReady());
+
+// Flush the caches
+stateStore.refreshCaches(true);
+
+// Verify commands 

[02/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/CHANGES.2.5.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/CHANGES.2.5.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/CHANGES.2.5.0.md
index f62f96b..b27686a 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/CHANGES.2.5.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/CHANGES.2.5.0.md
@@ -24,534 +24,528 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HDFS-6168](https://issues.apache.org/jira/browse/HDFS-6168) | Remove 
deprecated methods in DistributedFileSystem |  Major | hdfs-client | Tsz Wo 
Nicholas Sze | Tsz Wo Nicholas Sze |
 | [HDFS-6164](https://issues.apache.org/jira/browse/HDFS-6164) | Remove lsr in 
OfflineImageViewer |  Major | tools | Haohui Mai | Haohui Mai |
-| [HDFS-6153](https://issues.apache.org/jira/browse/HDFS-6153) | Document 
"fileId" and "childrenNum" fields in the FileStatus Json schema |  Minor | 
documentation, webhdfs | Akira AJISAKA | Akira AJISAKA |
-| [MAPREDUCE-5777](https://issues.apache.org/jira/browse/MAPREDUCE-5777) | 
Support utf-8 text with BOM (byte order marker) |  Major | . | bc Wong | zhihai 
xu |
+| [HDFS-6168](https://issues.apache.org/jira/browse/HDFS-6168) | Remove 
deprecated methods in DistributedFileSystem |  Major | hdfs-client | Tsz Wo 
Nicholas Sze | Tsz Wo Nicholas Sze |
+| [HDFS-6153](https://issues.apache.org/jira/browse/HDFS-6153) | Document 
"fileId" and "childrenNum" fields in the FileStatus Json schema |  Minor | 
documentation, webhdfs | Akira Ajisaka | Akira Ajisaka |
 | [YARN-2107](https://issues.apache.org/jira/browse/YARN-2107) | Refactor 
timeline classes into server.timeline package |  Major | . | Vinod Kumar 
Vavilapalli | Vinod Kumar Vavilapalli |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [MAPREDUCE-5777](https://issues.apache.org/jira/browse/MAPREDUCE-5777) | 
Support utf-8 text with BOM (byte order marker) |  Major | . | bc Wong | zhihai 
xu |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10514](https://issues.apache.org/jira/browse/HADOOP-10514) | Common 
side changes to support  HDFS extended attributes (HDFS-2006) |  Major | fs | 
Uma Maheswara Rao G | Yi Liu |
 | [HADOOP-10498](https://issues.apache.org/jira/browse/HADOOP-10498) | Add 
support for proxy server |  Major | util | Daryn Sharp | Daryn Sharp |
-| [HADOOP-9704](https://issues.apache.org/jira/browse/HADOOP-9704) | Write 
metrics sink plugin for Hadoop/Graphite |  Major | . | Chu Tong |  |
-| [HDFS-6435](https://issues.apache.org/jira/browse/HDFS-6435) | Add support 
for specifying a static uid/gid mapping for the NFS gateway |  Major | nfs | 
Aaron T. Myers | Aaron T. Myers |
-| [HDFS-6406](https://issues.apache.org/jira/browse/HDFS-6406) | Add 
capability for NFS gateway to reject connections from unprivileged ports |  
Major | nfs | Aaron T. Myers | Aaron T. Myers |
 | [HDFS-6281](https://issues.apache.org/jira/browse/HDFS-6281) | Provide 
option to use the NFS Gateway without having to use the Hadoop portmapper |  
Major | nfs | Aaron T. Myers | Aaron T. Myers |
 | [YARN-1864](https://issues.apache.org/jira/browse/YARN-1864) | Fair 
Scheduler Dynamic Hierarchical User Queues |  Major | scheduler | Ashwin 
Shankar | Ashwin Shankar |
+| [HDFS-6406](https://issues.apache.org/jira/browse/HDFS-6406) | Add 
capability for NFS gateway to reject connections from unprivileged ports |  
Major | nfs | Aaron T. Myers | Aaron T. Myers |
+| [HDFS-6435](https://issues.apache.org/jira/browse/HDFS-6435) | Add support 
for specifying a static uid/gid mapping for the NFS gateway |  Major | nfs | 
Aaron T. Myers | Aaron T. Myers |
+| [HADOOP-9704](https://issues.apache.org/jira/browse/HADOOP-9704) | Write 
metrics sink plugin for Hadoop/Graphite |  Major | . | Chu Tong |  |
+| [HADOOP-10514](https://issues.apache.org/jira/browse/HADOOP-10514) | Common 
side changes to support  HDFS extended attributes (HDFS-2006) |  Major | fs | 
Uma Maheswara Rao G | Yi Liu |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10896](https://issues.apache.org/jira/browse/HADOOP-10896) | Update 
compatibility doc to capture visibility of un-annotated classes/ methods |  
Blocker | documentation | Karthik Kambatla | Karthik Kambatla |
-| [HADOOP-10782](https://issues.apache.org/jira/browse/HADOOP-10782) | Typo in 
DataChecksum classs |  Trivial | . | Jingguo Yao | Jingguo Yao |
-| 

[47/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.0/CHANGES.0.16.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.0/CHANGES.0.16.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.0/CHANGES.0.16.0.md
index 4be7a96..1746fd5 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.0/CHANGES.0.16.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.0/CHANGES.0.16.0.md
@@ -20,228 +20,206 @@
 
 ## Release 0.16.0 - 2008-02-07
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-2603](https://issues.apache.org/jira/browse/HADOOP-2603) | 
SequenceFileAsBinaryInputFormat |  Major | . | Chris Douglas | Chris Douglas |
-| [HADOOP-2567](https://issues.apache.org/jira/browse/HADOOP-2567) | add 
FileSystem#getHomeDirectory() method |  Major | fs | Doug Cutting | Doug 
Cutting |
-| [HADOOP-2543](https://issues.apache.org/jira/browse/HADOOP-2543) | 
No-permission-checking mode for smooth transition to 0.16's permissions 
features. |  Major | . | Sanjay Radia | Hairong Kuang |
-| [HADOOP-2529](https://issues.apache.org/jira/browse/HADOOP-2529) | DFS User 
Guide |  Major | documentation | Raghu Angadi | Raghu Angadi |
-| [HADOOP-2514](https://issues.apache.org/jira/browse/HADOOP-2514) | Trash and 
permissions don't mix |  Major | . | Robert Chansler | Doug Cutting |
-| [HADOOP-2487](https://issues.apache.org/jira/browse/HADOOP-2487) | Provide 
an option to get job status for all jobs run by or submitted to a job tracker | 
 Major | . | Hemanth Yamijala | Amareshwari Sriramadasu |
-| [HADOOP-2447](https://issues.apache.org/jira/browse/HADOOP-2447) | HDFS 
should be capable of limiting the total number of inodes in the system |  Major 
| . | Sameer Paranjpye | dhruba borthakur |
-| [HADOOP-2398](https://issues.apache.org/jira/browse/HADOOP-2398) | 
Additional Instrumentation for NameNode, RPC Layer and JMX support |  Major | . 
| Sanjay Radia | Sanjay Radia |
-| [HADOOP-2381](https://issues.apache.org/jira/browse/HADOOP-2381) | Support 
permission information in FileStatus |  Major | fs | Tsz Wo Nicholas Sze | 
Raghu Angadi |
-| [HADOOP-2367](https://issues.apache.org/jira/browse/HADOOP-2367) | Get 
representative hprof information from tasks |  Major | . | Owen O'Malley | Owen 
O'Malley |
-| [HADOOP-2336](https://issues.apache.org/jira/browse/HADOOP-2336) | Shell 
commands to access and modify file permissions |  Major | fs | Raghu Angadi | 
Raghu Angadi |
+| [HADOOP-2045](https://issues.apache.org/jira/browse/HADOOP-2045) | credits 
page should have more information |  Major | documentation | Doug Cutting | 
Doug Cutting |
+| [HADOOP-1604](https://issues.apache.org/jira/browse/HADOOP-1604) | admins 
should be able to finalize namenode upgrades without running the cluster |  
Critical | . | Owen O'Malley | Konstantin Shvachko |
+| [HADOOP-1912](https://issues.apache.org/jira/browse/HADOOP-1912) | Datanode 
should support block replacement |  Major | . | Hairong Kuang | Hairong Kuang |
 | [HADOOP-2288](https://issues.apache.org/jira/browse/HADOOP-2288) | Change 
FileSystem API to support access control. |  Major | fs | Tsz Wo Nicholas Sze | 
Tsz Wo Nicholas Sze |
 | [HADOOP-2229](https://issues.apache.org/jira/browse/HADOOP-2229) | Provide a 
simple login implementation |  Major | fs | Tsz Wo Nicholas Sze | Hairong Kuang 
|
 | [HADOOP-2184](https://issues.apache.org/jira/browse/HADOOP-2184) | RPC 
Support for user permissions and authentication. |  Major | ipc | Tsz Wo 
Nicholas Sze | Raghu Angadi |
+| [HADOOP-1652](https://issues.apache.org/jira/browse/HADOOP-1652) | Rebalance 
data blocks when new data nodes added or data nodes become full |  Major | . | 
Hairong Kuang | Hairong Kuang |
 | [HADOOP-2145](https://issues.apache.org/jira/browse/HADOOP-2145) | need 
'doc' target that runs forrest |  Major | build | Doug Cutting | Doug Cutting |
 | [HADOOP-2085](https://issues.apache.org/jira/browse/HADOOP-2085) | Map-side 
joins on sorted, equally-partitioned datasets |  Major | . | Chris Douglas | 
Chris Douglas |
-| [HADOOP-2045](https://issues.apache.org/jira/browse/HADOOP-2045) | credits 
page should have more information |  Major | documentation | Doug Cutting | 
Doug Cutting |
-| [HADOOP-2012](https://issues.apache.org/jira/browse/HADOOP-2012) | Periodic 
verification at the Datanode |  Major | . | Raghu Angadi | Raghu Angadi |
-| 

[03/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
index a86f1e0..ea93496 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
@@ -23,102 +23,102 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-10295](https://issues.apache.org/jira/browse/HADOOP-10295) | *Major* 
| **Allow distcp to automatically identify the checksum type of source files 
and use it for the target**
+* [HDFS-5790](https://issues.apache.org/jira/browse/HDFS-5790) | *Major* | 
**LeaseManager.findPath is very slow when many leases need recovery**
 
-Add option for distcp to preserve the checksum type of the source files. Users 
can use "-pc" as distcp command option to preserve the checksum type.
+Committed to branch-2 and trunk.
 
 
 ---
 
-* [HADOOP-10221](https://issues.apache.org/jira/browse/HADOOP-10221) | *Major* 
| **Add a plugin to specify SaslProperties for RPC protocol based on connection 
properties**
-
-SaslPropertiesResolver  or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via 
"hadoop.security.saslproperties.resolver.class" configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the  connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.
+* [HADOOP-10295](https://issues.apache.org/jira/browse/HADOOP-10295) | *Major* 
| **Allow distcp to automatically identify the checksum type of source files 
and use it for the target**
 
-Note that this change, effectively removes SaslRpcServer.SASL\_PROPS which was 
a public field. Any use of this variable  should be replaced with the following 
code:
-SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
-Map\<String, String\> sasl\_props = saslPropsResolver.getDefaultProperties();
+Add option for distcp to preserve the checksum type of the source files. Users 
can use "-pc" as distcp command option to preserve the checksum type.
 
 
 ---
 
-* [HADOOP-10211](https://issues.apache.org/jira/browse/HADOOP-10211) | *Major* 
| **Enable RPC protocol to negotiate SASL-QOP values between clients and 
servers**
+* [HDFS-5804](https://issues.apache.org/jira/browse/HDFS-5804) | *Major* | 
**HDFS NFS Gateway fails to mount and proxy when using Kerberos**
 
-The hadoop.rpc.protection configuration property previously supported 
specifying a single value: one of authentication, integrity or privacy.  An 
unrecognized value was silently assumed to mean authentication.  This 
configuration property now accepts a comma-separated list of any of the 3 
values, and unrecognized values are rejected with an error. Existing 
configurations containing an invalid value must be corrected. If the property 
is empty or not specified, authentication is assumed.
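
A minimal sketch of the multi-valued form described in the HADOOP-10211 note directly above; the particular combination of QOP values is only an example.

```java
import org.apache.hadoop.conf.Configuration;

/** Illustrative sketch only: hadoop.rpc.protection as a comma-separated list. */
public class RpcProtectionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Any of authentication, integrity, privacy may be listed; unrecognized
    // values are now rejected instead of silently treated as authentication.
    conf.set("hadoop.rpc.protection", "privacy,integrity,authentication");
    for (String qop : conf.getTrimmedStrings("hadoop.rpc.protection")) {
      System.out.println(qop);
    }
  }
}
```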
+Fixes NFS on Kerberized cluster.
 
 
 ---
 
-* [HADOOP-8691](https://issues.apache.org/jira/browse/HADOOP-8691) | *Minor* | 
**FsShell can print "Found xxx items" unnecessarily often**
+* [HDFS-5698](https://issues.apache.org/jira/browse/HDFS-5698) | *Major* | 
**Use protobuf to serialize / deserialize FSImage**
 
-The `ls` command only prints "Found foo items" once when listing the 
directories recursively.
+Use protobuf to serialize/deserialize the FSImage.
 
 
 ---
 
-* [HDFS-6102](https://issues.apache.org/jira/browse/HDFS-6102) | *Blocker* | 
**Lower the default maximum items per directory to fix PB fsimage loading**
+* [HDFS-4370](https://issues.apache.org/jira/browse/HDFS-4370) | *Major* | 
**Fix typo Blanacer in DataNode**
 
-**WARNING: No release note provided for this incompatible change.**
+I just committed this. Thank you Chu.
 
 
 ---
 
-* [HDFS-6055](https://issues.apache.org/jira/browse/HDFS-6055) | *Major* | 
**Change default configuration to limit file name length in HDFS**
+* [HDFS-5776](https://issues.apache.org/jira/browse/HDFS-5776) | *Major* | 
**Support 'hedged' reads in DFSClient**
 
-The default configuration of HDFS now sets 
dfs.namenode.fs-limits.max-component-length to 255 for improved 
interoperability with other file system implementations.  This limits each 
component of a file system path to a maximum of 255 bytes in UTF-8 encoding.  
Attempts to create new files that violate this rule will fail with an error.  
Existing files that violate the rule are not effected.  Previously, 

[07/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
index e47ecd8..cf4d6c9 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
@@ -24,91 +24,79 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10020](https://issues.apache.org/jira/browse/HADOOP-10020) | disable 
symlinks temporarily |  Blocker | fs | Colin Patrick McCabe | Sanjay Radia |
 | [YARN-1229](https://issues.apache.org/jira/browse/YARN-1229) | Define 
constraints on Auxiliary Service names. Change ShuffleHandler service name from 
mapreduce.shuffle to mapreduce\_shuffle. |  Blocker | nodemanager | Tassapol 
Athiapinya | Xuan Gong |
 | [YARN-1228](https://issues.apache.org/jira/browse/YARN-1228) | Clean up Fair 
Scheduler configuration loading |  Major | scheduler | Sandy Ryza | Sandy Ryza |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-10020](https://issues.apache.org/jira/browse/HADOOP-10020) | disable 
symlinks temporarily |  Blocker | fs | Colin P. McCabe | Sanjay Radia |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
+| [HDFS-4817](https://issues.apache.org/jira/browse/HDFS-4817) | make HDFS 
advisory caching configurable on a per-file basis |  Minor | hdfs-client | 
Colin P. McCabe | Colin P. McCabe |
 | [HADOOP-9758](https://issues.apache.org/jira/browse/HADOOP-9758) | Provide 
configuration option for FileSystem/FileContext symlink resolution |  Major | . 
| Andrew Wang | Andrew Wang |
-| [HADOOP-8315](https://issues.apache.org/jira/browse/HADOOP-8315) | Support 
SASL-authenticated ZooKeeper in ActiveStandbyElector |  Major | auto-failover, 
ha | Todd Lipcon | Todd Lipcon |
-| [HDFS-5308](https://issues.apache.org/jira/browse/HDFS-5308) | Replace 
HttpConfig#getSchemePrefix with implicit schemes in HDFS JSP |  Major | . | 
Haohui Mai | Haohui Mai |
-| [HDFS-5256](https://issues.apache.org/jira/browse/HDFS-5256) | Use guava 
LoadingCache to implement DFSClientCache |  Major | nfs | Haohui Mai | Haohui 
Mai |
 | [HDFS-5139](https://issues.apache.org/jira/browse/HDFS-5139) | Remove 
redundant -R option from setrep |  Major | tools | Arpit Agarwal | Arpit 
Agarwal |
-| [HDFS-4817](https://issues.apache.org/jira/browse/HDFS-4817) | make HDFS 
advisory caching configurable on a per-file basis |  Minor | hdfs-client | 
Colin Patrick McCabe | Colin Patrick McCabe |
 | [YARN-1246](https://issues.apache.org/jira/browse/YARN-1246) | Log 
application status in the rm log when app is done running |  Minor | . | Arpit 
Gupta | Arpit Gupta |
+| [HDFS-5256](https://issues.apache.org/jira/browse/HDFS-5256) | Use guava 
LoadingCache to implement DFSClientCache |  Major | nfs | Haohui Mai | Haohui 
Mai |
+| [HADOOP-8315](https://issues.apache.org/jira/browse/HADOOP-8315) | Support 
SASL-authenticated ZooKeeper in ActiveStandbyElector |  Major | auto-failover, 
ha | Todd Lipcon | Todd Lipcon |
 | [YARN-1213](https://issues.apache.org/jira/browse/YARN-1213) | Restore 
config to ban submitting to undeclared pools in the Fair Scheduler |  Major | 
scheduler | Sandy Ryza | Sandy Ryza |
+| [HDFS-5308](https://issues.apache.org/jira/browse/HDFS-5308) | Replace 
HttpConfig#getSchemePrefix with implicit schemes in HDFS JSP |  Major | . | 
Haohui Mai | Haohui Mai |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10012](https://issues.apache.org/jira/browse/HADOOP-10012) | Secure 
Oozie jobs fail with delegation token renewal exception in Namenode HA setup |  
Blocker | ha | Arpit Gupta | Suresh Srinivas |
-| [HADOOP-10003](https://issues.apache.org/jira/browse/HADOOP-10003) | 
HarFileSystem.listLocatedStatus() fails |  Major | fs | Jason Dere |  |
-| [HADOOP-9976](https://issues.apache.org/jira/browse/HADOOP-9976) | Different 
versions of avro and avro-maven-plugin |  Major | . | Karthik Kambatla | 
Karthik Kambatla |
+| [HDFS-5031](https://issues.apache.org/jira/browse/HDFS-5031) | BlockScanner 
scans the block multiple times and on restart scans everything |  Blocker | 
datanode | Vinayakumar B | Vinayakumar B |
+| 

[17/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/RELEASENOTES.1.2.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/RELEASENOTES.1.2.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/RELEASENOTES.1.2.0.md
index 3fa573c..e7aad10 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/RELEASENOTES.1.2.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/RELEASENOTES.1.2.0.md
@@ -23,154 +23,154 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-8971](https://issues.apache.org/jira/browse/HADOOP-8971) | *Major* | 
**Backport: hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data 
(HADOOP-8926)**
+* [HDFS-385](https://issues.apache.org/jira/browse/HDFS-385) | *Major* | 
**Design a pluggable interface to place replicas of blocks in HDFS**
 
-Backport cache-aware improvements for PureJavaCrc32 from trunk (HADOOP-8926)
+New experimental API BlockPlacementPolicy allows investigating alternate rules 
for locating block replicas.
 
 
 ---
 
-* [HADOOP-8817](https://issues.apache.org/jira/browse/HADOOP-8817) | *Major* | 
**Backport Network Topology Extension for Virtualization (HADOOP-8468) to 
branch-1**
+* [HADOOP-8164](https://issues.apache.org/jira/browse/HADOOP-8164) | *Major* | 
**Handle paths using back slash as path separator for windows only**
 
-A new 4-layer network topology NetworkToplogyWithNodeGroup is available to 
make Hadoop more robust and efficient in virtualized environment.
+This jira only allows providing paths using back slash as separator on 
Windows. The back slash on \*nix system will be used as escape character. The 
support for paths using back slash as path separator will be removed in 
HADOOP-8139 in release 23.3.
 
 
 ---
 
-* [HADOOP-8470](https://issues.apache.org/jira/browse/HADOOP-8470) | *Major* | 
**Implementation of 4-layer subclass of NetworkTopology 
(NetworkTopologyWithNodeGroup)**
+* [MAPREDUCE-4415](https://issues.apache.org/jira/browse/MAPREDUCE-4415) | 
*Major* | **Backport the Job.getInstance methods from MAPREDUCE-1505 to 
branch-1**
 
-This patch should be checked in together (or after) with JIRA Hadoop-8469: 
https://issues.apache.org/jira/browse/HADOOP-8469
+Backported new APIs to get a Job object to 1.2.0 from 2.0.0. Job API static 
methods Job.getInstance(), Job.getInstance(Configuration) and 
Job.getInstance(Configuration, jobName) are now available across both releases 
to avoid porting pain.
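
A short sketch using the backported factories named above; the job name is arbitrary and input/output setup is omitted.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

/** Illustrative sketch only: creating a Job through the backported factories. */
public class JobGetInstanceExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Works the same way on branch-1.2 and 2.x, per the note above.
    Job job = Job.getInstance(conf, "example-job");
    job.setJarByClass(JobGetInstanceExample.class);
    System.out.println(job.getJobName());
  }
}
```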
 
 
 ---
 
-* [HADOOP-8164](https://issues.apache.org/jira/browse/HADOOP-8164) | *Major* | 
**Handle paths using back slash as path separator for windows only**
+* [HDFS-3697](https://issues.apache.org/jira/browse/HDFS-3697) | *Minor* | 
**Enable fadvise readahead by default**
 
-This jira only allows providing paths using back slash as separator on 
Windows. The back slash on \*nix system will be used as escape character. The 
support for paths using back slash as path separator will be removed in 
HADOOP-8139 in release 23.3.
+The datanode now performs 4MB readahead by default when reading data from its 
disks, if the native libraries are present. This has been shown to improve 
performance in many workloads. The feature may be disabled by setting 
dfs.datanode.readahead.bytes to "0".
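
For illustration, the same key can also raise the readahead window rather than disable it; the 8 MB figure below is an arbitrary example.

```java
import org.apache.hadoop.conf.Configuration;

/** Illustrative sketch only: tuning datanode readahead (HDFS-3697). */
public class ReadaheadConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 0 disables readahead; any other byte count replaces the 4MB default.
    conf.setLong("dfs.datanode.readahead.bytes", 8L * 1024 * 1024);
    System.out.println(conf.getLong("dfs.datanode.readahead.bytes", 0L));
  }
}
```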
 
 
 ---
 
-* [HADOOP-7698](https://issues.apache.org/jira/browse/HADOOP-7698) | 
*Critical* | **jsvc target fails on x86\_64**
+* [MAPREDUCE-4565](https://issues.apache.org/jira/browse/MAPREDUCE-4565) | 
*Major* | **Backport MR-2855 to branch-1: ResourceBundle lookup during counter 
name resolution takes a lot of time**
 
-The jsvc build target is now supported for Mac OSX and other platforms as well.
+Passing a cached class-loader to ResourceBundle creator to minimize counter 
names lookup time.
 
 
 ---
 
-* [HDFS-4519](https://issues.apache.org/jira/browse/HDFS-4519) | *Major* | 
**Support override of jsvc binary and log file locations when launching secure 
datanode.**
-
-With this improvement the following options are available in release 1.2.0 and 
later on 1.x release stream:
-1. jsvc location can be overridden by setting environment variable JSVC\_HOME. 
Defaults to jsvc binary packaged within the Hadoop distro.
-2. jsvc log output is directed to the file defined by JSVC\_OUTFILE. Defaults 
to $HADOOP\_LOG\_DIR/jsvc.out.
-3. jsvc error output is directed to the file defined by JSVC\_ERRFILE file.  
Defaults to $HADOOP\_LOG\_DIR/jsvc.err.
-
-With this improvement the following options are available in release 2.0.4 and 
later on 2.x release stream:
-1. jsvc log output is directed to the file defined by JSVC\_OUTFILE. Defaults 
to $HADOOP\_LOG\_DIR/jsvc.out.
-2. jsvc error output is directed to the file defined by JSVC\_ERRFILE file.  
Defaults to 

[12/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/CHANGES.2.0.3-alpha.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/CHANGES.2.0.3-alpha.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/CHANGES.2.0.3-alpha.md
index 6ec5fbb..8508485 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/CHANGES.2.0.3-alpha.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/CHANGES.2.0.3-alpha.md
@@ -24,571 +24,565 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9070](https://issues.apache.org/jira/browse/HADOOP-9070) | Kerberos 
SASL server cannot find kerberos key |  Blocker | ipc | Daryn Sharp | Daryn 
Sharp |
+| [MAPREDUCE-4123](https://issues.apache.org/jira/browse/MAPREDUCE-4123) | 
./mapred groups gives NoClassDefFoundError |  Critical | mrv2 | Nishan Shetty | 
Devaraj K |
+| [HDFS-4122](https://issues.apache.org/jira/browse/HDFS-4122) | Cleanup HDFS 
logs and reduce the size of logged messages |  Major | datanode, hdfs-client, 
namenode | Suresh Srinivas | Suresh Srinivas |
+| [HDFS-1331](https://issues.apache.org/jira/browse/HDFS-1331) | dfs -test 
should work like /bin/test |  Minor | tools | Allen Wittenauer | Andy Isaacson |
+| [HDFS-4080](https://issues.apache.org/jira/browse/HDFS-4080) | Add a 
separate logger for block state change logs to enable turning off those logs |  
Major | namenode | Kihwal Lee | Kihwal Lee |
 | [HADOOP-8999](https://issues.apache.org/jira/browse/HADOOP-8999) | SASL 
negotiation is flawed |  Major | ipc | Daryn Sharp | Daryn Sharp |
-| [HDFS-4451](https://issues.apache.org/jira/browse/HDFS-4451) | hdfs balancer 
command returns exit code 1 on success instead of 0 |  Major | balancer & mover 
| Joshua Blatt |  |
-| [HDFS-4369](https://issues.apache.org/jira/browse/HDFS-4369) | 
GetBlockKeysResponseProto does not handle null response |  Blocker | namenode | 
Suresh Srinivas | Suresh Srinivas |
+| [HDFS-4362](https://issues.apache.org/jira/browse/HDFS-4362) | 
GetDelegationTokenResponseProto does not handle null token |  Critical | . | 
Suresh Srinivas | Suresh Srinivas |
 | [HDFS-4367](https://issues.apache.org/jira/browse/HDFS-4367) | 
GetDataEncryptionKeyResponseProto  does not handle null response |  Blocker | 
namenode | Suresh Srinivas | Suresh Srinivas |
 | [HDFS-4364](https://issues.apache.org/jira/browse/HDFS-4364) | 
GetLinkTargetResponseProto does not handle null path |  Blocker | . | Suresh 
Srinivas | Suresh Srinivas |
-| [HDFS-4362](https://issues.apache.org/jira/browse/HDFS-4362) | 
GetDelegationTokenResponseProto does not handle null token |  Critical | . | 
Suresh Srinivas | Suresh Srinivas |
-| [HDFS-4350](https://issues.apache.org/jira/browse/HDFS-4350) | Make enabling 
of stale marking on read and write paths independent |  Major | . | Andrew Wang 
| Andrew Wang |
-| [HDFS-4122](https://issues.apache.org/jira/browse/HDFS-4122) | Cleanup HDFS 
logs and reduce the size of logged messages |  Major | datanode, hdfs-client, 
namenode | Suresh Srinivas | Suresh Srinivas |
-| [HDFS-4080](https://issues.apache.org/jira/browse/HDFS-4080) | Add a 
separate logger for block state change logs to enable turning off those logs |  
Major | namenode | Kihwal Lee | Kihwal Lee |
-| [HDFS-1331](https://issues.apache.org/jira/browse/HDFS-1331) | dfs -test 
should work like /bin/test |  Minor | tools | Allen Wittenauer | Andy Isaacson |
+| [HDFS-4369](https://issues.apache.org/jira/browse/HDFS-4369) | 
GetBlockKeysResponseProto does not handle null response |  Blocker | namenode | 
Suresh Srinivas | Suresh Srinivas |
 | [MAPREDUCE-4928](https://issues.apache.org/jira/browse/MAPREDUCE-4928) | Use 
token request messages defined in hadoop common |  Major | applicationmaster, 
security | Suresh Srinivas | Suresh Srinivas |
-| [MAPREDUCE-4123](https://issues.apache.org/jira/browse/MAPREDUCE-4123) | 
./mapred groups gives NoClassDefFoundError |  Critical | mrv2 | Nishan Shetty | 
Devaraj K |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HDFS-4451](https://issues.apache.org/jira/browse/HDFS-4451) | hdfs balancer 
command returns exit code 1 on success instead of 0 |  Major | balancer & mover 
| Joshua Blatt |  |
+| [HDFS-4350](https://issues.apache.org/jira/browse/HDFS-4350) | Make enabling 
of stale marking on read and write paths independent |  Major | . | Andrew Wang 
| Andrew Wang |
+| [HADOOP-9070](https://issues.apache.org/jira/browse/HADOOP-9070) | Kerberos 
SASL server cannot find kerberos key |  Blocker | ipc | Daryn Sharp | Daryn 
Sharp |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | 

[11/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/RELEASENOTES.2.0.3-alpha.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/RELEASENOTES.2.0.3-alpha.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/RELEASENOTES.2.0.3-alpha.md
index f924b91..e22f889 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/RELEASENOTES.2.0.3-alpha.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.3-alpha/RELEASENOTES.2.0.3-alpha.md
@@ -23,44 +23,42 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-9147](https://issues.apache.org/jira/browse/HADOOP-9147) | *Trivial* 
| **Add missing fields to FIleStatus.toString**
+* [HDFS-3703](https://issues.apache.org/jira/browse/HDFS-3703) | *Major* | 
**Decrease the datanode failure detection time**
 
-Update FileStatus.toString to include missing fields
+This jira adds a new DataNode state called "stale" at the NameNode. A DataNode 
is marked as stale if it does not send a heartbeat message to the NameNode within 
the timeout configured via the configuration parameter 
"dfs.namenode.stale.datanode.interval" in seconds (default value is 30 
seconds). The NameNode picks a stale datanode as the last target to read from 
when returning block locations for reads.
+
+This feature is by default turned \* off \*. To turn on the feature, set the 
HDFS configuration "dfs.namenode.check.stale.datanode" to true.
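
A hedged configuration sketch based only on the keys named in this note (the interval unit follows the note's wording; both properties normally go into hdfs-site.xml on the NameNode):

```java
import org.apache.hadoop.conf.Configuration;

public class StaleDataNodeConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Feature is off by default per the note; turn it on explicitly.
    conf.setBoolean("dfs.namenode.check.stale.datanode", true);
    // Timeout after which a DataNode is considered stale (seconds, per the note).
    conf.setLong("dfs.namenode.stale.datanode.interval", 30L);
    System.out.println(conf.get("dfs.namenode.check.stale.datanode"));
  }
}
```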
 
 
 ---
 
-* [HADOOP-9119](https://issues.apache.org/jira/browse/HADOOP-9119) | *Minor* | 
**Add test to FileSystemContractBaseTest to verify integrity of overwritten 
files**
+* [MAPREDUCE-4123](https://issues.apache.org/jira/browse/MAPREDUCE-4123) | 
*Critical* | **./mapred groups gives NoClassDefFoundError**
 
-Patches adds more tests to verify overwritten and more complex operations 
-write-delete-overwrite. By using differently sized datasets and different data 
inside, these tests verify that the overwrite really did take place. While HDFS 
meets all these requirements directly, eventually consistent object stores may 
not -hence these tests.
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [HADOOP-9118](https://issues.apache.org/jira/browse/HADOOP-9118) | *Trivial* 
| **FileSystemContractBaseTest test data for read/write isn't rigorous enough**
+* [MAPREDUCE-3678](https://issues.apache.org/jira/browse/MAPREDUCE-3678) | 
*Major* | **The Map tasks logs should have the value of input split it 
processed**
 
-Resolved as part of HADOOP-9119 -it's test data generator creates more bits in 
every test byte
+A map task's syslog now carries basic info on the InputSplit it processed.
 
 
 ---
 
-* [HADOOP-9106](https://issues.apache.org/jira/browse/HADOOP-9106) | *Major* | 
**Allow configuration of IPC connect timeout**
-
-This jira introduces a new configuration parameter 
"ipc.client.connect.timeout". This configuration defines the Hadoop RPC 
connection timeout in milliseconds for a client to connect to a server. For 
details see the description associated with this configuration in 
core-default.xml.
+* [HDFS-4059](https://issues.apache.org/jira/browse/HDFS-4059) | *Minor* | 
**Add number of stale DataNodes to metrics**
 
+This jira adds a new metric named "StaleDataNodes" under the metrics context 
"dfs", of type Gauge. It tracks the number of DataNodes marked as stale. A 
DataNode is marked stale when its heartbeat message is not received within the 
time configured by "dfs.namenode.stale.datanode.interval".
 

 
-* [HADOOP-9070](https://issues.apache.org/jira/browse/HADOOP-9070) | *Blocker* 
| **Kerberos SASL server cannot find kerberos key**
-
-**WARNING: No release note provided for this incompatible change.**
+Please see the hdfs-default.xml documentation for 
"dfs.namenode.stale.datanode.interval" for more details on how to configure 
this feature. When the feature is not configured, this metric returns zero.
 
 
 ---
 
-* [HADOOP-8999](https://issues.apache.org/jira/browse/HADOOP-8999) | *Major* | 
**SASL negotiation is flawed**
+* [HADOOP-8922](https://issues.apache.org/jira/browse/HADOOP-8922) | *Trivial* 
| **Provide alternate JSONP output for JMXJsonServlet to allow javascript in 
browser dashboard**
 
-The RPC SASL negotiation now always ends with final response.  If the SASL 
mechanism does not have a final response (GSSAPI, PLAIN), then an empty success 
response is sent to the client.  The client will now always expect a final 
response to definitively know if negotiation is complete/successful.
+Add a JSONP alternative output for the /jmx HTTP interface to provide 
JavaScript polling ability in browsers.
 
 
 ---
@@ -72,109 +70,100 @@ Speed up Crc32 by improving the cache hit-ratio of 

[16/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/CHANGES.2.0.0-alpha.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/CHANGES.2.0.0-alpha.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/CHANGES.2.0.0-alpha.md
index 13179e4..6599034 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/CHANGES.2.0.0-alpha.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.0-alpha/CHANGES.2.0.0-alpha.md
@@ -24,48 +24,42 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8314](https://issues.apache.org/jira/browse/HADOOP-8314) | 
HttpServer#hasAdminAccess should return false if authorization is enabled but 
user is not authenticated |  Major | security | Alejandro Abdelnur | Alejandro 
Abdelnur |
-| [HADOOP-8270](https://issues.apache.org/jira/browse/HADOOP-8270) | 
hadoop-daemon.sh stop action should return 0 for an already stopped service |  
Minor | scripts | Roman Shaposhnik | Roman Shaposhnik |
-| [HADOOP-8184](https://issues.apache.org/jira/browse/HADOOP-8184) | ProtoBuf 
RPC engine does not need it own reply packet - it can use the IPC layer reply 
packet. |  Major | ipc | Sanjay Radia | Sanjay Radia |
+| [HDFS-395](https://issues.apache.org/jira/browse/HDFS-395) | DFS 
Scalability: Incremental block reports |  Major | datanode, namenode | dhruba 
borthakur | Tomasz Nykiel |
+| [HADOOP-7524](https://issues.apache.org/jira/browse/HADOOP-7524) | Change 
RPC to allow multiple protocols including multiple versions of the same 
protocol |  Major | ipc | Sanjay Radia | Sanjay Radia |
+| [HDFS-2303](https://issues.apache.org/jira/browse/HDFS-2303) | Unbundle jsvc 
|  Major | build, scripts | Roman Shaposhnik | Mingjie Lai |
+| [HDFS-3044](https://issues.apache.org/jira/browse/HDFS-3044) | fsck move 
should be non-destructive by default |  Major | namenode | Eli Collins | Colin 
P. McCabe |
 | [HADOOP-8154](https://issues.apache.org/jira/browse/HADOOP-8154) | 
DNS#getIPs shouldn't silently return the local host IP for bogus interface 
names |  Major | conf | Eli Collins | Eli Collins |
+| [HADOOP-8184](https://issues.apache.org/jira/browse/HADOOP-8184) | ProtoBuf 
RPC engine does not need it own reply packet - it can use the IPC layer reply 
packet. |  Major | ipc | Sanjay Radia | Sanjay Radia |
 | [HADOOP-8149](https://issues.apache.org/jira/browse/HADOOP-8149) | cap space 
usage of default log4j rolling policy |  Major | conf | Patrick Hunt | Patrick 
Hunt |
-| [HADOOP-7524](https://issues.apache.org/jira/browse/HADOOP-7524) | Change 
RPC to allow multiple protocols including multiple versions of the same 
protocol |  Major | ipc | Sanjay Radia | Sanjay Radia |
-| [HDFS-3286](https://issues.apache.org/jira/browse/HDFS-3286) | When the 
threshold value for balancer is 0(zero) ,unexpected output is displayed |  
Major | balancer & mover | J.Andreina | Ashish Singhi |
+| [HDFS-3137](https://issues.apache.org/jira/browse/HDFS-3137) | Bump 
LAST\_UPGRADABLE\_LAYOUT\_VERSION to -16 |  Major | namenode | Eli Collins | 
Eli Collins |
+| [HDFS-3138](https://issues.apache.org/jira/browse/HDFS-3138) | Move 
DatanodeInfo#ipcPort to DatanodeID |  Major | . | Eli Collins | Eli Collins |
 | [HDFS-3164](https://issues.apache.org/jira/browse/HDFS-3164) | Move 
DatanodeInfo#hostName to DatanodeID |  Major | datanode | Eli Collins | Eli 
Collins |
 | [HDFS-3144](https://issues.apache.org/jira/browse/HDFS-3144) | Refactor 
DatanodeID#getName by use |  Major | datanode | Eli Collins | Eli Collins |
-| [HDFS-3138](https://issues.apache.org/jira/browse/HDFS-3138) | Move 
DatanodeInfo#ipcPort to DatanodeID |  Major | . | Eli Collins | Eli Collins |
-| [HDFS-3137](https://issues.apache.org/jira/browse/HDFS-3137) | Bump 
LAST\_UPGRADABLE\_LAYOUT\_VERSION to -16 |  Major | namenode | Eli Collins | 
Eli Collins |
-| [HDFS-3044](https://issues.apache.org/jira/browse/HDFS-3044) | fsck move 
should be non-destructive by default |  Major | namenode | Eli Collins | Colin 
Patrick McCabe |
-| [HDFS-2303](https://issues.apache.org/jira/browse/HDFS-2303) | Unbundle jsvc 
|  Major | build, scripts | Roman Shaposhnik | Mingjie Lai |
-| [HDFS-395](https://issues.apache.org/jira/browse/HDFS-395) | DFS 
Scalability: Incremental block reports |  Major | datanode, namenode | dhruba 
borthakur | Tomasz Nykiel |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-8270](https://issues.apache.org/jira/browse/HADOOP-8270) | 
hadoop-daemon.sh stop action should return 0 for an already stopped service |  
Minor | scripts | Roman Shaposhnik | Roman Shaposhnik |
+| 

[05/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
index 43dc922..ad29c29 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
@@ -23,13 +23,6 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-10047](https://issues.apache.org/jira/browse/HADOOP-10047) | *Major* 
| **Add a directbuffer Decompressor API to hadoop**
-
-Direct Bytebuffer decompressors for Zlib (Deflate & Gzip) and Snappy
-
-

-
 * [HADOOP-9241](https://issues.apache.org/jira/browse/HADOOP-9241) | *Trivial* 
| **DU refresh interval is not configurable**
 
 The 'du' (disk usage command from Unix) script refresh monitor is now 
configurable in the same way as its 'df' counterpart, via the property 
'fs.du.interval', the default of which is 10 minute (in ms).
@@ -73,21 +66,32 @@ Additional information specified on github: 
https://github.com/DmitryMezhensky/H
 
 ---
 
-* [HDFS-5704](https://issues.apache.org/jira/browse/HDFS-5704) | *Major* | 
**Change OP\_UPDATE\_BLOCKS  with a new OP\_ADD\_BLOCK**
+* [MAPREDUCE-1176](https://issues.apache.org/jira/browse/MAPREDUCE-1176) | 
*Major* | **FixedLengthInputFormat and FixedLengthRecordReader**
 
-Add a new editlog record (OP\_ADD\_BLOCK) that only records allocation of the 
new block instead of the entire block list, on every block allocation.
+Addition of FixedLengthInputFormat and FixedLengthRecordReader in the 
org.apache.hadoop.mapreduce.lib.input package. These two classes can be used 
when you need to read data from files containing fixed length (fixed width) 
records. Such files have no CR/LF (or any combination thereof), no delimiters 
etc, but each record is a fixed length, and extra data is padded with spaces. 
The data is one gigantic line within a file. When creating a job that specifies 
this input format, the job must have the 
"mapreduce.input.fixedlengthinputformat.record.length" property set as follows 
myJobConf.setInt("mapreduce.input.fixedlengthinputformat.record.length",[myFixedRecordLength]);
+
+Please see javadoc for more details.
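
A minimal job-setup sketch for the new input format (record length and paths are illustrative; mapper/reducer wiring is omitted):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;

public class FixedLengthJobSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Record length must be set, exactly as the note describes (80 is illustrative).
    conf.setInt("mapreduce.input.fixedlengthinputformat.record.length", 80);

    Job job = Job.getInstance(conf, "fixed-length-read");   // job name is illustrative
    job.setInputFormatClass(FixedLengthInputFormat.class);
    FileInputFormat.addInputPath(job, new Path("/data/fixed-width")); // path is illustrative
    // ...set mapper/reducer and output path, then job.waitForCompletion(true)
  }
}
```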
 
 
 ---
 
-* [HDFS-5663](https://issues.apache.org/jira/browse/HDFS-5663) | *Major* | 
**make the retry time and interval value configurable in openInfo()**
+* [HDFS-5502](https://issues.apache.org/jira/browse/HDFS-5502) | *Major* | 
**Fix HTTPS support in HsftpFileSystem**
 
-Makes the retries and time between retries getting the length of the last 
block on file configurable.  Below are the new configurations.
+Fix the https support in HsftpFileSystem. With this change the client now 
verifies the server certificate. In particular, the client verifies the 
Common Name of the certificate using a strategy specified by the configuration 
property "hadoop.ssl.hostname.verifier".
 
-dfs.client.retry.times.get-last-block-length
-dfs.client.retry.interval-ms.get-last-block-length
 
-They are set to the 3 and 4000 respectively, these being what was previously 
hardcoded.
+---
+
+* [HADOOP-10047](https://issues.apache.org/jira/browse/HADOOP-10047) | *Major* 
| **Add a directbuffer Decompressor API to hadoop**
+
+Direct Bytebuffer decompressors for Zlib (Deflate & Gzip) and Snappy
+
+
+---
+
+* [HDFS-4997](https://issues.apache.org/jira/browse/HDFS-4997) | *Major* | 
**libhdfs doesn't return correct error codes in most cases**
+
+libhdfs now returns correct codes in errno. Previously, due to a bug, many 
functions set errno to 255 instead of the more specific error code.
 
 
 ---
@@ -108,32 +112,28 @@ hadoop.ssl.enabled and dfs.https.enabled are deprecated. 
When the deprecated con
 
 ---
 
-* [HDFS-5502](https://issues.apache.org/jira/browse/HDFS-5502) | *Major* | 
**Fix HTTPS support in HsftpFileSystem**
+* [HDFS-4983](https://issues.apache.org/jira/browse/HDFS-4983) | *Major* | 
**Numeric usernames do not work with WebHDFS FS**
 
-Fix the https support in HsftpFileSystem. With the change the client now 
verifies the server certificate. In particular, client side will verify the 
Common Name of the certificate using a strategy specified by the configuration 
property "hadoop.ssl.hostname.verifier".
+Add a new configuration property "dfs.webhdfs.user.provider.user.pattern" for 
specifying user name filters for WebHDFS.
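
A hedged sketch of the new property; the regular expression below is an illustrative pattern that also admits purely numeric user names, not necessarily the shipped default.

```java
import org.apache.hadoop.conf.Configuration;

public class WebHdfsUserPatternSketch {
  public static void main(String[] args) {
    // Server-side property; the pattern is an assumption chosen so that
    // numeric user names (the subject of this JIRA) are accepted.
    Configuration conf = new Configuration();
    conf.set("dfs.webhdfs.user.provider.user.pattern",
        "^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");
    System.out.println(conf.get("dfs.webhdfs.user.provider.user.pattern"));
  }
}
```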
 
 
 ---
 
-* [HDFS-4997](https://issues.apache.org/jira/browse/HDFS-4997) | *Major* | 
**libhdfs doesn't return correct error codes in most cases**
-
-libhdfs now returns correct codes in errno. Previously, due to a bug, many 
functions set errno to 255 instead of the more 

[04/73] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
index 06e9c9b..4426ba9 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
@@ -24,27 +24,21 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8691](https://issues.apache.org/jira/browse/HADOOP-8691) | FsShell 
can print "Found xxx items" unnecessarily often |  Minor | fs | Jason Lowe | 
Daryn Sharp |
-| [HDFS-6102](https://issues.apache.org/jira/browse/HDFS-6102) | Lower the 
default maximum items per directory to fix PB fsimage loading |  Blocker | 
namenode | Andrew Wang | Andrew Wang |
-| [HDFS-6055](https://issues.apache.org/jira/browse/HDFS-6055) | Change 
default configuration to limit file name length in HDFS |  Major | namenode | 
Suresh Srinivas | Chris Nauroth |
 | [HDFS-5804](https://issues.apache.org/jira/browse/HDFS-5804) | HDFS NFS 
Gateway fails to mount and proxy when using Kerberos |  Major | nfs | Abin 
Shahab | Abin Shahab |
+| [HADOOP-8691](https://issues.apache.org/jira/browse/HADOOP-8691) | FsShell 
can print "Found xxx items" unnecessarily often |  Minor | fs | Jason Lowe | 
Daryn Sharp |
 | [HDFS-5321](https://issues.apache.org/jira/browse/HDFS-5321) | Clean up the 
HTTP-related configuration in HDFS |  Major | . | Haohui Mai | Haohui Mai |
+| [HDFS-6055](https://issues.apache.org/jira/browse/HDFS-6055) | Change 
default configuration to limit file name length in HDFS |  Major | namenode | 
Suresh Srinivas | Chris Nauroth |
+| [HDFS-6102](https://issues.apache.org/jira/browse/HDFS-6102) | Lower the 
default maximum items per directory to fix PB fsimage loading |  Blocker | 
namenode | Andrew Wang | Andrew Wang |
 | [HDFS-5138](https://issues.apache.org/jira/browse/HDFS-5138) | Support HDFS 
upgrade in HA |  Blocker | . | Kihwal Lee | Aaron T. Myers |
 | [MAPREDUCE-5036](https://issues.apache.org/jira/browse/MAPREDUCE-5036) | 
Default shuffle handler port should not be 8080 |  Major | . | Sandy Ryza | 
Sandy Ryza |
 
 
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10184](https://issues.apache.org/jira/browse/HADOOP-10184) | Hadoop 
Common changes required to support HDFS ACLs. |  Major | fs, security | Chris 
Nauroth | Chris Nauroth |
 | [HDFS-5535](https://issues.apache.org/jira/browse/HDFS-5535) | Umbrella jira 
for improved HDFS rolling upgrades |  Major | datanode, ha, hdfs-client, 
namenode | Nathan Roberts | Tsz Wo Nicholas Sze |
+| [HADOOP-10184](https://issues.apache.org/jira/browse/HADOOP-10184) | Hadoop 
Common changes required to support HDFS ACLs. |  Major | fs, security | Chris 
Nauroth | Chris Nauroth |
 | [HDFS-4685](https://issues.apache.org/jira/browse/HDFS-4685) | 
Implementation of ACLs in HDFS |  Major | hdfs-client, namenode, security | 
Sachin Jose | Chris Nauroth |
 
 
@@ -52,432 +46,432 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10423](https://issues.apache.org/jira/browse/HADOOP-10423) | Clarify 
compatibility policy document for combination of new client and old server. |  
Minor | documentation | Chris Nauroth | Chris Nauroth |
-| [HADOOP-10386](https://issues.apache.org/jira/browse/HADOOP-10386) | Log 
proxy hostname in various exceptions being thrown in a HA setup |  Minor | ha | 
Arpit Gupta | Haohui Mai |
-| [HADOOP-10383](https://issues.apache.org/jira/browse/HADOOP-10383) | 
InterfaceStability annotations should have RetentionPolicy.RUNTIME |  Major | . 
| Enis Soztutar | Enis Soztutar |
-| [HADOOP-10379](https://issues.apache.org/jira/browse/HADOOP-10379) | Protect 
authentication cookies with the HttpOnly and Secure flags |  Major | . | Haohui 
Mai | Haohui Mai |
-| [HADOOP-10374](https://issues.apache.org/jira/browse/HADOOP-10374) | 
InterfaceAudience annotations should have RetentionPolicy.RUNTIME |  Major | . 
| Enis Soztutar | Enis Soztutar |
-| [HADOOP-10348](https://issues.apache.org/jira/browse/HADOOP-10348) | 
Deprecate hadoop.ssl.configuration in branch-2, and remove it in trunk |  Major 
| . | Haohui Mai | Haohui Mai |
-| [HADOOP-10343](https://issues.apache.org/jira/browse/HADOOP-10343) | Change 
info to debug log in LossyRetryInvocationHandler |  Minor | . | Arpit Gupta | 
Arpit Gupta |
-| 

hadoop git commit: YARN-6721. container-executor should have stack checking

2017-08-31 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk 190410085 -> 0adc3a053


YARN-6721. container-executor should have stack checking

Signed-off-by: Chris Douglas 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0adc3a05
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0adc3a05
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0adc3a05

Branch: refs/heads/trunk
Commit: 0adc3a0533e90c8a42c5924be4847753e7f8d281
Parents: 1904100
Author: Allen Wittenauer 
Authored: Fri Jun 23 11:39:37 2017 -0700
Committer: Allen Wittenauer 
Committed: Thu Aug 31 19:39:31 2017 -0700

--
 .../hadoop-common/HadoopCommon.cmake|  7 ++-
 .../src/CMakeLists.txt  | 45 
 2 files changed, 48 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0adc3a05/hadoop-common-project/hadoop-common/HadoopCommon.cmake
--
diff --git a/hadoop-common-project/hadoop-common/HadoopCommon.cmake 
b/hadoop-common-project/hadoop-common/HadoopCommon.cmake
index faabeed..63de1de 100644
--- a/hadoop-common-project/hadoop-common/HadoopCommon.cmake
+++ b/hadoop-common-project/hadoop-common/HadoopCommon.cmake
@@ -121,7 +121,9 @@ endmacro()
 # set the shared compiler flags
 # support for GNU C/C++, add other compilers as necessary
 
-if (CMAKE_C_COMPILER_ID STREQUAL "GNU")
+if (CMAKE_C_COMPILER_ID STREQUAL "GNU" OR
+CMAKE_C_COMPILER_ID STREQUAL "Clang" OR
+CMAKE_C_COMPILER_ID STREQUAL "AppleClang")
   if(NOT DEFINED GCC_SHARED_FLAGS)
 find_package(Threads REQUIRED)
 if(CMAKE_USE_PTHREADS_INIT)
@@ -130,9 +132,6 @@ if (CMAKE_C_COMPILER_ID STREQUAL "GNU")
   set(GCC_SHARED_FLAGS "-g -O2 -Wall -D_FILE_OFFSET_BITS=64")
 endif()
   endif()
-elseif (CMAKE_C_COMPILER_ID STREQUAL "Clang" OR
-CMAKE_C_COMPILER_ID STREQUAL "AppleClang")
-  set(GCC_SHARED_FLAGS "-g -O2 -Wall -D_FILE_OFFSET_BITS=64")
 endif()
 
 # Set the shared linker flags.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0adc3a05/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
index 7f2b00d..3d5b506 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
@@ -53,6 +53,51 @@ if(APPLE)
   set(EXTRA_LIBS ${COCOA_LIBRARY})
 endif(APPLE)
 
+include(CheckCCompilerFlag)
+
+# Building setuid = attempt to enable stack protection.
+# assumption here is that the C compiler and the C++
+# compiler match.  need both so that gtest gets same
+# stack treatment that the real c-e does
+IF(CMAKE_C_COMPILER_ID STREQUAL "GNU")
+CHECK_C_COMPILER_FLAG("-fstack-check" STACKRESULT)
+IF(STACKRESULT)
+  SET (CMAKE_C_FLAGS "-fstack-check ${CMAKE_C_FLAGS}")
+  SET (CMAKE_CXX_FLAGS "-fstack-check ${CMAKE_CXX_FLAGS}")
+ENDIF()
+ELSEIF(CMAKE_C_COMPILER_ID STREQUAL "Clang" OR
+   CMAKE_C_COMPILER_ID STREQUAL "AppleClang")
+
+  # clang is a bit difficult here:
+  # - some versions don't support the flag
+  # - some versions support the flag, despite not having
+  #   the library that is actually required (!)
+  # Notably, Xcode is a problem here.
+  # In the end, this is needlessly complex. :(
+
+  SET(PRE_SANITIZE ${CMAKE_REQUIRED_FLAGS})
+  SET(CMAKE_REQUIRED_FLAGS "-fsanitize=safe-stack ${CMAKE_REQUIRED_FLAGS}")
+  CHECK_C_COMPILER_FLAG("" STACKRESULT)
+  SET(CMAKE_REQUIRED_FLAGS ${PRE_SANITIZE})
+  IF(STACKRESULT)
+ SET(CMAKE_C_FLAGS "-fsanitize=safe-stack ${CMAKE_C_FLAGS}")
+ SET(CMAKE_CXX_FLAGS "-fsanitize=safe-stack ${CMAKE_CXX_FLAGS}")
+  ENDIF()
+ELSEIF(CMAKE_C_COMPILER_ID STREQUAL "SunPro")
+
+  # this appears to only be supported on SPARC, for some reason
+
+  CHECK_C_COMPILER_FLAG("-xcheck=stkovf" STACKRESULT)
+  IF(STACKRESULT)
+SET (CMAKE_C_FLAGS "-xcheck=stkovf ${CMAKE_C_FLAGS}")
+SET (CMAKE_CXX_FLAGS "-xcheck=stkovf ${CMAKE_CXX_FLAGS}")
+  ENDIF()
+ENDIF()
+
+IF(NOT STACKRESULT)
+   MESSAGE(WARNING "Stack Clash security protection is not supported.")
+ENDIF()
+
 function(output_directory TGT DIR)
 set_target_properties(${TGT} PROPERTIES
 RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/${DIR}")



hadoop git commit: HDFS-12317. HDFS metrics render error in the page of Github. Contributed by Yiqun Lin.

2017-08-31 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 004231dc0 -> 41d8e4e9b


HDFS-12317. HDFS metrics render error in the page of Github. Contributed by 
Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/41d8e4e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/41d8e4e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/41d8e4e9

Branch: refs/heads/branch-2
Commit: 41d8e4e9b3d575316143134689dc6fc041f6ad8c
Parents: 004231d
Author: Yiqun Lin 
Authored: Fri Sep 1 10:13:01 2017 +0800
Committer: Yiqun Lin 
Committed: Fri Sep 1 10:13:01 2017 +0800

--
 .../hadoop-common/src/site/markdown/Metrics.md  | 24 ++--
 1 file changed, 12 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/41d8e4e9/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index cc46148..dcf7b10 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -180,8 +180,8 @@ Each metrics record contains tags such as ProcessName, 
SessionId, and Hostname a
 | `GenerateEDEKTimeAvgTime` | Average time of generating EDEK in milliseconds |
 | `WarmUpEDEKTimeNumOps` | Total number of warming up EDEK |
 | `WarmUpEDEKTimeAvgTime` | Average time of warming up EDEK in milliseconds |
-| `ResourceCheckTime`*num*`s(50|75|90|95|99)thPercentileLatency` | The 
50/75/90/95/99th percentile of NameNode resource check latency in milliseconds. 
Percentile measurement is off by default, by watching no intervals. The 
intervals are specified by `dfs.metrics.percentiles.intervals`. |
-| `BlockReport`*num*`s(50|75|90|95|99)thPercentileLatency` | The 
50/75/90/95/99th percentile of storage block report latency in milliseconds. 
Percentile measurement is off by default, by watching no intervals. The 
intervals are specified by `dfs.metrics.percentiles.intervals`. |
+| `ResourceCheckTime`*num*`s(50/75/90/95/99)thPercentileLatency` | The 
50/75/90/95/99th percentile of NameNode resource check latency in milliseconds. 
Percentile measurement is off by default, by watching no intervals. The 
intervals are specified by `dfs.metrics.percentiles.intervals`. |
+| `BlockReport`*num*`s(50/75/90/95/99)thPercentileLatency` | The 
50/75/90/95/99th percentile of storage block report latency in milliseconds. 
Percentile measurement is off by default, by watching no intervals. The 
intervals are specified by `dfs.metrics.percentiles.intervals`. |
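
A hedged sketch of how the percentile metrics above are switched on; the value format (comma-separated rollover intervals, in seconds) is an assumption to verify against your release's documentation.

```java
import org.apache.hadoop.conf.Configuration;

public class PercentileMetricsConfigSketch {
  public static void main(String[] args) {
    // Percentile latency metrics stay off until intervals are configured.
    // "60,300" (one-minute and five-minute windows) is an illustrative value.
    Configuration conf = new Configuration();
    conf.set("dfs.metrics.percentiles.intervals", "60,300");
    System.out.println(conf.get("dfs.metrics.percentiles.intervals"));
  }
}
```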
 
 FSNamesystem
 
@@ -242,8 +242,8 @@ Each metrics record contains tags such as HAState and 
Hostname as additional inf
 | `NumInMaintenanceLiveDataNodes` | Number of live Datanodes which are in 
maintenance state |
 | `NumInMaintenanceDeadDataNodes` | Number of dead Datanodes which are in 
maintenance state |
 | `NumEnteringMaintenanceDataNodes` | Number of Datanodes that are entering 
the maintenance state |
-| `FSN(Read|Write)Lock`*OperationName*`NumOps` | Total number of acquiring 
lock by operations |
-| `FSN(Read|Write)Lock`*OperationName*`AvgTime` | Average time of holding the 
lock by operations in milliseconds |
+| `FSN(Read/Write)Lock`*OperationName*`NumOps` | Total number of acquiring 
lock by operations |
+| `FSN(Read/Write)Lock`*OperationName*`AvgTime` | Average time of holding the 
lock by operations in milliseconds |
 
 JournalNode
 ---
@@ -310,13 +310,13 @@ Each metrics record contains tags such as SessionId and 
Hostname as additional i
 | `RamDiskBlocksEvictedWithoutRead` | Total number of blocks evicted in memory 
without ever being read from memory |
 | `RamDiskBlocksEvictionWindowMsNumOps` | Number of blocks evicted in memory|
 | `RamDiskBlocksEvictionWindowMsAvgTime` | Average time of blocks in memory 
before being evicted in milliseconds |
-| `RamDiskBlocksEvictionWindows`*num*`s(50|75|90|95|99)thPercentileLatency` | 
The 50/75/90/95/99th percentile of latency between memory write and eviction in 
milliseconds. Percentile measurement is off by default, by watching no 
intervals. The intervals are specified by `dfs.metrics.percentiles.intervals`. |
+| `RamDiskBlocksEvictionWindows`*num*`s(50/75/90/95/99)thPercentileLatency` | 
The 50/75/90/95/99th percentile of latency between memory write and eviction in 
milliseconds. Percentile measurement is off by default, by watching no 
intervals. The intervals are specified by `dfs.metrics.percentiles.intervals`. |
 | `RamDiskBlocksLazyPersisted` | Total number of blocks written to disk by 
lazy writer |
 | 

[34/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
index 75c62a1..1026058 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/CHANGES.0.21.0.md
@@ -24,1343 +24,1337 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-6701](https://issues.apache.org/jira/browse/HADOOP-6701) |  
Incorrect exit codes for "dfs -chown", "dfs -chgrp" |  Minor | fs | Ravi 
Phulari | Ravi Phulari |
-| [HADOOP-6686](https://issues.apache.org/jira/browse/HADOOP-6686) | Remove 
redundant exception class name in unwrapped exceptions thrown at the RPC client 
|  Major | . | Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-6577](https://issues.apache.org/jira/browse/HADOOP-6577) | IPC 
server response buffer reset threshold should be configurable |  Major | . | 
Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-6569](https://issues.apache.org/jira/browse/HADOOP-6569) | 
FsShell#cat should avoid calling unecessary getFileStatus before opening a file 
to read |  Major | fs | Hairong Kuang | Hairong Kuang |
-| [HADOOP-6367](https://issues.apache.org/jira/browse/HADOOP-6367) | Move 
Access Token implementation from Common to HDFS |  Major | security | Kan Zhang 
| Kan Zhang |
-| [HADOOP-6299](https://issues.apache.org/jira/browse/HADOOP-6299) | Use JAAS 
LoginContext for our login |  Major | security | Arun C Murthy | Owen O'Malley |
-| [HADOOP-6230](https://issues.apache.org/jira/browse/HADOOP-6230) | Move 
process tree, and memory calculator classes out of Common into Map/Reduce. |  
Major | util | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli |
-| [HADOOP-6203](https://issues.apache.org/jira/browse/HADOOP-6203) | Improve 
error message when moving to trash fails due to quota issue |  Major | fs | 
Jakob Homan | Boris Shkolnik |
-| [HADOOP-6201](https://issues.apache.org/jira/browse/HADOOP-6201) | 
FileSystem::ListStatus should throw FileNotFoundException |  Major | fs | Jakob 
Homan | Jakob Homan |
-| [HADOOP-5913](https://issues.apache.org/jira/browse/HADOOP-5913) | Allow 
administrators to be able to start and stop queues |  Major | . | rahul k singh 
| rahul k singh |
-| [HADOOP-5879](https://issues.apache.org/jira/browse/HADOOP-5879) | GzipCodec 
should read compression level etc from configuration |  Major | io | Zheng Shao 
| He Yongqiang |
-| [HADOOP-5861](https://issues.apache.org/jira/browse/HADOOP-5861) | s3n files 
are not getting split by default |  Major | fs/s3 | Joydeep Sen Sarma | Tom 
White |
-| [HADOOP-5738](https://issues.apache.org/jira/browse/HADOOP-5738) | Split 
waiting tasks field in JobTracker metrics to individual tasks |  Major | 
metrics | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan |
-| [HADOOP-5679](https://issues.apache.org/jira/browse/HADOOP-5679) | Resolve 
findbugs warnings in core/streaming/pipes/examples |  Major | . | Jothi 
Padmanabhan | Jothi Padmanabhan |
-| [HADOOP-5620](https://issues.apache.org/jira/browse/HADOOP-5620) | discp can 
preserve modification times of files |  Major | . | dhruba borthakur | Rodrigo 
Schmidt |
-| [HADOOP-5485](https://issues.apache.org/jira/browse/HADOOP-5485) | 
Authorisation machanism required for acceesing jobtracker url :- 
jobtracker.com:port/scheduler |  Major | . | Aroop Maliakkal | Vinod Kumar 
Vavilapalli |
-| [HADOOP-5464](https://issues.apache.org/jira/browse/HADOOP-5464) | DFSClient 
does not treat write timeout of 0 properly |  Major | . | Raghu Angadi | Raghu 
Angadi |
-| [HADOOP-5438](https://issues.apache.org/jira/browse/HADOOP-5438) | Merge 
FileSystem.create and FileSystem.append |  Major | fs | He Yongqiang | He 
Yongqiang |
-| [HADOOP-5258](https://issues.apache.org/jira/browse/HADOOP-5258) | Provide 
dfsadmin functionality to report on namenode's view of network topology |  
Major | . | Jakob Homan | Jakob Homan |
-| [HADOOP-5219](https://issues.apache.org/jira/browse/HADOOP-5219) | 
SequenceFile is using mapred property |  Major | io | Sharad Agarwal | Sharad 
Agarwal |
-| [HADOOP-5176](https://issues.apache.org/jira/browse/HADOOP-5176) | TestDFSIO 
reports itself as TestFDSIO |  Trivial | benchmarks | Bryan Duxbury | Ravi 
Phulari |
-| [HADOOP-5094](https://issues.apache.org/jira/browse/HADOOP-5094) | Show dead 
nodes information in dfsadmin -report |  Minor | . | Jim Huang | Jakob Homan |
-| [HADOOP-5022](https://issues.apache.org/jira/browse/HADOOP-5022) | [HOD] 
logcondense should delete all hod logs for a user, including jobtracker logs |  
Blocker | contrib/hod | Hemanth Yamijala | 

[14/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
index 1d032c7..54f1663 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.0.2-alpha/CHANGES.2.0.2-alpha.md
@@ -24,697 +24,691 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8794](https://issues.apache.org/jira/browse/HADOOP-8794) | Modifiy 
bin/hadoop to point to HADOOP\_YARN\_HOME |  Major | . | Vinod Kumar 
Vavilapalli | Vinod Kumar Vavilapalli |
-| [HADOOP-8710](https://issues.apache.org/jira/browse/HADOOP-8710) | Remove 
ability for users to easily run the trash emptier |  Major | fs | Eli Collins | 
Eli Collins |
-| [HADOOP-8689](https://issues.apache.org/jira/browse/HADOOP-8689) | Make 
trash a server side configuration option |  Major | fs | Eli Collins | Eli 
Collins |
-| [HADOOP-8551](https://issues.apache.org/jira/browse/HADOOP-8551) | fs -mkdir 
creates parent directories without the -p option |  Major | fs | Robert Joseph 
Evans | John George |
-| [HADOOP-8458](https://issues.apache.org/jira/browse/HADOOP-8458) | Add 
management hook to AuthenticationHandler to enable delegation token operations 
support |  Major | security | Alejandro Abdelnur | Alejandro Abdelnur |
-| [HADOOP-8388](https://issues.apache.org/jira/browse/HADOOP-8388) | Remove 
unused BlockLocation serialization |  Minor | . | Colin Patrick McCabe | Colin 
Patrick McCabe |
-| [HADOOP-8368](https://issues.apache.org/jira/browse/HADOOP-8368) | Use CMake 
rather than autotools to build native code |  Minor | . | Colin Patrick McCabe 
| Colin Patrick McCabe |
-| [HDFS-3675](https://issues.apache.org/jira/browse/HDFS-3675) | libhdfs: 
follow documented return codes |  Minor | libhdfs | Colin Patrick McCabe | 
Colin Patrick McCabe |
+| [MAPREDUCE-4072](https://issues.apache.org/jira/browse/MAPREDUCE-4072) | 
User set java.library.path seems to overwrite default creating problems native 
lib loading |  Major | mrv2 | Anupam Seth | Anupam Seth |
+| [MAPREDUCE-3812](https://issues.apache.org/jira/browse/MAPREDUCE-3812) | 
Lower default allocation sizes, fix allocation configurations and document them 
|  Major | mrv2, performance | Vinod Kumar Vavilapalli | Harsh J |
+| [HDFS-3318](https://issues.apache.org/jira/browse/HDFS-3318) | Hftp hangs on 
transfers \>2GB |  Blocker | hdfs-client | Daryn Sharp | Daryn Sharp |
+| [HADOOP-8388](https://issues.apache.org/jira/browse/HADOOP-8388) | Remove 
unused BlockLocation serialization |  Minor | . | Colin P. McCabe | Colin P. 
McCabe |
+| [HADOOP-8368](https://issues.apache.org/jira/browse/HADOOP-8368) | Use CMake 
rather than autotools to build native code |  Minor | . | Colin P. McCabe | 
Colin P. McCabe |
 | [HDFS-3522](https://issues.apache.org/jira/browse/HDFS-3522) | If NN is in 
safemode, it should throw SafeModeException when getBlockLocations has zero 
locations |  Major | namenode | Brandon Li | Brandon Li |
+| [HADOOP-8458](https://issues.apache.org/jira/browse/HADOOP-8458) | Add 
management hook to AuthenticationHandler to enable delegation token operations 
support |  Major | security | Alejandro Abdelnur | Alejandro Abdelnur |
+| [MAPREDUCE-4311](https://issues.apache.org/jira/browse/MAPREDUCE-4311) | 
Capacity scheduler.xml does not accept decimal values for capacity and 
maximum-capacity settings |  Major | capacity-sched, mrv2 | Thomas Graves | 
Karthik Kambatla |
 | [HDFS-3446](https://issues.apache.org/jira/browse/HDFS-3446) | 
HostsFileReader silently ignores bad includes/excludes |  Major | namenode | 
Matthew Jacobs | Matthew Jacobs |
-| [HDFS-3318](https://issues.apache.org/jira/browse/HDFS-3318) | Hftp hangs on 
transfers \>2GB |  Blocker | hdfs-client | Daryn Sharp | Daryn Sharp |
-| [HDFS-2727](https://issues.apache.org/jira/browse/HDFS-2727) | libhdfs 
should get the default block size from the server |  Minor | libhdfs | Sho 
Shimauchi | Colin Patrick McCabe |
-| [HDFS-2686](https://issues.apache.org/jira/browse/HDFS-2686) | Remove 
DistributedUpgrade related code |  Major | datanode, namenode | Todd Lipcon | 
Suresh Srinivas |
+| [HDFS-3675](https://issues.apache.org/jira/browse/HDFS-3675) | libhdfs: 
follow documented return codes |  Minor | libhdfs | Colin P. McCabe | Colin P. 
McCabe |
 | [HDFS-2617](https://issues.apache.org/jira/browse/HDFS-2617) | Replaced 
Kerberized SSL for image transfer and fsck with SPNEGO-based solution |  Major 
| security | Jakob Homan | Jakob Homan |
+| 

[39/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
index 72e7d42..04ccabb 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.2.0/CHANGES.0.2.0.md
@@ -20,49 +20,39 @@
 
 ## Release 0.2.0 - 2006-05-05
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-191](https://issues.apache.org/jira/browse/HADOOP-191) | add 
hadoopStreaming to src/contrib |  Major | . | Michel Tourn | Doug Cutting |
-| [HADOOP-189](https://issues.apache.org/jira/browse/HADOOP-189) | Add job jar 
lib, classes, etc. to CLASSPATH when in standalone mode |  Major | . | stack | 
Doug Cutting |
+| [HADOOP-51](https://issues.apache.org/jira/browse/HADOOP-51) | per-file 
replication counts |  Major | . | Doug Cutting | Konstantin Shvachko |
 | [HADOOP-148](https://issues.apache.org/jira/browse/HADOOP-148) | add a 
failure count to task trackers |  Major | . | Owen O'Malley | Owen O'Malley |
 | [HADOOP-132](https://issues.apache.org/jira/browse/HADOOP-132) | An API for 
reporting performance metrics |  Major | . | David Bowen |  |
+| [HADOOP-189](https://issues.apache.org/jira/browse/HADOOP-189) | Add job jar 
lib, classes, etc. to CLASSPATH when in standalone mode |  Major | . | stack | 
Doug Cutting |
 | [HADOOP-65](https://issues.apache.org/jira/browse/HADOOP-65) | add a record 
I/O framework to hadoop |  Minor | io, ipc | Sameer Paranjpye |  |
-| [HADOOP-51](https://issues.apache.org/jira/browse/HADOOP-51) | per-file 
replication counts |  Major | . | Doug Cutting | Konstantin Shvachko |
+| [HADOOP-191](https://issues.apache.org/jira/browse/HADOOP-191) | add 
hadoopStreaming to src/contrib |  Major | . | Michel Tourn | Doug Cutting |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-198](https://issues.apache.org/jira/browse/HADOOP-198) | adding 
owen's examples to exampledriver |  Minor | . | Mahadev konar | Mahadev konar |
-| [HADOOP-178](https://issues.apache.org/jira/browse/HADOOP-178) | piggyback 
block work requests to heartbeats and move block replication/deletion startup 
delay from datanodes to namenode |  Major | . | Hairong Kuang | Hairong Kuang |
-| [HADOOP-177](https://issues.apache.org/jira/browse/HADOOP-177) | improvement 
to browse through the map/reduce tasks |  Minor | . | Mahadev konar | Mahadev 
konar |
-| [HADOOP-173](https://issues.apache.org/jira/browse/HADOOP-173) | optimize 
allocation of tasks w/ local data |  Major | . | Doug Cutting | Doug Cutting |
-| [HADOOP-170](https://issues.apache.org/jira/browse/HADOOP-170) | 
setReplication and related bug fixes |  Major | fs | Konstantin Shvachko | 
Konstantin Shvachko |
-| [HADOOP-167](https://issues.apache.org/jira/browse/HADOOP-167) | reducing 
the number of Configuration & JobConf objects created |  Major | conf | Owen 
O'Malley | Owen O'Malley |
-| [HADOOP-166](https://issues.apache.org/jira/browse/HADOOP-166) | IPC is 
unable to invoke methods that use interfaces as parameter |  Minor | ipc | 
Stefan Groschupf | Doug Cutting |
-| [HADOOP-150](https://issues.apache.org/jira/browse/HADOOP-150) | tip and 
task names should reflect the job name |  Major | . | Owen O'Malley | Owen 
O'Malley |
-| [HADOOP-144](https://issues.apache.org/jira/browse/HADOOP-144) | the dfs 
client id isn't relatable to the map/reduce task ids |  Major | . | Owen 
O'Malley | Owen O'Malley |
-| [HADOOP-142](https://issues.apache.org/jira/browse/HADOOP-142) | failed 
tasks should be rescheduled on different hosts after other jobs |  Major | . | 
Owen O'Malley | Owen O'Malley |
-| [HADOOP-138](https://issues.apache.org/jira/browse/HADOOP-138) | stop all 
tasks |  Trivial | . | Stefan Groschupf | Doug Cutting |
+| [HADOOP-116](https://issues.apache.org/jira/browse/HADOOP-116) | cleaning up 
/tmp/hadoop/mapred/system |  Major | . | raghavendra prabhu | Doug Cutting |
 | [HADOOP-131](https://issues.apache.org/jira/browse/HADOOP-131) | Separate 
start/stop-dfs.sh and start/stop-mapred.sh scripts |  Minor | . | Chris A. 
Mattmann | Doug Cutting |
 | [HADOOP-129](https://issues.apache.org/jira/browse/HADOOP-129) | FileSystem 
should not name files with java.io.File | 

[07/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
index e47ecd8..cf4d6c9 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.2.0/CHANGES.2.2.0.md
@@ -24,91 +24,79 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10020](https://issues.apache.org/jira/browse/HADOOP-10020) | disable 
symlinks temporarily |  Blocker | fs | Colin Patrick McCabe | Sanjay Radia |
 | [YARN-1229](https://issues.apache.org/jira/browse/YARN-1229) | Define 
constraints on Auxiliary Service names. Change ShuffleHandler service name from 
mapreduce.shuffle to mapreduce\_shuffle. |  Blocker | nodemanager | Tassapol 
Athiapinya | Xuan Gong |
 | [YARN-1228](https://issues.apache.org/jira/browse/YARN-1228) | Clean up Fair 
Scheduler configuration loading |  Major | scheduler | Sandy Ryza | Sandy Ryza |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-10020](https://issues.apache.org/jira/browse/HADOOP-10020) | disable 
symlinks temporarily |  Blocker | fs | Colin P. McCabe | Sanjay Radia |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
+| [HDFS-4817](https://issues.apache.org/jira/browse/HDFS-4817) | make HDFS 
advisory caching configurable on a per-file basis |  Minor | hdfs-client | 
Colin P. McCabe | Colin P. McCabe |
 | [HADOOP-9758](https://issues.apache.org/jira/browse/HADOOP-9758) | Provide 
configuration option for FileSystem/FileContext symlink resolution |  Major | . 
| Andrew Wang | Andrew Wang |
-| [HADOOP-8315](https://issues.apache.org/jira/browse/HADOOP-8315) | Support 
SASL-authenticated ZooKeeper in ActiveStandbyElector |  Major | auto-failover, 
ha | Todd Lipcon | Todd Lipcon |
-| [HDFS-5308](https://issues.apache.org/jira/browse/HDFS-5308) | Replace 
HttpConfig#getSchemePrefix with implicit schemes in HDFS JSP |  Major | . | 
Haohui Mai | Haohui Mai |
-| [HDFS-5256](https://issues.apache.org/jira/browse/HDFS-5256) | Use guava 
LoadingCache to implement DFSClientCache |  Major | nfs | Haohui Mai | Haohui 
Mai |
 | [HDFS-5139](https://issues.apache.org/jira/browse/HDFS-5139) | Remove 
redundant -R option from setrep |  Major | tools | Arpit Agarwal | Arpit 
Agarwal |
-| [HDFS-4817](https://issues.apache.org/jira/browse/HDFS-4817) | make HDFS 
advisory caching configurable on a per-file basis |  Minor | hdfs-client | 
Colin Patrick McCabe | Colin Patrick McCabe |
 | [YARN-1246](https://issues.apache.org/jira/browse/YARN-1246) | Log 
application status in the rm log when app is done running |  Minor | . | Arpit 
Gupta | Arpit Gupta |
+| [HDFS-5256](https://issues.apache.org/jira/browse/HDFS-5256) | Use guava 
LoadingCache to implement DFSClientCache |  Major | nfs | Haohui Mai | Haohui 
Mai |
+| [HADOOP-8315](https://issues.apache.org/jira/browse/HADOOP-8315) | Support 
SASL-authenticated ZooKeeper in ActiveStandbyElector |  Major | auto-failover, 
ha | Todd Lipcon | Todd Lipcon |
 | [YARN-1213](https://issues.apache.org/jira/browse/YARN-1213) | Restore 
config to ban submitting to undeclared pools in the Fair Scheduler |  Major | 
scheduler | Sandy Ryza | Sandy Ryza |
+| [HDFS-5308](https://issues.apache.org/jira/browse/HDFS-5308) | Replace 
HttpConfig#getSchemePrefix with implicit schemes in HDFS JSP |  Major | . | 
Haohui Mai | Haohui Mai |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10012](https://issues.apache.org/jira/browse/HADOOP-10012) | Secure 
Oozie jobs fail with delegation token renewal exception in Namenode HA setup |  
Blocker | ha | Arpit Gupta | Suresh Srinivas |
-| [HADOOP-10003](https://issues.apache.org/jira/browse/HADOOP-10003) | 
HarFileSystem.listLocatedStatus() fails |  Major | fs | Jason Dere |  |
-| [HADOOP-9976](https://issues.apache.org/jira/browse/HADOOP-9976) | Different 
versions of avro and avro-maven-plugin |  Major | . | Karthik Kambatla | 
Karthik Kambatla |
+| [HDFS-5031](https://issues.apache.org/jira/browse/HDFS-5031) | BlockScanner 
scans the block multiple times and on restart scans everything |  Blocker | 
datanode | Vinayakumar B | Vinayakumar B |
+| 

[36/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
index 2ebfdc0..5243c7e 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/RELEASENOTES.0.20.2.md
@@ -23,23 +23,16 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-6498](https://issues.apache.org/jira/browse/HADOOP-6498) | *Blocker* 
| **IPC client  bug may cause rpc call hang**
-
-Correct synchronization error in IPC where handler thread could hang if 
request reader got an error.
-
-

-
-* [HADOOP-6460](https://issues.apache.org/jira/browse/HADOOP-6460) | *Blocker* 
| **Namenode runs of out of memory due to memory leak in ipc Server**
+* [MAPREDUCE-826](https://issues.apache.org/jira/browse/MAPREDUCE-826) | 
*Trivial* | **harchive doesn't use ToolRunner / harchive returns 0 even if the 
job fails with exception**
 
-If an IPC server response buffer has grown to than 1MB, it is replaced by a 
smaller buffer to free up the Java heap that was used. This will improve the 
longevity of the name service.
+Use ToolRunner for archives job and return non zero error code on failure.
 
 
 ---
 
-* [HADOOP-6428](https://issues.apache.org/jira/browse/HADOOP-6428) | *Major* | 
**HttpServer sleeps with negative values**
+* [MAPREDUCE-112](https://issues.apache.org/jira/browse/MAPREDUCE-112) | 
*Blocker* | **Reduce Input Records and Reduce Output Records counters are not 
being set when using the new Mapreduce reducer API**
 
-Corrected arithmetic error that made sleep times less than zero.
+Updates of counters for reduce input and output records were added in the new 
API so they are available for jobs using the new API.
 
 
 ---
@@ -51,23 +44,23 @@ Allow a general mechanism to disable the cache on a per 
filesystem basis by usin
 
 ---
 
-* [HADOOP-6097](https://issues.apache.org/jira/browse/HADOOP-6097) | *Major* | 
**Multiple bugs w/ Hadoop archives**
+* [MAPREDUCE-979](https://issues.apache.org/jira/browse/MAPREDUCE-979) | 
*Blocker* | **JobConf.getMemoryFor{Map\|Reduce}Task doesn't fallback to newer 
config knobs when mapred.taskmaxvmem is set to DISABLED\_MEMORY\_LIMIT of -1**
 
-Bugs fixed for Hadoop archives: character escaping in paths, LineReader and 
file system caching.
+Added support to fallback to new task memory configuration when deprecated 
memory configuration values are set to disabled.
 
 
 ---
 
-* [HDFS-793](https://issues.apache.org/jira/browse/HDFS-793) | *Blocker* | 
**DataNode should first receive the whole packet ack message before it 
constructs and sends its own ack message for the packet**
+* [HDFS-677](https://issues.apache.org/jira/browse/HDFS-677) | *Blocker* | 
**Rename failure due to quota results in deletion of src directory**
 
-**WARNING: No release note provided for this incompatible change.**
+Rename properly considers the case where both source and destination are over 
quota; operation will fail with error indication.
 
 
 ---
 
-* [HDFS-781](https://issues.apache.org/jira/browse/HDFS-781) | *Blocker* | 
**Metrics PendingDeletionBlocks is not decremented**
+* [HADOOP-6097](https://issues.apache.org/jira/browse/HADOOP-6097) | *Major* | 
**Multiple bugs w/ Hadoop archives**
 
-Correct PendingDeletionBlocks metric to properly decrement counts.
+Bugs fixed for Hadoop archives: character escaping in paths, LineReader and 
file system caching.
 
 
 ---
@@ -79,9 +72,9 @@ Corrected an error when checking quota policy that resulted 
in a failure to read
 
 ---
 
-* [HDFS-677](https://issues.apache.org/jira/browse/HDFS-677) | *Blocker* | 
**Rename failure due to quota results in deletion of src directory**
+* [MAPREDUCE-1068](https://issues.apache.org/jira/browse/MAPREDUCE-1068) | 
*Major* | **In hadoop-0.20.0 streaming job do not throw proper verbose error 
message if file is not present**
 
-Rename properly considers the case where both source and destination are over 
quota; operation will fail with error indication.
+Fix streaming job to show proper message if file is not present, for -file 
option.
 
 
 ---
@@ -93,44 +86,44 @@ Memory leak in function hdfsFreeFileInfo in libhdfs. This 
bug affects fuse-dfs s
 
 ---
 
-* [MAPREDUCE-1182](https://issues.apache.org/jira/browse/MAPREDUCE-1182) | 
*Blocker* | **Reducers fail with OutOfMemoryError while copying Map outputs**
+* [MAPREDUCE-1147](https://issues.apache.org/jira/browse/MAPREDUCE-1147) | 
*Blocker* | **Map output records counter missing for map-only jobs in new API**
 
-Modifies shuffle related memory parameters to use 

[06/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
index 71d6e77..5bcfe1d 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/CHANGES.2.3.0.md
@@ -24,648 +24,642 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HDFS-4997](https://issues.apache.org/jira/browse/HDFS-4997) | libhdfs 
doesn't return correct error codes in most cases |  Major | libhdfs | Colin 
Patrick McCabe | Colin Patrick McCabe |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HDFS-4997](https://issues.apache.org/jira/browse/HDFS-4997) | libhdfs 
doesn't return correct error codes in most cases |  Major | libhdfs | Colin P. 
McCabe | Colin P. McCabe |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10047](https://issues.apache.org/jira/browse/HADOOP-10047) | Add a 
directbuffer Decompressor API to hadoop |  Major | io | Gopal V | Gopal V |
-| [HADOOP-9848](https://issues.apache.org/jira/browse/HADOOP-9848) | Create a 
MiniKDC for use with security testing |  Major | security, test | Wei Yan | Wei 
Yan |
-| [HADOOP-9618](https://issues.apache.org/jira/browse/HADOOP-9618) | Add 
thread which detects JVM pauses |  Major | util | Todd Lipcon | Todd Lipcon |
 | [HADOOP-9432](https://issues.apache.org/jira/browse/HADOOP-9432) | Add 
support for markdown .md files in site documentation |  Minor | build, 
documentation | Steve Loughran | Steve Loughran |
+| [HADOOP-9618](https://issues.apache.org/jira/browse/HADOOP-9618) | Add 
thread which detects JVM pauses |  Major | util | Todd Lipcon | Todd Lipcon |
+| [MAPREDUCE-5265](https://issues.apache.org/jira/browse/MAPREDUCE-5265) | 
History server admin service to refresh user and superuser group mappings |  
Major | jobhistoryserver | Jason Lowe | Ashwin Shankar |
+| [MAPREDUCE-5266](https://issues.apache.org/jira/browse/MAPREDUCE-5266) | 
Ability to refresh retention settings on history server |  Major | 
jobhistoryserver | Jason Lowe | Ashwin Shankar |
+| [HADOOP-9848](https://issues.apache.org/jira/browse/HADOOP-9848) | Create a 
MiniKDC for use with security testing |  Major | security, test | Wei Yan | Wei 
Yan |
 | [HADOOP-8545](https://issues.apache.org/jira/browse/HADOOP-8545) | 
Filesystem Implementation for OpenStack Swift |  Major | fs | Tim Miller | 
Dmitry Mezhensky |
-| [HDFS-5703](https://issues.apache.org/jira/browse/HDFS-5703) | Add support 
for HTTPS and swebhdfs to HttpFS |  Major | webhdfs | Alejandro Abdelnur | 
Alejandro Abdelnur |
-| [HDFS-5260](https://issues.apache.org/jira/browse/HDFS-5260) | Merge 
zero-copy memory-mapped HDFS client reads to trunk and branch-2. |  Major | 
hdfs-client, libhdfs | Chris Nauroth | Chris Nauroth |
-| [HDFS-4949](https://issues.apache.org/jira/browse/HDFS-4949) | Centralized 
cache management in HDFS |  Major | datanode, namenode | Andrew Wang | Andrew 
Wang |
-| [HDFS-2832](https://issues.apache.org/jira/browse/HDFS-2832) | Enable 
support for heterogeneous storages in HDFS - DN as a collection of storages |  
Major | datanode, namenode | Suresh Srinivas | Arpit Agarwal |
 | [MAPREDUCE-5332](https://issues.apache.org/jira/browse/MAPREDUCE-5332) | 
Support token-preserving restart of history server |  Major | jobhistoryserver 
| Jason Lowe | Jason Lowe |
-| [MAPREDUCE-5266](https://issues.apache.org/jira/browse/MAPREDUCE-5266) | 
Ability to refresh retention settings on history server |  Major | 
jobhistoryserver | Jason Lowe | Ashwin Shankar |
-| [MAPREDUCE-5265](https://issues.apache.org/jira/browse/MAPREDUCE-5265) | 
History server admin service to refresh user and superuser group mappings |  
Major | jobhistoryserver | Jason Lowe | Ashwin Shankar |
+| [YARN-1021](https://issues.apache.org/jira/browse/YARN-1021) | Yarn 
Scheduler Load Simulator |  Major | scheduler | Wei Yan | Wei Yan |
+| [HDFS-5260](https://issues.apache.org/jira/browse/HDFS-5260) | Merge 
zero-copy memory-mapped HDFS client reads to trunk and branch-2. |  Major | 
hdfs-client, libhdfs | Chris Nauroth | Chris Nauroth |
+| [YARN-1253](https://issues.apache.org/jira/browse/YARN-1253) | Changes to 
LinuxContainerExecutor to run containers as a single dedicated user in 
non-secure mode |  Blocker | nodemanager | Alejandro Abdelnur | Roman 
Shaposhnik |
 | [MAPREDUCE-1176](https://issues.apache.org/jira/browse/MAPREDUCE-1176) | 

[21/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
index 9bf1d66..c9c200c 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.1/CHANGES.0.3.1.md
@@ -20,56 +20,16 @@
 
 ## Release 0.3.1 - 2006-06-05
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPROVEMENTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-276](https://issues.apache.org/jira/browse/HADOOP-276) | No 
appenders could be found for logger |  Major | . | Owen O'Malley | Owen 
O'Malley |
-| [HADOOP-274](https://issues.apache.org/jira/browse/HADOOP-274) | The new 
logging framework puts application logs into server directory in hadoop.log |  
Major | . | Owen O'Malley | Owen O'Malley |
 | [HADOOP-272](https://issues.apache.org/jira/browse/HADOOP-272) | bin/hadoop 
dfs -rm \ crashes in log4j code |  Major | . | Owen O'Malley | Owen 
O'Malley |
+| [HADOOP-274](https://issues.apache.org/jira/browse/HADOOP-274) | The new 
logging framework puts application logs into server directory in hadoop.log |  
Major | . | Owen O'Malley | Owen O'Malley |
 | [HADOOP-262](https://issues.apache.org/jira/browse/HADOOP-262) | the reduce 
tasks do not report progress if they the map output locations is empty. |  
Major | . | Mahadev konar | Mahadev konar |
 | [HADOOP-245](https://issues.apache.org/jira/browse/HADOOP-245) | record io 
translator doesn't strip path names |  Major | record | Owen O'Malley | Milind 
Bhandarkar |
-
-
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-276](https://issues.apache.org/jira/browse/HADOOP-276) | No 
appenders could be found for logger |  Major | . | Owen O'Malley | Owen 
O'Malley |
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
index dd30d8c..cb69295 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.3.2/CHANGES.0.3.2.md
@@ -20,16 +20,6 @@
 
 ## Release 0.3.2 - 2006-06-09
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
@@ -51,33 +41,15 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-294](https://issues.apache.org/jira/browse/HADOOP-294) | dfs client 
error retries aren't happening (already being created and not replicated yet) | 
 Major | . | Owen O'Malley | Owen O'Malley |
-| [HADOOP-292](https://issues.apache.org/jira/browse/HADOOP-292) | hadoop dfs 
commands should not output superfluous data to stdout |  Minor | . | Yoram 
Arnon | Owen O'Malley |
-| [HADOOP-289](https://issues.apache.org/jira/browse/HADOOP-289) | Datanodes 
need to catch SocketTimeoutException and UnregisteredDatanodeException |  Major 
| . | Konstantin Shvachko | Konstantin Shvachko |
-| [HADOOP-285](https://issues.apache.org/jira/browse/HADOOP-285) | Data nodes 
cannot re-join the cluster once connection is lost |  Blocker | . | Konstantin 
Shvachko | Hairong Kuang |
-| [HADOOP-284](https://issues.apache.org/jira/browse/HADOOP-284) | dfs timeout 

[20/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
index 2b0e48c..ea968d6 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.9.0/CHANGES.0.9.0.md
@@ -20,16 +20,6 @@
 
 ## Release 0.9.0 - 2006-12-01
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
@@ -43,61 +33,61 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-725](https://issues.apache.org/jira/browse/HADOOP-725) | 
chooseTargets method in FSNamesystem is very inefficient |  Major | . | Milind 
Bhandarkar | Milind Bhandarkar |
-| [HADOOP-721](https://issues.apache.org/jira/browse/HADOOP-721) | jobconf.jsp 
shouldn't find the jobconf.xsl via http |  Major | . | Owen O'Malley | Arun C 
Murthy |
-| [HADOOP-689](https://issues.apache.org/jira/browse/HADOOP-689) | hadoop 
should provide a common way to wrap instances with different types into one 
type |  Major | io | Feng Jiang |  |
-| [HADOOP-688](https://issues.apache.org/jira/browse/HADOOP-688) | move dfs 
administrative interfaces to a separate command |  Major | . | dhruba borthakur 
| dhruba borthakur |
-| [HADOOP-677](https://issues.apache.org/jira/browse/HADOOP-677) | RPC should 
send a fixed header and version at the start of connection |  Major | ipc | 
Owen O'Malley | Owen O'Malley |
-| [HADOOP-668](https://issues.apache.org/jira/browse/HADOOP-668) | improvement 
to DFS browsing WI |  Minor | . | Yoram Arnon | Hairong Kuang |
-| [HADOOP-661](https://issues.apache.org/jira/browse/HADOOP-661) | JobConf for 
a job should be viewable from the web/ui |  Major | . | Owen O'Malley | Arun C 
Murthy |
 | [HADOOP-655](https://issues.apache.org/jira/browse/HADOOP-655) | remove 
deprecations |  Minor | . | Doug Cutting | Doug Cutting |
-| [HADOOP-613](https://issues.apache.org/jira/browse/HADOOP-613) | The final 
merge on the reduces should feed the reduce directly |  Major | . | Owen 
O'Malley | Devaraj Das |
 | [HADOOP-565](https://issues.apache.org/jira/browse/HADOOP-565) | Upgrade 
Jetty to 6.x |  Major | . | Owen O'Malley | Sanjay Dahiya |
-| [HADOOP-538](https://issues.apache.org/jira/browse/HADOOP-538) | Implement a 
nio's 'direct buffer' based wrapper over zlib to improve performance of 
java.util.zip.{De\|In}flater as a 'custom codec' |  Major | io | Arun C Murthy 
| Arun C Murthy |
+| [HADOOP-688](https://issues.apache.org/jira/browse/HADOOP-688) | move dfs 
administrative interfaces to a separate command |  Major | . | dhruba borthakur 
| dhruba borthakur |
+| [HADOOP-613](https://issues.apache.org/jira/browse/HADOOP-613) | The final 
merge on the reduces should feed the reduce directly |  Major | . | Owen 
O'Malley | Devaraj Das |
+| [HADOOP-661](https://issues.apache.org/jira/browse/HADOOP-661) | JobConf for 
a job should be viewable from the web/ui |  Major | . | Owen O'Malley | Arun C 
Murthy |
 | [HADOOP-489](https://issues.apache.org/jira/browse/HADOOP-489) | Seperating 
user logs from system logs in map reduce |  Minor | . | Mahadev konar | Arun C 
Murthy |
+| [HADOOP-668](https://issues.apache.org/jira/browse/HADOOP-668) | improvement 
to DFS browsing WI |  Minor | . | Yoram Arnon | Hairong Kuang |
+| [HADOOP-538](https://issues.apache.org/jira/browse/HADOOP-538) | Implement a 
nio's 'direct buffer' based wrapper over zlib to improve performance of 
java.util.zip.{De\|In}flater as a 'custom codec' |  Major | io | Arun C Murthy 
| Arun C Murthy |
+| [HADOOP-721](https://issues.apache.org/jira/browse/HADOOP-721) | jobconf.jsp 
shouldn't find the jobconf.xsl via http |  Major | . | Owen O'Malley | Arun C 
Murthy |
+| [HADOOP-725](https://issues.apache.org/jira/browse/HADOOP-725) | 
chooseTargets method in FSNamesystem is very inefficient |  Major | . | Milind 
Bhandarkar | Milind Bhandarkar |
+| [HADOOP-677](https://issues.apache.org/jira/browse/HADOOP-677) | RPC should 
send a fixed header and version at the start of connection |  Major | ipc | 
Owen O'Malley | Owen O'Malley |
 | [HADOOP-76](https://issues.apache.org/jira/browse/HADOOP-76) | Implement 
speculative re-execution of reduces |  Minor | . | Doug Cutting | Sanjay Dahiya 
|
+| [HADOOP-689](https://issues.apache.org/jira/browse/HADOOP-689) | hadoop 
should provide a common way to wrap instances with different types into one 
type |  

[18/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
index ceb86d0..5420e8e 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/1.2.0/CHANGES.1.2.0.md
@@ -25,244 +25,238 @@
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
 | [HADOOP-8164](https://issues.apache.org/jira/browse/HADOOP-8164) | Handle 
paths using back slash as path separator for windows only |  Major | fs | 
Suresh Srinivas | Daryn Sharp |
-| [HDFS-4350](https://issues.apache.org/jira/browse/HDFS-4350) | Make enabling 
of stale marking on read and write paths independent |  Major | . | Andrew Wang 
| Andrew Wang |
+| [MAPREDUCE-4629](https://issues.apache.org/jira/browse/MAPREDUCE-4629) | 
Remove JobHistory.DEBUG\_MODE |  Major | . | Karthik Kambatla | Karthik 
Kambatla |
 | [HDFS-4122](https://issues.apache.org/jira/browse/HDFS-4122) | Cleanup HDFS 
logs and reduce the size of logged messages |  Major | datanode, hdfs-client, 
namenode | Suresh Srinivas | Suresh Srinivas |
+| [HDFS-4350](https://issues.apache.org/jira/browse/HDFS-4350) | Make enabling 
of stale marking on read and write paths independent |  Major | . | Andrew Wang 
| Andrew Wang |
 | [MAPREDUCE-4737](https://issues.apache.org/jira/browse/MAPREDUCE-4737) |  
Hadoop does not close output file / does not call Mapper.cleanup if exception 
in map |  Major | . | Daniel Dai | Arun C Murthy |
-| [MAPREDUCE-4629](https://issues.apache.org/jira/browse/MAPREDUCE-4629) | 
Remove JobHistory.DEBUG\_MODE |  Major | . | Karthik Kambatla | Karthik 
Kambatla |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9090](https://issues.apache.org/jira/browse/HADOOP-9090) | Support 
on-demand publish of metrics |  Minor | metrics | Mostafa Elhemali | Mostafa 
Elhemali |
+| [MAPREDUCE-461](https://issues.apache.org/jira/browse/MAPREDUCE-461) | 
Enable ServicePlugins for the JobTracker |  Minor | . | Fredrik Hedberg | 
Fredrik Hedberg |
+| [HDFS-3515](https://issues.apache.org/jira/browse/HDFS-3515) | Port 
HDFS-1457 to branch-1 |  Major | namenode | Eli Collins | Eli Collins |
+| [HADOOP-8023](https://issues.apache.org/jira/browse/HADOOP-8023) | Add 
unset() method to Configuration |  Critical | conf | Alejandro Abdelnur | 
Alejandro Abdelnur |
+| [MAPREDUCE-4355](https://issues.apache.org/jira/browse/MAPREDUCE-4355) | Add 
RunningJob.getJobStatus() |  Major | mrv1, mrv2 | Karthik Kambatla | Karthik 
Kambatla |
+| [MAPREDUCE-987](https://issues.apache.org/jira/browse/MAPREDUCE-987) | 
Exposing MiniDFS and MiniMR clusters as a single process command-line |  Minor 
| build, test | Philip Zeyliger | Ahmed Radwan |
+| [MAPREDUCE-3678](https://issues.apache.org/jira/browse/MAPREDUCE-3678) | The 
Map tasks logs should have the value of input split it processed |  Major | 
mrv1, mrv2 | Bejoy KS | Harsh J |
 | [HADOOP-8988](https://issues.apache.org/jira/browse/HADOOP-8988) | Backport 
HADOOP-8343 to branch-1 |  Major | conf | Jing Zhao | Jing Zhao |
 | [HADOOP-8820](https://issues.apache.org/jira/browse/HADOOP-8820) | Backport 
HADOOP-8469 and HADOOP-8470: add "NodeGroup" layer in new NetworkTopology (also 
known as NetworkTopologyWithNodeGroup) |  Major | net | Junping Du | Junping Du 
|
-| [HADOOP-8023](https://issues.apache.org/jira/browse/HADOOP-8023) | Add 
unset() method to Configuration |  Critical | conf | Alejandro Abdelnur | 
Alejandro Abdelnur |
-| [HDFS-4776](https://issues.apache.org/jira/browse/HDFS-4776) | Backport 
SecondaryNameNode web ui to branch-1 |  Minor | namenode | Tsz Wo Nicholas Sze 
| Tsz Wo Nicholas Sze |
-| [HDFS-4774](https://issues.apache.org/jira/browse/HDFS-4774) | Backport 
HDFS-4525 'Provide an API for knowing whether file is closed or not' to 
branch-1 |  Major | hdfs-client, namenode | Ted Yu | Ted Yu |
-| [HDFS-4597](https://issues.apache.org/jira/browse/HDFS-4597) | Backport 
WebHDFS concat to branch-1 |  Major | webhdfs | Tsz Wo Nicholas Sze | Tsz Wo 
Nicholas Sze |
+| [HDFS-3941](https://issues.apache.org/jira/browse/HDFS-3941) | Backport 
HDFS-3498 and HDFS3601: update replica placement policy for new added 
"NodeGroup" layer topology |  Major | namenode | Junping Du | Junping Du |
 | [HDFS-4219](https://issues.apache.org/jira/browse/HDFS-4219) | Port slive to 
branch-1 |  Major | . | Arpit Gupta | Arpit 

[03/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
index a86f1e0..ea93496 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/RELEASENOTES.2.4.0.md
@@ -23,102 +23,102 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-10295](https://issues.apache.org/jira/browse/HADOOP-10295) | *Major* 
| **Allow distcp to automatically identify the checksum type of source files 
and use it for the target**
+* [HDFS-5790](https://issues.apache.org/jira/browse/HDFS-5790) | *Major* | 
**LeaseManager.findPath is very slow when many leases need recovery**
 
-Add option for distcp to preserve the checksum type of the source files. Users 
can use "-pc" as distcp command option to preserve the checksum type.
+Committed to branch-2 and trunk.
 
 
 ---
 
-* [HADOOP-10221](https://issues.apache.org/jira/browse/HADOOP-10221) | *Major* 
| **Add a plugin to specify SaslProperties for RPC protocol based on connection 
properties**
-
-SaslPropertiesResolver  or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via 
"hadoop.security.saslproperties.resolver.class" configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the  connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.
+* [HADOOP-10295](https://issues.apache.org/jira/browse/HADOOP-10295) | *Major* 
| **Allow distcp to automatically identify the checksum type of source files 
and use it for the target**
 
-Note that this change, effectively removes SaslRpcServer.SASL\_PROPS which was 
a public field. Any use of this variable  should be replaced with the following 
code:
-SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
+Map\<String, String\> sasl\_props = saslPropsResolver.getDefaultProperties();
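Written out as a self-contained, illustrative example, the replacement snippet quoted in the HADOOP-10221 note above looks roughly like this (the wrapper class is assumed for illustration):

```java
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SaslPropertiesResolver;

public class SaslPropsExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Replaces reads of the removed public field SaslRpcServer.SASL_PROPS.
    SaslPropertiesResolver saslPropsResolver = SaslPropertiesResolver.getInstance(conf);
    Map<String, String> saslProps = saslPropsResolver.getDefaultProperties();
    System.out.println("Default SASL properties: " + saslProps);
  }
}
```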
+Add option for distcp to preserve the checksum type of the source files. Users 
can use "-pc" as distcp command option to preserve the checksum type.
 
 
 ---
 
-* [HADOOP-10211](https://issues.apache.org/jira/browse/HADOOP-10211) | *Major* 
| **Enable RPC protocol to negotiate SASL-QOP values between clients and 
servers**
+* [HDFS-5804](https://issues.apache.org/jira/browse/HDFS-5804) | *Major* | 
**HDFS NFS Gateway fails to mount and proxy when using Kerberos**
 
-The hadoop.rpc.protection configuration property previously supported 
specifying a single value: one of authentication, integrity or privacy.  An 
unrecognized value was silently assumed to mean authentication.  This 
configuration property now accepts a comma-separated list of any of the 3 
values, and unrecognized values are rejected with an error. Existing 
configurations containing an invalid value must be corrected. If the property 
is empty or not specified, authentication is assumed.
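As a hedged illustration of the HADOOP-10211 change above, a client could now list several QOP levels at once; the property name and the three values come from the note, while the surrounding class is only a sketch:

```java
import org.apache.hadoop.conf.Configuration;

public class RpcProtectionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A comma-separated list is now accepted; unrecognized values are rejected
    // instead of silently falling back to "authentication".
    conf.set("hadoop.rpc.protection", "authentication,integrity,privacy");
    System.out.println("hadoop.rpc.protection = " + conf.get("hadoop.rpc.protection"));
  }
}
```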
+Fixes NFS on Kerberized cluster.
 
 
 ---
 
-* [HADOOP-8691](https://issues.apache.org/jira/browse/HADOOP-8691) | *Minor* | 
**FsShell can print "Found xxx items" unnecessarily often**
+* [HDFS-5698](https://issues.apache.org/jira/browse/HDFS-5698) | *Major* | 
**Use protobuf to serialize / deserialize FSImage**
 
-The `ls` command only prints "Found foo items" once when listing the 
directories recursively.
+Use protobuf to serialize/deserialize the FSImage.
 
 
 ---
 
-* [HDFS-6102](https://issues.apache.org/jira/browse/HDFS-6102) | *Blocker* | 
**Lower the default maximum items per directory to fix PB fsimage loading**
+* [HDFS-4370](https://issues.apache.org/jira/browse/HDFS-4370) | *Major* | 
**Fix typo Blanacer in DataNode**
 
-**WARNING: No release note provided for this incompatible change.**
+I just committed this. Thank you Chu.
 
 
 ---
 
-* [HDFS-6055](https://issues.apache.org/jira/browse/HDFS-6055) | *Major* | 
**Change default configuration to limit file name length in HDFS**
+* [HDFS-5776](https://issues.apache.org/jira/browse/HDFS-5776) | *Major* | 
**Support 'hedged' reads in DFSClient**
 
-The default configuration of HDFS now sets 
dfs.namenode.fs-limits.max-component-length to 255 for improved 
interoperability with other file system implementations.  This limits each 
component of a file system path to a maximum of 255 bytes in UTF-8 encoding.  
Attempts to create new files that violate this rule will fail with an error.  
Existing files that violate the rule are not effected.  Previously, 

[46/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
index 3eac7ed..8e45328 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.16.4/CHANGES.0.16.4.md
@@ -20,55 +20,15 @@
 
 ## Release 0.16.4 - 2008-05-05
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPROVEMENTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-3304](https://issues.apache.org/jira/browse/HADOOP-3304) | [HOD] 
logcondense fails if DFS has files that are not log files, but match a certain 
pattern |  Blocker | contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
-| [HADOOP-3294](https://issues.apache.org/jira/browse/HADOOP-3294) | distcp 
leaves empty blocks afte successful execution |  Blocker | util | Christian 
Kunz | Tsz Wo Nicholas Sze |
-| [HADOOP-3186](https://issues.apache.org/jira/browse/HADOOP-3186) | Incorrect 
permission checking on  mv |  Blocker | . | Koji Noguchi | Tsz Wo Nicholas Sze |
 | [HADOOP-3138](https://issues.apache.org/jira/browse/HADOOP-3138) | distcp 
fail copying to /user/\/\ (with permission on) |  
Blocker | . | Koji Noguchi | Raghu Angadi |
-
-
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [HADOOP-3186](https://issues.apache.org/jira/browse/HADOOP-3186) | Incorrect 
permission checking on  mv |  Blocker | . | Koji Noguchi | Tsz Wo Nicholas Sze |
+| [HADOOP-3294](https://issues.apache.org/jira/browse/HADOOP-3294) | distcp 
leaves empty blocks afte successful execution |  Blocker | util | Christian 
Kunz | Tsz Wo Nicholas Sze |
+| [HADOOP-3304](https://issues.apache.org/jira/browse/HADOOP-3304) | [HOD] 
logcondense fails if DFS has files that are not log files, but match a certain 
pattern |  Blocker | contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
index bbf3d23..f88162a 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
@@ -24,242 +24,230 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-3280](https://issues.apache.org/jira/browse/HADOOP-3280) | virtual 
address space limits break streaming apps |  Blocker | . | Rick Cox | Arun C 
Murthy |
-| [HADOOP-3266](https://issues.apache.org/jira/browse/HADOOP-3266) | Remove 
HOD changes from CHANGES.txt, as they are now inside src/contrib/hod |  Major | 
contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
-| [HADOOP-3239](https://issues.apache.org/jira/browse/HADOOP-3239) | exists() 
calls logs FileNotFoundException in namenode log |  Major | . | Lohit 
Vijayarenu | Lohit Vijayarenu |
-| [HADOOP-3137](https://issues.apache.org/jira/browse/HADOOP-3137) | [HOD] 
Update hod version number |  Major | contrib/hod | Hemanth Yamijala | Hemanth 
Yamijala |
-| [HADOOP-3091](https://issues.apache.org/jira/browse/HADOOP-3091) | hadoop 
dfs -put should support multiple src |  Major | . | Lohit Vijayarenu | Lohit 
Vijayarenu |
-| [HADOOP-3060](https://issues.apache.org/jira/browse/HADOOP-3060) | 
MiniMRCluster is ignoring parameter taskTrackerFirst |  Major | . | Amareshwari 
Sriramadasu | Amareshwari Sriramadasu |
+| 

[30/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
index 41ffd77..1678634 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/RELEASENOTES.0.22.0.md
@@ -23,609 +23,609 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-7302](https://issues.apache.org/jira/browse/HADOOP-7302) | *Major* | 
**webinterface.private.actions should not be in common**
+* [MAPREDUCE-478](https://issues.apache.org/jira/browse/MAPREDUCE-478) | 
*Minor* | **separate jvm param for mapper and reducer**
 
-Option webinterface.private.actions has been renamed to 
mapreduce.jobtracker.webinterface.trusted and should be specified in 
mapred-site.xml instead of core-site.xml
+Allow map and reduce jvm parameters, environment variables and ulimit to be 
set separately.
+
+Configuration changes:
+  add mapred.map.child.java.opts
+  add mapred.reduce.child.java.opts
+  add mapred.map.child.env
+  add mapred.reduce.child.ulimit
+  add mapred.map.child.env
+  add mapred.reduce.child.ulimit
+  deprecated mapred.child.java.opts
+  deprecated mapred.child.env
+  deprecated mapred.child.ulimit
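The quoted list above appears to repeat two of the new property names; assuming the usual map/reduce pairs, a job could set the split options roughly as follows (the values are illustrative only):

```java
import org.apache.hadoop.mapred.JobConf;

public class PerTaskJvmOptsExample {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // Separate JVM options per task type instead of the deprecated,
    // shared mapred.child.java.opts.
    job.set("mapred.map.child.java.opts", "-Xmx512m");
    job.set("mapred.reduce.child.java.opts", "-Xmx1024m");
    // Per-map environment and per-reduce ulimit (in KB), as named in the note.
    job.set("mapred.map.child.env", "LD_LIBRARY_PATH=/usr/local/lib");
    job.set("mapred.reduce.child.ulimit", "2097152");
    System.out.println(job.get("mapred.map.child.java.opts"));
  }
}
```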
 
 
 ---
 
-* [HADOOP-7229](https://issues.apache.org/jira/browse/HADOOP-7229) | *Major* | 
**Absolute path to kinit in auto-renewal thread**
+* [HADOOP-6344](https://issues.apache.org/jira/browse/HADOOP-6344) | *Major* | 
**rm and rmr fail to correctly move the user's files to the trash prior to 
deleting when they are over quota.**
 
-When Hadoop's Kerberos integration is enabled, it is now required that either 
{{kinit}} be on the path for user accounts running the Hadoop client, or that 
the {{hadoop.kerberos.kinit.command}} configuration option be manually set to 
the absolute path to {{kinit}}.
+Trash feature notifies user of over-quota condition rather than silently 
deleting files/directories; deletion can be compelled with "rm -skiptrash".
 
 
 ---
 
-* [HADOOP-7193](https://issues.apache.org/jira/browse/HADOOP-7193) | *Minor* | 
**Help message is wrong for touchz command.**
+* [HADOOP-6599](https://issues.apache.org/jira/browse/HADOOP-6599) | *Major* | 
**Split RPC metrics into summary and detailed metrics**
 
-Updated the help for the touchz command.
+Split existing RpcMetrics into RpcMetrics and RpcDetailedMetrics. The new 
RpcDetailedMetrics has per method usage details and is available under context 
name "rpc" and record name "detailed-metrics"
 
 
 ---
 
-* [HADOOP-7192](https://issues.apache.org/jira/browse/HADOOP-7192) | *Trivial* 
| **fs -stat docs aren't updated to reflect the format features**
+* [MAPREDUCE-927](https://issues.apache.org/jira/browse/MAPREDUCE-927) | 
*Major* | **Cleanup of task-logs should happen in TaskTracker instead of the 
Child**
 
-Updated the web documentation to reflect the formatting abilities of 'fs 
-stat'.
+Moved Task log cleanup into a separate thread in TaskTracker.
+Added configuration "mapreduce.job.userlog.retain.hours" to specify the 
time(in hours) for which the user-logs are to be retained after the job 
completion.
 
 
 ---
 
-* [HADOOP-7156](https://issues.apache.org/jira/browse/HADOOP-7156) | 
*Critical* | **getpwuid\_r is not thread-safe on RHEL6**
+* [HADOOP-6730](https://issues.apache.org/jira/browse/HADOOP-6730) | *Major* | 
**Bug in FileContext#copy and provide base class for FileContext tests**
 
-Adds a new configuration hadoop.work.around.non.threadsafe.getpwuid which can 
be used to enable a mutex around this call to workaround thread-unsafe 
implementations of getpwuid\_r. Users should consult 
http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations for a list of 
such systems.
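In practice the HADOOP-7156 flag described above would be enabled in core-site.xml; a minimal programmatic sketch of the same setting is:

```java
import org.apache.hadoop.conf.Configuration;

public class GetpwuidWorkaroundExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Serialize native getpwuid_r calls on platforms whose implementation
    // is not thread-safe (see the wiki page cited in the note above).
    conf.setBoolean("hadoop.work.around.non.threadsafe.getpwuid", true);
    System.out.println(conf.getBoolean("hadoop.work.around.non.threadsafe.getpwuid", false));
  }
}
```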
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [HADOOP-7137](https://issues.apache.org/jira/browse/HADOOP-7137) | *Major* | 
**Remove hod contrib**
+* [MAPREDUCE-1707](https://issues.apache.org/jira/browse/MAPREDUCE-1707) | 
*Major* | **TaskRunner can get NPE in getting ugi from TaskTracker**
 
-Removed contrib related build targets.
+Fixed a bug that causes TaskRunner to get NPE in getting ugi from TaskTracker 
and subsequently crashes it resulting in a failing task after task-timeout 
period.
 
 
 ---
 
-* [HADOOP-7134](https://issues.apache.org/jira/browse/HADOOP-7134) | *Major* | 
**configure files that are generated as part of the released tarball need to 
have executable bit set**
+* [MAPREDUCE-1680](https://issues.apache.org/jira/browse/MAPREDUCE-1680) | 
*Major* | **Add a metrics to track the number 

[04/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
index 06e9c9b..4426ba9 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.4.0/CHANGES.2.4.0.md
@@ -24,27 +24,21 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-8691](https://issues.apache.org/jira/browse/HADOOP-8691) | FsShell 
can print "Found xxx items" unnecessarily often |  Minor | fs | Jason Lowe | 
Daryn Sharp |
-| [HDFS-6102](https://issues.apache.org/jira/browse/HDFS-6102) | Lower the 
default maximum items per directory to fix PB fsimage loading |  Blocker | 
namenode | Andrew Wang | Andrew Wang |
-| [HDFS-6055](https://issues.apache.org/jira/browse/HDFS-6055) | Change 
default configuration to limit file name length in HDFS |  Major | namenode | 
Suresh Srinivas | Chris Nauroth |
 | [HDFS-5804](https://issues.apache.org/jira/browse/HDFS-5804) | HDFS NFS 
Gateway fails to mount and proxy when using Kerberos |  Major | nfs | Abin 
Shahab | Abin Shahab |
+| [HADOOP-8691](https://issues.apache.org/jira/browse/HADOOP-8691) | FsShell 
can print "Found xxx items" unnecessarily often |  Minor | fs | Jason Lowe | 
Daryn Sharp |
 | [HDFS-5321](https://issues.apache.org/jira/browse/HDFS-5321) | Clean up the 
HTTP-related configuration in HDFS |  Major | . | Haohui Mai | Haohui Mai |
+| [HDFS-6055](https://issues.apache.org/jira/browse/HDFS-6055) | Change 
default configuration to limit file name length in HDFS |  Major | namenode | 
Suresh Srinivas | Chris Nauroth |
+| [HDFS-6102](https://issues.apache.org/jira/browse/HDFS-6102) | Lower the 
default maximum items per directory to fix PB fsimage loading |  Blocker | 
namenode | Andrew Wang | Andrew Wang |
 | [HDFS-5138](https://issues.apache.org/jira/browse/HDFS-5138) | Support HDFS 
upgrade in HA |  Blocker | . | Kihwal Lee | Aaron T. Myers |
 | [MAPREDUCE-5036](https://issues.apache.org/jira/browse/MAPREDUCE-5036) | 
Default shuffle handler port should not be 8080 |  Major | . | Sandy Ryza | 
Sandy Ryza |
 
 
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10184](https://issues.apache.org/jira/browse/HADOOP-10184) | Hadoop 
Common changes required to support HDFS ACLs. |  Major | fs, security | Chris 
Nauroth | Chris Nauroth |
 | [HDFS-5535](https://issues.apache.org/jira/browse/HDFS-5535) | Umbrella jira 
for improved HDFS rolling upgrades |  Major | datanode, ha, hdfs-client, 
namenode | Nathan Roberts | Tsz Wo Nicholas Sze |
+| [HADOOP-10184](https://issues.apache.org/jira/browse/HADOOP-10184) | Hadoop 
Common changes required to support HDFS ACLs. |  Major | fs, security | Chris 
Nauroth | Chris Nauroth |
 | [HDFS-4685](https://issues.apache.org/jira/browse/HDFS-4685) | 
Implementation of ACLs in HDFS |  Major | hdfs-client, namenode, security | 
Sachin Jose | Chris Nauroth |
 
 
@@ -52,432 +46,432 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-10423](https://issues.apache.org/jira/browse/HADOOP-10423) | Clarify 
compatibility policy document for combination of new client and old server. |  
Minor | documentation | Chris Nauroth | Chris Nauroth |
-| [HADOOP-10386](https://issues.apache.org/jira/browse/HADOOP-10386) | Log 
proxy hostname in various exceptions being thrown in a HA setup |  Minor | ha | 
Arpit Gupta | Haohui Mai |
-| [HADOOP-10383](https://issues.apache.org/jira/browse/HADOOP-10383) | 
InterfaceStability annotations should have RetentionPolicy.RUNTIME |  Major | . 
| Enis Soztutar | Enis Soztutar |
-| [HADOOP-10379](https://issues.apache.org/jira/browse/HADOOP-10379) | Protect 
authentication cookies with the HttpOnly and Secure flags |  Major | . | Haohui 
Mai | Haohui Mai |
-| [HADOOP-10374](https://issues.apache.org/jira/browse/HADOOP-10374) | 
InterfaceAudience annotations should have RetentionPolicy.RUNTIME |  Major | . 
| Enis Soztutar | Enis Soztutar |
-| [HADOOP-10348](https://issues.apache.org/jira/browse/HADOOP-10348) | 
Deprecate hadoop.ssl.configuration in branch-2, and remove it in trunk |  Major 
| . | Haohui Mai | Haohui Mai |
-| [HADOOP-10343](https://issues.apache.org/jira/browse/HADOOP-10343) | Change 
info to debug log in LossyRetryInvocationHandler |  Minor | . | Arpit Gupta | 
Arpit Gupta |
-| 

[31/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
index 40de51c..aa5e8cf 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.0/CHANGES.0.22.0.md
@@ -24,745 +24,739 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-7229](https://issues.apache.org/jira/browse/HADOOP-7229) | Absolute 
path to kinit in auto-renewal thread |  Major | security | Aaron T. Myers | 
Aaron T. Myers |
-| [HADOOP-7137](https://issues.apache.org/jira/browse/HADOOP-7137) | Remove 
hod contrib |  Major | . | Nigel Daley | Nigel Daley |
-| [HADOOP-7013](https://issues.apache.org/jira/browse/HADOOP-7013) | Add 
boolean field isCorrupt to BlockLocation |  Major | . | Patrick Kling | Patrick 
Kling |
-| [HADOOP-6949](https://issues.apache.org/jira/browse/HADOOP-6949) | Reduces 
RPC packet size for primitive arrays, especially long[], which is used at block 
reporting |  Major | io | Navis | Matt Foley |
-| [HADOOP-6905](https://issues.apache.org/jira/browse/HADOOP-6905) | Better 
logging messages when a delegation token is invalid |  Major | security | Kan 
Zhang | Kan Zhang |
-| [HADOOP-6835](https://issues.apache.org/jira/browse/HADOOP-6835) | Support 
concatenated gzip files |  Major | io | Tom White | Greg Roelofs |
-| [HADOOP-6787](https://issues.apache.org/jira/browse/HADOOP-6787) | Factor 
out glob pattern code from FileContext and Filesystem |  Major | fs | Luke Lu | 
Luke Lu |
 | [HADOOP-6730](https://issues.apache.org/jira/browse/HADOOP-6730) | Bug in 
FileContext#copy and provide base class for FileContext tests |  Major | fs, 
test | Eli Collins | Ravi Phulari |
-| [HDFS-1825](https://issues.apache.org/jira/browse/HDFS-1825) | Remove 
thriftfs contrib |  Major | . | Nigel Daley | Nigel Daley |
-| [HDFS-1560](https://issues.apache.org/jira/browse/HDFS-1560) | dfs.data.dir 
permissions should default to 700 |  Minor | datanode | Todd Lipcon | Todd 
Lipcon |
-| [HDFS-1435](https://issues.apache.org/jira/browse/HDFS-1435) | Provide an 
option to store fsimage compressed |  Major | namenode | Hairong Kuang | 
Hairong Kuang |
-| [HDFS-1315](https://issues.apache.org/jira/browse/HDFS-1315) | Add fsck 
event to audit log and remove other audit log events corresponding to FSCK 
listStatus and open calls |  Major | namenode, tools | Suresh Srinivas | Suresh 
Srinivas |
 | [HDFS-1109](https://issues.apache.org/jira/browse/HDFS-1109) | HFTP and URL 
Encoding |  Major | contrib/hdfsproxy, datanode | Dmytro Molkov | Dmytro Molkov 
|
-| [HDFS-1080](https://issues.apache.org/jira/browse/HDFS-1080) | 
SecondaryNameNode image transfer should use the defined http address rather 
than local ip address |  Major | namenode | Jakob Homan | Jakob Homan |
 | [HDFS-1061](https://issues.apache.org/jira/browse/HDFS-1061) | Memory 
footprint optimization for INodeFile object. |  Minor | namenode | Bharath 
Mundlapudi | Bharath Mundlapudi |
-| [HDFS-903](https://issues.apache.org/jira/browse/HDFS-903) | NN should 
verify images and edit logs on startup |  Critical | namenode | Eli Collins | 
Hairong Kuang |
+| [MAPREDUCE-1683](https://issues.apache.org/jira/browse/MAPREDUCE-1683) | 
Remove JNI calls from ClusterStatus cstr |  Major | jobtracker | Chris Douglas 
| Luke Lu |
+| [HADOOP-6787](https://issues.apache.org/jira/browse/HADOOP-6787) | Factor 
out glob pattern code from FileContext and Filesystem |  Major | fs | Luke Lu | 
Luke Lu |
+| [HDFS-1080](https://issues.apache.org/jira/browse/HDFS-1080) | 
SecondaryNameNode image transfer should use the defined http address rather 
than local ip address |  Major | namenode | Jakob Homan | Jakob Homan |
+| [HADOOP-6835](https://issues.apache.org/jira/browse/HADOOP-6835) | Support 
concatenated gzip files |  Major | io | Tom White | Greg Roelofs |
+| [MAPREDUCE-1733](https://issues.apache.org/jira/browse/MAPREDUCE-1733) | 
Authentication between pipes processes and java counterparts. |  Major | . | 
Jitendra Nath Pandey | Jitendra Nath Pandey |
+| [HDFS-1315](https://issues.apache.org/jira/browse/HDFS-1315) | Add fsck 
event to audit log and remove other audit log events corresponding to FSCK 
listStatus and open calls |  Major | namenode, tools | Suresh Srinivas | Suresh 
Srinivas |
+| [MAPREDUCE-1866](https://issues.apache.org/jira/browse/MAPREDUCE-1866) | 
Remove deprecated class org.apache.hadoop.streaming.UTF8ByteArrayUtils |  Minor 
| contrib/streaming | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
 | 

[37/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
index 4e13959..55f65c0 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
@@ -23,325 +23,325 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-5565](https://issues.apache.org/jira/browse/HADOOP-5565) | *Major* | 
**The job instrumentation API needs to have a method for finalizeJob,**
+* [HADOOP-4234](https://issues.apache.org/jira/browse/HADOOP-4234) | *Minor* | 
**KFS: Allow KFS layer to interface with multiple KFS namenodes**
 
-Add finalizeJob & terminateJob methods to JobTrackerInstrumentation class
+Changed KFS glue layer to allow applications to interface with multiple KFS 
metaservers.
 
 
 ---
 
-* [HADOOP-5548](https://issues.apache.org/jira/browse/HADOOP-5548) | *Blocker* 
| **Observed negative running maps on the job tracker**
+* [HADOOP-4210](https://issues.apache.org/jira/browse/HADOOP-4210) | *Major* | 
**Findbugs warnings are printed related to equals implementation of several 
classes**
 
-Adds synchronization for JobTracker methods in RecoveryManager.
+Changed public class org.apache.hadoop.mapreduce.ID to be an abstract class. 
Removed from class org.apache.hadoop.mapreduce.ID the methods  public static ID 
read(DataInput in) and public static ID forName(String str).
 
 
 ---
 
-* [HADOOP-5531](https://issues.apache.org/jira/browse/HADOOP-5531) | *Blocker* 
| **Remove Chukwa on branch-0.20**
+* [HADOOP-4253](https://issues.apache.org/jira/browse/HADOOP-4253) | *Major* | 
**Fix warnings generated by FindBugs**
 
-Disabled Chukwa unit tests for 0.20 branch only.
+Removed  from class org.apache.hadoop.fs.RawLocalFileSystem deprecated methods 
public String getName(), public void lock(Path p, boolean shared) and public 
void release(Path p).
 
 
 ---
 
-* [HADOOP-5521](https://issues.apache.org/jira/browse/HADOOP-5521) | *Major* | 
**Remove dependency of testcases on RESTART\_COUNT**
+* [HADOOP-4284](https://issues.apache.org/jira/browse/HADOOP-4284) | *Major* | 
**Support for user configurable global filters on HttpServer**
 
-This patch makes TestJobHistory and its dependent testcases independent of 
RESTART\_COUNT.
+Introduced HttpServer method to support global filters.
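A global filter in this context is essentially an ordinary servlet Filter applied to everything the embedded web server serves; the class below is a generic, hypothetical example and does not show the HttpServer registration call itself:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Hypothetical global filter: logs the origin of every web UI request.
public class RequestLoggingFilter implements Filter {
  @Override
  public void init(FilterConfig config) throws ServletException {
    // no per-filter configuration needed for this example
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    System.out.println("HTTP request from " + req.getRemoteAddr());
    chain.doFilter(req, resp);  // hand off to the next filter / target servlet
  }

  @Override
  public void destroy() {
    // nothing to clean up
  }
}
```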
 
 
 ---
 
-* [HADOOP-5468](https://issues.apache.org/jira/browse/HADOOP-5468) | *Major* | 
**Change Hadoop doc menu to sub-menus**
+* [HADOOP-4454](https://issues.apache.org/jira/browse/HADOOP-4454) | *Minor* | 
**Support comments in 'slaves'  file**
 
-Reformatted HTML documentation for Hadoop to use submenus at the left column.
+Changed processing of conf/slaves file to allow # to begin a comment.
 
 
 ---
 
-* [HADOOP-5030](https://issues.apache.org/jira/browse/HADOOP-5030) | *Major* | 
**Chukwa RPM build improvements**
+* [HADOOP-4572](https://issues.apache.org/jira/browse/HADOOP-4572) | *Major* | 
**INode and its sub-classes should be package private**
 
-Changed RPM install location to the value specified by build.properties file.
+Moved org.apache.hadoop.hdfs.{CreateEditsLog, NNThroughputBenchmark} to 
org.apache.hadoop.hdfs.server.namenode.
 
 
 ---
 
-* [HADOOP-4970](https://issues.apache.org/jira/browse/HADOOP-4970) | *Major* | 
**Use the full path when move files to .Trash/Current**
+* [HADOOP-4575](https://issues.apache.org/jira/browse/HADOOP-4575) | *Major* | 
**An independent HTTPS proxy for HDFS**
 
-Changed trash facility to use absolute path of the deleted file.
+Introduced independent HSFTP proxy server for authenticated access to clusters.
 
 
 ---
 
-* [HADOOP-4873](https://issues.apache.org/jira/browse/HADOOP-4873) | *Major* | 
**display minMaps/Reduces on advanced scheduler page**
+* [HADOOP-4618](https://issues.apache.org/jira/browse/HADOOP-4618) | *Major* | 
**Move http server from FSNamesystem into NameNode.**
 
-Changed fair scheduler UI to display minMaps and minReduces variables.
+Moved HTTP server from FSNameSystem to NameNode. Removed 
FSNamesystem.getNameNodeInfoPort(). Replaced 
FSNamesystem.getDFSNameNodeMachine() and FSNamesystem.getDFSNameNodePort() with 
new method  FSNamesystem.getDFSNameNodeAddress(). Removed constructor 
NameNode(bindAddress, conf).
 
 
 ---
 
-* [HADOOP-4843](https://issues.apache.org/jira/browse/HADOOP-4843) | *Major* | 
**Collect Job History log file and Job Conf file into Chukwa**
+* [HADOOP-4567](https://issues.apache.org/jira/browse/HADOOP-4567) | *Major* | 
**GetFileBlockLocations should return the NetworkTopology information of the 

[49/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
index c432600..7470dc8 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.14.0/CHANGES.0.14.0.md
@@ -20,211 +20,195 @@
 
 ## Release 0.14.0 - 2007-08-20
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-1597](https://issues.apache.org/jira/browse/HADOOP-1597) | 
Distributed upgrade status reporting and post upgrade features. |  Blocker | . 
| Konstantin Shvachko | Konstantin Shvachko |
-| [HADOOP-1570](https://issues.apache.org/jira/browse/HADOOP-1570) | Add a 
per-job configuration knob to control loading of native hadoop libraries |  
Major | io | Arun C Murthy | Arun C Murthy |
-| [HADOOP-1568](https://issues.apache.org/jira/browse/HADOOP-1568) | NameNode 
Schema for HttpFileSystem |  Major | fs | Chris Douglas | Chris Douglas |
-| [HADOOP-1562](https://issues.apache.org/jira/browse/HADOOP-1562) | Report 
Java VM metrics |  Major | metrics | David Bowen | David Bowen |
+| [HADOOP-234](https://issues.apache.org/jira/browse/HADOOP-234) | Hadoop 
Pipes for writing map/reduce jobs in C++ and python |  Major | . | Sanjay 
Dahiya | Owen O'Malley |
+| [HADOOP-1379](https://issues.apache.org/jira/browse/HADOOP-1379) | Integrate 
Findbugs into nightly build process |  Major | test | Nigel Daley | Nigel Daley 
|
+| [HADOOP-1447](https://issues.apache.org/jira/browse/HADOOP-1447) | Support 
for textInputFormat in contrib/data\_join |  Minor | . | Senthil Subramanian | 
Senthil Subramanian |
+| [HADOOP-1469](https://issues.apache.org/jira/browse/HADOOP-1469) | 
Asynchronous table creation |  Minor | . | James Kennedy | stack |
+| [HADOOP-1377](https://issues.apache.org/jira/browse/HADOOP-1377) | Creation 
time and modification time for hadoop files and directories |  Major | . | 
dhruba borthakur | dhruba borthakur |
 | [HADOOP-1515](https://issues.apache.org/jira/browse/HADOOP-1515) | 
MultiFileSplit, MultiFileInputFormat |  Major | . | Enis Soztutar | Enis 
Soztutar |
 | [HADOOP-1508](https://issues.apache.org/jira/browse/HADOOP-1508) | ant Task 
for FsShell operations |  Minor | build, fs | Chris Douglas | Chris Douglas |
-| [HADOOP-1469](https://issues.apache.org/jira/browse/HADOOP-1469) | 
Asynchronous table creation |  Minor | . | James Kennedy | stack |
-| [HADOOP-1447](https://issues.apache.org/jira/browse/HADOOP-1447) | Support 
for textInputFormat in contrib/data\_join |  Minor | . | Senthil Subramanian | 
Senthil Subramanian |
-| [HADOOP-1437](https://issues.apache.org/jira/browse/HADOOP-1437) | Eclipse 
plugin for developing and executing MapReduce programs on Hadoop |  Major | . | 
Eugene Hung | Christophe Taton |
+| [HADOOP-1570](https://issues.apache.org/jira/browse/HADOOP-1570) | Add a 
per-job configuration knob to control loading of native hadoop libraries |  
Major | io | Arun C Murthy | Arun C Murthy |
 | [HADOOP-1433](https://issues.apache.org/jira/browse/HADOOP-1433) | Add job 
priority |  Minor | . | Johan Oskarsson | Johan Oskarsson |
-| [HADOOP-1379](https://issues.apache.org/jira/browse/HADOOP-1379) | Integrate 
Findbugs into nightly build process |  Major | test | Nigel Daley | Nigel Daley 
|
-| [HADOOP-1377](https://issues.apache.org/jira/browse/HADOOP-1377) | Creation 
time and modification time for hadoop files and directories |  Major | . | 
dhruba borthakur | dhruba borthakur |
+| [HADOOP-1597](https://issues.apache.org/jira/browse/HADOOP-1597) | 
Distributed upgrade status reporting and post upgrade features. |  Blocker | . 
| Konstantin Shvachko | Konstantin Shvachko |
+| [HADOOP-1562](https://issues.apache.org/jira/browse/HADOOP-1562) | Report 
Java VM metrics |  Major | metrics | David Bowen | David Bowen |
 | [HADOOP-1134](https://issues.apache.org/jira/browse/HADOOP-1134) | Block 
level CRCs in HDFS |  Major | . | Raghu Angadi | Raghu Angadi |
-| [HADOOP-234](https://issues.apache.org/jira/browse/HADOOP-234) | Hadoop 
Pipes for writing map/reduce jobs in C++ and python |  Major | . | Sanjay 
Dahiya | Owen O'Malley |
+| [HADOOP-1568](https://issues.apache.org/jira/browse/HADOOP-1568) | NameNode 
Schema for HttpFileSystem |  Major | fs | Chris Douglas | Chris Douglas |
+| 

[33/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
index 9f341c1..8a8bef3 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.21.0/RELEASENOTES.0.21.0.md
@@ -23,298 +23,298 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-6813](https://issues.apache.org/jira/browse/HADOOP-6813) | *Blocker* 
| **Add a new newInstance method in FileSystem that takes a "user" as argument**
+* [HADOOP-4895](https://issues.apache.org/jira/browse/HADOOP-4895) | *Major* | 
**Remove deprecated methods in DFSClient**
 
-I've just committed this to 0.21.
+Removed deprecated methods DFSClient.getHints() and DFSClient.isDirectory().
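As a rough illustration of where callers of the removed DFSClient helpers can turn, the sketch below uses only the public FileSystem/FileStatus API (the path and class name are hypothetical, and this code is not part of the commit):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsClientReplacements {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/user/example/data.txt");      // hypothetical path

    FileStatus status = fs.getFileStatus(file);
    // Roughly what DFSClient.isDirectory() reported: ask the FileStatus.
    boolean isDir = status.isDirectory();
    // Roughly what DFSClient.getHints() reported: block locations for the file.
    BlockLocation[] locations =
        fs.getFileBlockLocations(status, 0, status.getLen());

    System.out.println(file + " directory? " + isDir
        + ", blocks: " + locations.length);
  }
}
```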
 
 
 ---
 
-* [HADOOP-6748](https://issues.apache.org/jira/browse/HADOOP-6748) | *Major* | 
**Remove hadoop.cluster.administrators**
+* [HADOOP-4941](https://issues.apache.org/jira/browse/HADOOP-4941) | *Major* | 
**Remove getBlockSize(Path f), getLength(Path f) and getReplication(Path src)**
 
-Removed configuration property "hadoop.cluster.administrators". Added 
constructor public HttpServer(String name, String bindAddress, int port, 
boolean findPort, Configuration conf, AccessControlList adminsAcl) in 
HttpServer, which takes cluster administrators acl as a parameter.
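A minimal sketch of the HADOOP-6748 constructor quoted above, with made-up bind address, port, and ACL string; later Hadoop releases replaced this class with HttpServer2 and a builder API, so treat this only as an illustration of the signature described in the note:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpServer;
import org.apache.hadoop.security.authorize.AccessControlList;

public class AdminAclExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Users "alice,bob" and members of group "hdfsadmins" may reach
    // admin-only servlets; the ACL string format is "users groups".
    AccessControlList adminsAcl = new AccessControlList("alice,bob hdfsadmins");
    // Signature as quoted in the release note; values are illustrative only.
    HttpServer server = new HttpServer("example", "0.0.0.0", 50070,
        true /* findPort */, conf, adminsAcl);
    server.start();
    server.stop();
  }
}
```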
+Removed deprecated FileSystem methods getBlockSize(Path f), getLength(Path f), 
and getReplication(Path src).
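The removed HADOOP-4941 convenience methods map directly onto FileStatus getters; a minimal sketch of the replacements (the path is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileStatusGetters {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus st = fs.getFileStatus(new Path("/user/example/data.txt"));

    long blockSize = st.getBlockSize();      // was fs.getBlockSize(path)
    long length = st.getLen();               // was fs.getLength(path)
    short replication = st.getReplication(); // was fs.getReplication(path)

    System.out.printf("blockSize=%d length=%d replication=%d%n",
        blockSize, length, replication);
  }
}
```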
 
 
 ---
 
-* [HADOOP-6701](https://issues.apache.org/jira/browse/HADOOP-6701) | *Minor* | 
**Incorrect exit codes for "dfs -chown", "dfs -chgrp"**
+* [HADOOP-4268](https://issues.apache.org/jira/browse/HADOOP-4268) | *Major* | 
**Permission checking in fsck**
 
-Commands chmod, chown and chgrp now return a non-zero exit code and an error 
message on failure instead of returning zero.
+Fsck now checks permissions as directories are traversed. Any user can now use 
fsck, but information is provided only for directories the user has permission 
to read.
 
 
 ---
 
-* [HADOOP-6692](https://issues.apache.org/jira/browse/HADOOP-6692) | *Major* | 
**Add FileContext#listStatus that returns an iterator**
+* [HADOOP-4648](https://issues.apache.org/jira/browse/HADOOP-4648) | *Major* | 
**Remove ChecksumDistributedFileSystem and InMemoryFileSystem**
 
-This issue adds Iterator\<FileStatus\> listStatus(Path) to FileContext, moves 
FileStatus[] listStatus(Path) to FileContext#Util, and adds 
Iterator\<FileStatus\> listStatusItor(Path) to AbstractFileSystem which 
provides a default implementation by using FileStatus[] listStatus(Path).
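A minimal sketch of iterating a directory listing through FileContext. The note above describes the return type as an iterator of FileStatus; in current releases the method is exposed as a RemoteIterator\<FileStatus\>, which is what this example assumes, and the path is hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListStatusIteration {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    // Streams entries one at a time instead of materialising a FileStatus[].
    RemoteIterator<FileStatus> it = fc.listStatus(new Path("/user/example"));
    while (it.hasNext()) {
      FileStatus st = it.next();
      System.out.println(st.getPath() + (st.isDirectory() ? "/" : ""));
    }
  }
}
```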
+Removed obsolete, deprecated subclasses of ChecksumFileSystem 
(InMemoryFileSystem, ChecksumDistributedFileSystem).
 
 
 ---
 
-* [HADOOP-6686](https://issues.apache.org/jira/browse/HADOOP-6686) | *Major* | 
**Remove redundant exception class name in unwrapped exceptions thrown at the 
RPC client**
+* [HADOOP-4940](https://issues.apache.org/jira/browse/HADOOP-4940) | *Major* | 
**Remove delete(Path f)**
 
-The exceptions thrown by the RPC client no longer carry a redundant exception 
class name in the exception message.
+Removed deprecated method FileSystem.delete(Path).
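The surviving overload takes an explicit recursive flag; a minimal sketch of the replacement call for HADOOP-4940 (the path is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/user/example/tmp-output");
    // delete(Path) is gone; the caller now states the intent explicitly:
    //   recursive = true  -> remove the directory and everything under it
    //   recursive = false -> fail if the path is a non-empty directory
    boolean deleted = fs.delete(dir, true);
    System.out.println(dir + " deleted: " + deleted);
  }
}
```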
 
 
 ---
 
-* [HADOOP-6577](https://issues.apache.org/jira/browse/HADOOP-6577) | *Major* | 
**IPC server response buffer reset threshold should be configurable**
+* [HADOOP-3953](https://issues.apache.org/jira/browse/HADOOP-3953) | *Major* | 
**Sticky bit for directories**
 
-Add hidden configuration option "ipc.server.max.response.size" to change the 
threshold (1 MB by default) above which a large IPC handler response buffer is 
reset.
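For operators tuning HADOOP-6577, the knob is an ordinary Configuration key; a minimal sketch, assuming it is set programmatically rather than in core-site.xml:

```java
import org.apache.hadoop.conf.Configuration;

public class IpcResponseBufferTuning {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Raise the reset threshold from the 1 MB default to 4 MB.
    conf.setInt("ipc.server.max.response.size", 4 * 1024 * 1024);
    System.out.println(conf.getInt("ipc.server.max.response.size",
        1024 * 1024));
  }
}
```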
+UNIX-style sticky bit implemented for HDFS directories. When the sticky bit 
is set on a directory, files in that directory may be deleted or renamed only 
by a superuser or the file's owner.
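A minimal sketch of turning the sticky bit on for a shared directory from Java (the directory path and permission choice are illustrative; the shell equivalent is roughly `hadoop fs -chmod 1777 <dir>`):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class StickyBitExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path shared = new Path("/tmp/shared-scratch");   // hypothetical directory
    fs.mkdirs(shared);
    // rwxrwxrwx plus the sticky bit: anyone may create files here, but only
    // the file owner (or a superuser) may delete or rename them.
    FsPermission perm =
        new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL, true);
    fs.setPermission(shared, perm);
  }
}
```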
 
 
 ---
 
-* [HADOOP-6569](https://issues.apache.org/jira/browse/HADOOP-6569) | *Major* | 
**FsShell#cat should avoid calling unecessary getFileStatus before opening a 
file to read**
+* [HADOOP-5022](https://issues.apache.org/jira/browse/HADOOP-5022) | *Blocker* 
| **[HOD] logcondense should delete all hod logs for a user, including 
jobtracker logs**
 
-**WARNING: No release note provided for this incompatible change.**
+New logcondense option retain-master-logs indicates whether the script should 
retain master (jobtracker) logs during its cleanup process. By default this 
option is false and master logs are deleted; earlier versions of logcondense 
did not delete master logs.
 
 
 ---
 
-* [HADOOP-6568](https://issues.apache.org/jira/browse/HADOOP-6568) | *Major* | 
**Authorization for 

[50/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
index 40e402c..125ec55 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.12.0/CHANGES.0.12.0.md
@@ -20,98 +20,88 @@
 
 ## Release 0.12.0 - 2007-03-02
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-1032](https://issues.apache.org/jira/browse/HADOOP-1032) | Support 
for caching Job JARs |  Minor | . | Gautam Kowshik | Gautam Kowshik |
-| [HADOOP-492](https://issues.apache.org/jira/browse/HADOOP-492) | Global 
counters |  Major | . | arkady borkovsky | David Bowen |
 | [HADOOP-491](https://issues.apache.org/jira/browse/HADOOP-491) | streaming 
jobs should allow programs that don't do any IO for a long time |  Major | . | 
arkady borkovsky | Arun C Murthy |
+| [HADOOP-492](https://issues.apache.org/jira/browse/HADOOP-492) | Global 
counters |  Major | . | arkady borkovsky | David Bowen |
+| [HADOOP-1032](https://issues.apache.org/jira/browse/HADOOP-1032) | Support 
for caching Job JARs |  Minor | . | Gautam Kowshik | Gautam Kowshik |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-1043](https://issues.apache.org/jira/browse/HADOOP-1043) | Optimize 
the shuffle phase (increase the parallelism) |  Major | . | Devaraj Das | 
Devaraj Das |
-| [HADOOP-1042](https://issues.apache.org/jira/browse/HADOOP-1042) | Improve 
the handling of failed map output fetches |  Major | . | Devaraj Das | Devaraj 
Das |
-| [HADOOP-1041](https://issues.apache.org/jira/browse/HADOOP-1041) | Counter 
names are ugly |  Major | . | Owen O'Malley | David Bowen |
-| [HADOOP-1040](https://issues.apache.org/jira/browse/HADOOP-1040) | 
Improvement of RandomWriter example to use custom InputFormat, OutputFormat, 
and Counters |  Major | . | Owen O'Malley | Owen O'Malley |
-| [HADOOP-1033](https://issues.apache.org/jira/browse/HADOOP-1033) | Rewrite 
AmazonEC2 wiki page |  Minor | scripts | Tom White | Tom White |
-| [HADOOP-1030](https://issues.apache.org/jira/browse/HADOOP-1030) | in unit 
tests, set ipc timeout in one place |  Minor | test | Doug Cutting | Doug 
Cutting |
-| [HADOOP-1025](https://issues.apache.org/jira/browse/HADOOP-1025) | remove 
dead code in Server.java |  Minor | ipc | Doug Cutting | Doug Cutting |
-| [HADOOP-1017](https://issues.apache.org/jira/browse/HADOOP-1017) | 
Optimization: Reduce Overhead from ReflectionUtils.newInstance |  Major | util 
| Ron Bodkin |  |
+| [HADOOP-975](https://issues.apache.org/jira/browse/HADOOP-975) | Separation 
of user tasks' stdout and stderr streams |  Major | . | Arun C Murthy | Arun C 
Murthy |
+| [HADOOP-982](https://issues.apache.org/jira/browse/HADOOP-982) | A couple 
setter functions and toString method for BytesWritable. |  Major | io | Owen 
O'Malley | Owen O'Malley |
+| [HADOOP-858](https://issues.apache.org/jira/browse/HADOOP-858) | clean up 
smallJobsBenchmark and move to src/test/org/apache/hadoop/mapred |  Minor | 
build | Nigel Daley | Nigel Daley |
+| [HADOOP-954](https://issues.apache.org/jira/browse/HADOOP-954) | Metrics 
should offer complete set of static report methods or none at all |  Minor | 
metrics | Nigel Daley | David Bowen |
+| [HADOOP-882](https://issues.apache.org/jira/browse/HADOOP-882) | 
S3FileSystem should retry if there is a communication problem with S3 |  Major 
| fs | Tom White | Tom White |
+| [HADOOP-977](https://issues.apache.org/jira/browse/HADOOP-977) | The output 
from the user's task should be tagged and sent to the respective console 
streams. |  Major | . | Owen O'Malley | Arun C Murthy |
 | [HADOOP-1007](https://issues.apache.org/jira/browse/HADOOP-1007) | Names 
used for map, reduce, and shuffle metrics should be unique |  Trivial | metrics 
| Nigel Daley | Nigel Daley |
+| [HADOOP-889](https://issues.apache.org/jira/browse/HADOOP-889) | DFS unit 
tests have duplicate code |  Minor | test | Doug Cutting | Milind Bhandarkar |
+| [HADOOP-943](https://issues.apache.org/jira/browse/HADOOP-943) | fsck to 
show the filename of the corrupted file |  Trivial | . | Koji Noguchi | dhruba 
borthakur |
+| 

[08/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
index 1e7747e..1042346 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.1-beta/CHANGES.2.1.1-beta.md
@@ -24,15 +24,9 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9944](https://issues.apache.org/jira/browse/HADOOP-9944) | 
RpcRequestHeaderProto defines callId as uint32 while 
ipc.Client.CONNECTION\_CONTEXT\_CALL\_ID is signed (-3) |  Blocker | . | Arun C 
Murthy | Arun C Murthy |
-| [YARN-1170](https://issues.apache.org/jira/browse/YARN-1170) | yarn proto 
definitions should specify package as 'hadoop.yarn' |  Blocker | . | Arun C 
Murthy | Binglin Chang |
 | [YARN-707](https://issues.apache.org/jira/browse/YARN-707) | Add user info 
in the YARN ClientToken |  Blocker | . | Bikas Saha | Jason Lowe |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|: |: | :--- |: |: |: |
+| [YARN-1170](https://issues.apache.org/jira/browse/YARN-1170) | yarn proto 
definitions should specify package as 'hadoop.yarn' |  Blocker | . | Arun C 
Murthy | Binglin Chang |
+| [HADOOP-9944](https://issues.apache.org/jira/browse/HADOOP-9944) | 
RpcRequestHeaderProto defines callId as uint32 while 
ipc.Client.CONNECTION\_CONTEXT\_CALL\_ID is signed (-3) |  Blocker | . | Arun C 
Murthy | Arun C Murthy |
 
 
 ### NEW FEATURES:
@@ -40,199 +34,193 @@
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
 | [HADOOP-9789](https://issues.apache.org/jira/browse/HADOOP-9789) | Support 
server advertised kerberos principals |  Critical | ipc, security | Daryn Sharp 
| Daryn Sharp |
-| [HDFS-5118](https://issues.apache.org/jira/browse/HDFS-5118) | Provide 
testing support for DFSClient to drop RPC responses |  Major | . | Jing Zhao | 
Jing Zhao |
 | [HDFS-5076](https://issues.apache.org/jira/browse/HDFS-5076) | Add MXBean 
methods to query NN's transaction information and JournalNode's journal status 
|  Minor | . | Jing Zhao | Jing Zhao |
+| [HDFS-5118](https://issues.apache.org/jira/browse/HDFS-5118) | Provide 
testing support for DFSClient to drop RPC responses |  Major | . | Jing Zhao | 
Jing Zhao |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |: |: | :--- |: |: |: |
-| [HADOOP-9962](https://issues.apache.org/jira/browse/HADOOP-9962) | in order 
to avoid dependency divergence within Hadoop itself lets enable 
DependencyConvergence |  Major | build | Roman Shaposhnik | Roman Shaposhnik |
-| [HADOOP-9945](https://issues.apache.org/jira/browse/HADOOP-9945) | 
HAServiceState should have a state for stopped services |  Minor | ha | Karthik 
Kambatla | Karthik Kambatla |
-| [HADOOP-9918](https://issues.apache.org/jira/browse/HADOOP-9918) | Add 
addIfService() to CompositeService |  Minor | . | Karthik Kambatla | Karthik 
Kambatla |
-| [HADOOP-9886](https://issues.apache.org/jira/browse/HADOOP-9886) | Turn 
warning message in RetryInvocationHandler to debug |  Minor | . | Arpit Gupta | 
Arpit Gupta |
-| [HADOOP-9879](https://issues.apache.org/jira/browse/HADOOP-9879) | Move the 
version info of zookeeper dependencies to hadoop-project/pom |  Minor | build | 
Karthik Kambatla | Karthik Kambatla |
+| [HADOOP-8814](https://issues.apache.org/jira/browse/HADOOP-8814) | 
Inefficient comparison with the empty string. Use isEmpty() instead |  Minor | 
conf, fs, fs/s3, ha, io, metrics, performance, record, security, util | Brandon 
Li | Brandon Li |
+| [MAPREDUCE-1981](https://issues.apache.org/jira/browse/MAPREDUCE-1981) | 
Improve getSplits performance by using listLocatedStatus |  Major | job 
submission | Hairong Kuang | Hairong Kuang |
+| [HADOOP-9803](https://issues.apache.org/jira/browse/HADOOP-9803) | Add 
generic type parameter to RetryInvocationHandler |  Minor | ipc | Tsz Wo 
Nicholas Sze | Tsz Wo Nicholas Sze |
+| [YARN-758](https://issues.apache.org/jira/browse/YARN-758) | Augment MockNM 
to use multiple cores |  Minor | . | Bikas Saha | Karthik Kambatla |
+| [MAPREDUCE-5367](https://issues.apache.org/jira/browse/MAPREDUCE-5367) | 
Local jobs all use same local working directory |  Major | . | Sandy Ryza | 
Sandy Ryza |
+| [HDFS-5061](https://issues.apache.org/jira/browse/HDFS-5061) | Make 
FSNameSystem#auditLoggers an unmodifiable list |  Major | namenode | Arpit 
Agarwal | Arpit Agarwal |
+| 

[51/51] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

2017-08-31 Thread aw
HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

Signed-off-by: Andrew Wang 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/19041008
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/19041008
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/19041008

Branch: refs/heads/trunk
Commit: 190410085b86b002a7515ce3a000d87bafffc77d
Parents: 91cc070
Author: Allen Wittenauer 
Authored: Thu May 4 18:22:34 2017 -0700
Committer: Allen Wittenauer 
Committed: Thu Aug 31 19:06:49 2017 -0700

--
 hadoop-common-project/hadoop-common/pom.xml |2 +-
 .../markdown/release/0.1.0/CHANGES.0.1.0.md |  106 +-
 .../markdown/release/0.1.1/CHANGES.0.1.1.md |   36 +-
 .../markdown/release/0.10.0/CHANGES.0.10.0.md   |  118 +-
 .../markdown/release/0.10.1/CHANGES.0.10.1.md   |   52 +-
 .../markdown/release/0.11.0/CHANGES.0.11.0.md   |  106 +-
 .../markdown/release/0.11.1/CHANGES.0.11.1.md   |   44 +-
 .../markdown/release/0.11.2/CHANGES.0.11.2.md   |   42 +-
 .../markdown/release/0.12.0/CHANGES.0.12.0.md   |  124 +-
 .../markdown/release/0.12.1/CHANGES.0.12.1.md   |   70 +-
 .../markdown/release/0.12.2/CHANGES.0.12.2.md   |   44 +-
 .../markdown/release/0.12.3/CHANGES.0.12.3.md   |   50 +-
 .../markdown/release/0.13.0/CHANGES.0.13.0.md   |  252 +-
 .../markdown/release/0.13.1/CHANGES.0.13.1.md   |   64 -
 .../release/0.13.1/RELEASENOTES.0.13.1.md   |   24 -
 .../markdown/release/0.14.0/CHANGES.0.14.0.md   |  288 +--
 .../markdown/release/0.14.1/CHANGES.0.14.1.md   |   44 +-
 .../markdown/release/0.14.2/CHANGES.0.14.2.md   |   52 +-
 .../markdown/release/0.14.3/CHANGES.0.14.3.md   |   44 +-
 .../markdown/release/0.14.4/CHANGES.0.14.4.md   |   36 +-
 .../markdown/release/0.15.0/CHANGES.0.15.0.md   |  266 +-
 .../markdown/release/0.15.1/CHANGES.0.15.1.md   |   32 +-
 .../markdown/release/0.15.2/CHANGES.0.15.2.md   |   52 +-
 .../markdown/release/0.15.3/CHANGES.0.15.3.md   |   44 +-
 .../markdown/release/0.15.4/CHANGES.0.15.4.md   |   42 +-
 .../markdown/release/0.16.0/CHANGES.0.16.0.md   |  320 ++-
 .../markdown/release/0.16.1/CHANGES.0.16.1.md   |   74 +-
 .../markdown/release/0.16.2/CHANGES.0.16.2.md   |   70 +-
 .../markdown/release/0.16.3/CHANGES.0.16.3.md   |   46 +-
 .../markdown/release/0.16.4/CHANGES.0.16.4.md   |   46 +-
 .../markdown/release/0.17.0/CHANGES.0.17.0.md   |  350 ++-
 .../release/0.17.0/RELEASENOTES.0.17.0.md   |  450 ++--
 .../markdown/release/0.17.1/CHANGES.0.17.1.md   |   48 +-
 .../markdown/release/0.17.2/CHANGES.0.17.2.md   |   60 +-
 .../release/0.17.2/RELEASENOTES.0.17.2.md   |   12 +-
 .../markdown/release/0.17.3/CHANGES.0.17.3.md   |   40 +-
 .../markdown/release/0.18.0/CHANGES.0.18.0.md   |  492 ++--
 .../release/0.18.0/RELEASENOTES.0.18.0.md   |  302 +--
 .../markdown/release/0.18.1/CHANGES.0.18.1.md   |   48 +-
 .../release/0.18.1/RELEASENOTES.0.18.1.md   |8 +-
 .../markdown/release/0.18.2/CHANGES.0.18.2.md   |   58 +-
 .../release/0.18.2/RELEASENOTES.0.18.2.md   |   20 +-
 .../markdown/release/0.18.3/CHANGES.0.18.3.md   |  100 +-
 .../release/0.18.3/RELEASENOTES.0.18.3.md   |   50 +-
 .../markdown/release/0.18.4/CHANGES.0.18.4.md   |   48 +-
 .../markdown/release/0.19.0/CHANGES.0.19.0.md   |  636 +++--
 .../release/0.19.0/RELEASENOTES.0.19.0.md   |  306 +--
 .../markdown/release/0.19.1/CHANGES.0.19.1.md   |   96 +-
 .../release/0.19.1/RELEASENOTES.0.19.1.md   |   40 +-
 .../markdown/release/0.19.2/CHANGES.0.19.2.md   |   92 +-
 .../markdown/release/0.2.0/CHANGES.0.2.0.md |  102 +-
 .../markdown/release/0.2.1/CHANGES.0.2.1.md |   44 +-
 .../markdown/release/0.20.0/CHANGES.0.20.0.md   |  508 ++--
 .../release/0.20.0/RELEASENOTES.0.20.0.md   |  186 +-
 .../markdown/release/0.20.1/CHANGES.0.20.1.md   |  134 +-
 .../release/0.20.1/RELEASENOTES.0.20.1.md   |  112 +-
 .../markdown/release/0.20.2/CHANGES.0.20.2.md   |   90 +-
 .../release/0.20.2/RELEASENOTES.0.20.2.md   |   66 +-
 .../release/0.20.203.0/CHANGES.0.20.203.0.md|   64 +-
 .../0.20.203.0/RELEASENOTES.0.20.203.0.md   |   44 +-
 .../release/0.20.203.1/CHANGES.0.20.203.1.md|   42 +-
 .../release/0.20.204.0/CHANGES.0.20.204.0.md|  100 +-
 .../0.20.204.0/RELEASENOTES.0.20.204.0.md   |   38 +-
 .../release/0.20.204.1/CHANGES.0.20.204.1.md|   64 -
 .../0.20.204.1/RELEASENOTES.0.20.204.1.md   |   24 -
 .../release/0.20.205.0/CHANGES.0.20.205.0.md|  210 +-
 .../0.20.205.0/RELEASENOTES.0.20.205.0.md   |   98 +-
 .../markdown/release/0.20.3/CHANGES.0.20.3.md   |   74 +-
 .../release/0.20.3/RELEASENOTES.0.20.3.md   |   12 +-
 .../markdown/release/0.21.0/CHANGES.0.21.0.md   | 2412 +-
 .../release/0.21.0/RELEASENOTES.0.21.0.md   | 1324 +-
 .../markdown/release/0.21.1/CHANGES.0.21.1.md   |   
