hadoop git commit: HDFS-9231. fsck doesn't list correct file path when Bad Replicas/Blocks are in a snapshot. (Xiao Chen via Yongjun Zhang)

2015-10-27 Thread yjzhangal
Repository: hadoop
Updated Branches:
  refs/heads/trunk bf8e45298 -> 97913f430


HDFS-9231. fsck doesn't list correct file path when Bad Replicas/Blocks are in a snapshot. (Xiao Chen via Yongjun Zhang)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/97913f43
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/97913f43
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/97913f43

Branch: refs/heads/trunk
Commit: 97913f430cbe3f82ac866ae6ab8f42754102f6c0
Parents: bf8e452
Author: Yongjun Zhang 
Authored: Tue Oct 27 23:13:58 2015 -0700
Committer: Yongjun Zhang 
Committed: Tue Oct 27 23:31:50 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/server/namenode/FSDirSnapshotOp.java   | 38 +
 .../hdfs/server/namenode/FSNamesystem.java  | 75 +
 .../hdfs/server/namenode/NameNodeMXBean.java|  7 ++
 .../hdfs/server/namenode/NamenodeFsck.java  | 20 ++---
 .../hdfs/server/namenode/snapshot/Snapshot.java |  8 ++
 .../src/main/webapps/hdfs/dfshealth.html|  2 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 88 
 8 files changed, 227 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/97913f43/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e96f996..fc41df4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2167,6 +2167,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-9268. fuse_dfs chown crashes when uid is passed as -1 (cmccabe)
 
+HDFS-9231. fsck doesn't list correct file path when Bad Replicas/Blocks
+are in a snapshot. (Xiao Chen via Yongjun Zhang)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/97913f43/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
index 317fc4b..41ccfd1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
@@ -29,9 +29,13 @@ import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
 import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager;
+import org.apache.hadoop.hdfs.util.ReadOnlyList;
 import org.apache.hadoop.util.ChunkedArrayList;
 
 import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.ListIterator;
 import java.util.List;
 
 class FSDirSnapshotOp {
@@ -159,6 +163,40 @@ class FSDirSnapshotOp {
     return diffs;
   }
 
+  /** Get a collection of full snapshot paths given file and snapshot dir.
+   * @param lsf a list of snapshottable features
+   * @param file full path of the file
+   * @return collection of full paths of snapshot of the file
+   */
+  static Collection<String> getSnapshotFiles(FSDirectory fsd,
+      List<DirectorySnapshottableFeature> lsf,
+      String file) throws IOException {
+    ArrayList<String> snaps = new ArrayList<String>();
+    ListIterator<DirectorySnapshottableFeature> sfi = lsf.listIterator();
+    for (DirectorySnapshottableFeature sf : lsf) {
+      // for each snapshottable dir e.g. /dir1, /dir2
+      final ReadOnlyList<Snapshot> lsnap = sf.getSnapshotList();
+      for (Snapshot s : lsnap) {
+        // for each snapshot name under snapshottable dir
+        // e.g. /dir1/.snapshot/s1, /dir1/.snapshot/s2
+        final String dirName = s.getRoot().getRootFullPathName();
+        if (!file.startsWith(dirName)) {
+          // file not in current snapshot root dir, no need to check other snaps
+          break;
+        }
+        String snapname = s.getRoot().getFullPathName();
+        if (dirName.equals(Path.SEPARATOR)) { // handle rootDir
+          snapname += Path.SEPARATOR;
+        }
+        snapname += file.substring(file.indexOf(dirName) + dirName.length());
+        if (fsd.getFSNamesystem().getFileInfo(snapname, true) != null) {
+          snaps.add(snapname);
+        }
+      }
+    }
+    return snaps;
+  }
+
   /**
    * Delete a snapshot of a snapshottable directory
    * @param snapshotRoot The snapshottable directory

http://git-wip
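To make the new helper's path arithmetic concrete, here is a minimal, self-contained Java sketch of the mapping it performs; the directory, snapshot, and file names are hypothetical, and the real code derives them from DirectorySnapshottableFeature and Snapshot objects rather than string literals:

    public class SnapshotPathSketch {
      // Mirrors the substring logic in getSnapshotFiles(): map a file under a
      // snapshottable directory to its path inside one snapshot of that dir.
      static String toSnapshotPath(String snapshotRoot, String snapshottableDir,
          String file) {
        String snapname = snapshotRoot;
        if (snapshottableDir.equals("/")) { // handle rootDir, as above
          snapname += "/";
        }
        return snapname + file.substring(
            file.indexOf(snapshottableDir) + snapshottableDir.length());
      }

      public static void main(String[] args) {
        // e.g. /dir1 is snapshottable and has a snapshot named s1
        System.out.println(
            toSnapshotPath("/dir1/.snapshot/s1", "/dir1", "/dir1/sub/f"));
        // prints /dir1/.snapshot/s1/sub/f, the kind of path fsck now reports
      }
    }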

[2/2] hadoop git commit: HDFS-9311. Support optional offload of NameNode HA service health checks to a separate RPC server. Contributed by Chris Nauroth.

2015-10-27 Thread cnauroth
HDFS-9311. Support optional offload of NameNode HA service health checks to a separate RPC server. Contributed by Chris Nauroth.

(cherry picked from commit bf8e45298218f70e38838152f69c7705d8606bd6)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/af0f2e27
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/af0f2e27
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/af0f2e27

Branch: refs/heads/branch-2
Commit: af0f2e27d189ceefa71fbb5178cc69006a62a257
Parents: 0377795
Author: cnauroth 
Authored: Tue Oct 27 23:07:14 2015 -0700
Committer: cnauroth 
Committed: Tue Oct 27 23:08:32 2015 -0700

--
 .../org/apache/hadoop/ha/HAServiceTarget.java   | 50 ++-
 .../org/apache/hadoop/ha/HealthMonitor.java |  2 +-
 .../org/apache/hadoop/ha/DummyHAService.java| 47 +-
 .../org/apache/hadoop/ha/TestHealthMonitor.java | 10 ++-
 ...HealthMonitorWithDedicatedHealthAddress.java | 37 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  6 ++
 .../java/org/apache/hadoop/hdfs/DFSUtil.java| 23 +
 .../hadoop/hdfs/server/namenode/NameNode.java   | 64 --
 .../hdfs/server/namenode/NameNodeRpcServer.java | 75 +++-
 .../hadoop/hdfs/tools/NNHAServiceTarget.java| 11 +++
 .../src/main/resources/hdfs-default.xml | 39 
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 30 +++
 .../TestNameNodeRespectsBindHostKeys.java   | 50 ++-
 .../server/namenode/ha/TestNNHealthCheck.java   | 93 ++--
 15 files changed, 473 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/af0f2e27/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
index 56678b4..98aab99 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
@@ -50,6 +50,23 @@ public abstract class HAServiceTarget {
   public abstract InetSocketAddress getAddress();
 
   /**
+   * Returns an optional separate RPC server address for health checks at the
+   * target node.  If defined, then this address is used by the health monitor
+   * for the {@link HAServiceProtocol#monitorHealth()} and
+   * {@link HAServiceProtocol#getServiceStatus()} calls.  This can be useful for
+   * separating out these calls onto separate RPC handlers to protect against
+   * resource exhaustion in the main RPC handler pool.  If null (which is the
+   * default implementation), then all RPC calls go to the address defined by
+   * {@link #getAddress()}.
+   *
+   * @return IPC address of the lifeline RPC server on the target node, or null
+   * if no lifeline RPC server is used
+   */
+  public InetSocketAddress getHealthMonitorAddress() {
+return null;
+  }
+
+  /**
* @return the IPC address of the ZKFC on the target node
*/
   public abstract InetSocketAddress getZKFCAddress();
@@ -73,15 +90,42 @@ public abstract class HAServiceTarget {
*/
   public HAServiceProtocol getProxy(Configuration conf, int timeoutMs)
   throws IOException {
+return getProxyForAddress(conf, timeoutMs, getAddress());
+  }
+
+  /**
+   * Returns a proxy to connect to the target HA service for health monitoring.
+   * If {@link #getHealthMonitorAddress()} is implemented to return a non-null
+   * address, then this proxy will connect to that address.  Otherwise, the
+   * returned proxy defaults to using {@link #getAddress()}, which means this
+   * method's behavior is identical to {@link #getProxy(Configuration, int)}.
+   *
+   * @param conf Configuration
+   * @param timeoutMs timeout in milliseconds
+   * @return a proxy to connect to the target HA service for health monitoring
+   * @throws IOException if there is an error
+   */
+  public HAServiceProtocol getHealthMonitorProxy(Configuration conf,
+  int timeoutMs) throws IOException {
+InetSocketAddress addr = getHealthMonitorAddress();
+if (addr == null) {
+  addr = getAddress();
+}
+return getProxyForAddress(conf, timeoutMs, addr);
+  }
+
+  private HAServiceProtocol getProxyForAddress(Configuration conf,
+  int timeoutMs, InetSocketAddress addr) throws IOException {
 Configuration confCopy = new Configuration(conf);
 // Lower the timeout so we quickly fail to connect
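As a rough illustration of the new hook (a sketch, not part of this patch), a concrete HAServiceTarget could override getHealthMonitorAddress() to advertise a dedicated lifeline port; the host name and port below are hypothetical:

    import java.net.InetSocketAddress;

    public class LifelineTargetSketch /* extends HAServiceTarget */ {
      private static final int LIFELINE_PORT = 8050; // hypothetical port

      public InetSocketAddress getHealthMonitorAddress() {
        // Returning non-null routes monitorHealth()/getServiceStatus() RPCs
        // here instead of to the main address from getAddress().
        return new InetSocketAddress("nn1.example.com", LIFELINE_PORT);
      }
    }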

[1/2] hadoop git commit: HDFS-9311. Support optional offload of NameNode HA service health checks to a separate RPC server. Contributed by Chris Nauroth.

2015-10-27 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 0377795e0 -> af0f2e27d
  refs/heads/trunk 1f7ecb0c8 -> bf8e45298


HDFS-9311. Support optional offload of NameNode HA service health checks to a separate RPC server. Contributed by Chris Nauroth.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf8e4529
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf8e4529
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf8e4529

Branch: refs/heads/trunk
Commit: bf8e45298218f70e38838152f69c7705d8606bd6
Parents: 1f7ecb0
Author: cnauroth 
Authored: Tue Oct 27 23:07:14 2015 -0700
Committer: cnauroth 
Committed: Tue Oct 27 23:07:14 2015 -0700

--
 .../org/apache/hadoop/ha/HAServiceTarget.java   | 50 ++-
 .../org/apache/hadoop/ha/HealthMonitor.java |  2 +-
 .../org/apache/hadoop/ha/DummyHAService.java| 47 +-
 .../org/apache/hadoop/ha/TestHealthMonitor.java | 10 ++-
 ...HealthMonitorWithDedicatedHealthAddress.java | 37 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  6 ++
 .../java/org/apache/hadoop/hdfs/DFSUtil.java| 23 +
 .../hadoop/hdfs/server/namenode/NameNode.java   | 64 --
 .../hdfs/server/namenode/NameNodeRpcServer.java | 74 +++-
 .../hadoop/hdfs/tools/NNHAServiceTarget.java| 11 +++
 .../src/main/resources/hdfs-default.xml | 39 
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 30 +++
 .../TestNameNodeRespectsBindHostKeys.java   | 50 ++-
 .../server/namenode/ha/TestNNHealthCheck.java   | 93 ++--
 15 files changed, 472 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf8e4529/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
index 56678b4..98aab99 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceTarget.java
@@ -50,6 +50,23 @@ public abstract class HAServiceTarget {
   public abstract InetSocketAddress getAddress();
 
   /**
+   * Returns an optional separate RPC server address for health checks at the
+   * target node.  If defined, then this address is used by the health monitor
+   * for the {@link HAServiceProtocol#monitorHealth()} and
+   * {@link HAServiceProtocol#getServiceStatus()} calls.  This can be useful for
+   * separating out these calls onto separate RPC handlers to protect against
+   * resource exhaustion in the main RPC handler pool.  If null (which is the
+   * default implementation), then all RPC calls go to the address defined by
+   * {@link #getAddress()}.
+   *
+   * @return IPC address of the lifeline RPC server on the target node, or null
+   * if no lifeline RPC server is used
+   */
+  public InetSocketAddress getHealthMonitorAddress() {
+return null;
+  }
+
+  /**
* @return the IPC address of the ZKFC on the target node
*/
   public abstract InetSocketAddress getZKFCAddress();
@@ -73,15 +90,42 @@ public abstract class HAServiceTarget {
*/
   public HAServiceProtocol getProxy(Configuration conf, int timeoutMs)
   throws IOException {
+return getProxyForAddress(conf, timeoutMs, getAddress());
+  }
+
+  /**
+   * Returns a proxy to connect to the target HA service for health monitoring.
+   * If {@link #getHealthMonitorAddress()} is implemented to return a non-null
+   * address, then this proxy will connect to that address.  Otherwise, the
+   * returned proxy defaults to using {@link #getAddress()}, which means this
+   * method's behavior is identical to {@link #getProxy(Configuration, int)}.
+   *
+   * @param conf Configuration
+   * @param timeoutMs timeout in milliseconds
+   * @return a proxy to connect to the target HA service for health monitoring
+   * @throws IOException if there is an error
+   */
+  public HAServiceProtocol getHealthMonitorProxy(Configuration conf,
+  int timeoutMs) throws IOException {
+InetSocketAddress addr = getHealthMonitorAddress();
+if (addr == null) {
+  addr = getAddress();
+}
+return getProxyForAddress(conf, timeoutMs, addr);
+  }
+
+  private HAServiceProtocol getProxyForAddress(Configuration conf,
+  int timeoutMs, InetSocketAddress addr) throws IOException {
 Configuration confCopy = new Configuration(conf);
 // Lower the timeout so we quickly fail to connect
-
confCopy.setInt(CommonConfig
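For reference, a caller such as the health monitor obtains its proxy through the new method and needs no special casing, since the method falls back to the main address; a hedged usage sketch (the timeout value is arbitrary):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.ha.HAServiceProtocol;
    import org.apache.hadoop.ha.HAServiceTarget;

    class HealthCheckSketch {
      static void checkHealth(HAServiceTarget target, Configuration conf)
          throws IOException {
        // Uses getHealthMonitorAddress() if non-null, else getAddress().
        HAServiceProtocol proxy = target.getHealthMonitorProxy(conf, 5000);
        proxy.monitorHealth(); // throws if the service reports unhealthy
      }
    }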

[1/2] hadoop git commit: HADOOP-11685. StorageException complaining " no lease ID" during HBase distributed log splitting. Contributed by Duo Xu.

2015-10-27 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a23f79b92 -> 0377795e0
  refs/heads/trunk 73822de7c -> 1f7ecb0c8


HADOOP-11685. StorageException complaining " no lease ID" during HBase distributed log splitting. Contributed by Duo Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1f7ecb0c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1f7ecb0c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1f7ecb0c

Branch: refs/heads/trunk
Commit: 1f7ecb0c84042783f9fcf3f77d7d889dc58c9ead
Parents: 73822de
Author: cnauroth 
Authored: Tue Oct 27 22:56:22 2015 -0700
Committer: cnauroth 
Committed: Tue Oct 27 22:56:22 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../fs/azure/AzureNativeFileSystemStore.java| 15 +-
 .../hadoop/fs/azure/NativeAzureFileSystem.java  |  5 ++--
 .../fs/azure/TestNativeAzureFileSystemLive.java | 29 +++-
 4 files changed, 48 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f7ecb0c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 016fec8..25a3a60 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -919,6 +919,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12520. Use XInclude in hadoop-azure test configuration to isolate
 Azure Storage account keys for service integration tests. (cnauroth)
 
+HADOOP-11685. StorageException complaining " no lease ID" during HBase
+distributed log splitting (Duo Xu via cnauroth)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f7ecb0c/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
index 8a33742..6412714 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
@@ -1503,10 +1503,23 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
   storePermissionStatus(blob, permissionStatus);
   storeFolderAttribute(blob);
   openOutputStream(blob).close();
-} catch (Exception e) {
+} catch (StorageException e) {
   // Caught exception while attempting upload. Re-throw as an Azure
   // storage exception.
   throw new AzureException(e);
+} catch (URISyntaxException e) {
+  throw new AzureException(e);
+} catch (IOException e) {
+  Throwable t = e.getCause();
+  if (t != null && t instanceof StorageException) {
+StorageException se = (StorageException) t;
+// If we got this exception, the blob should have already been created
+if (!se.getErrorCode().equals("LeaseIdMissing")) {
+  throw new AzureException(e);
+}
+  } else {
+throw new AzureException(e);
+  }
 }
   }
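The cause inspection above is needed because the Azure SDK can surface the StorageException wrapped inside an IOException thrown from the stream close. A condensed sketch of the same idiom; the helper name is ours, not in the patch, and it assumes the SDK's com.microsoft.azure.storage.StorageException:

    import java.io.IOException;
    import com.microsoft.azure.storage.StorageException;

    class LeaseErrorSketch {
      // True when an IOException wraps the "LeaseIdMissing" error code; per
      // the comment in the patch, the blob was already created in that case,
      // so the error can be swallowed instead of rethrown.
      static boolean isLeaseIdMissing(IOException e) {
        Throwable t = e.getCause();
        return t instanceof StorageException
            && "LeaseIdMissing".equals(((StorageException) t).getErrorCode());
      }
    }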
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f7ecb0c/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
index 9305b24..7c5a504 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
@@ -1870,8 +1870,9 @@ public class NativeAzureFileSystem extends FileSystem {
* @throws IOException
*   If login fails in getCurrentUser
*/
-  private PermissionStatus createPermissionStatus(FsPermission permission)
-  throws IOException {
+  @VisibleForTesting
+  PermissionStatus createPermissionStatus(FsPermission permission)
+throws IOException {
 // Create the permission status for this file based on current user
 return new PermissionStatus(
 UserGroupInformation.getCurrentUser().getShortUserName(),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f7ecb0c/hadoop-tools/hadoop-azure/src/test/java/org/apache/ha

[2/2] hadoop git commit: HADOOP-11685. StorageException complaining " no lease ID" during HBase distributed log splitting. Contributed by Duo Xu.

2015-10-27 Thread cnauroth
HADOOP-11685. StorageException complaining " no lease ID" during HBase distributed log splitting. Contributed by Duo Xu.

(cherry picked from commit 1f7ecb0c84042783f9fcf3f77d7d889dc58c9ead)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0377795e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0377795e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0377795e

Branch: refs/heads/branch-2
Commit: 0377795e0697ca629d000e0a2bb05400a65f4aaa
Parents: a23f79b
Author: cnauroth 
Authored: Tue Oct 27 22:56:22 2015 -0700
Committer: cnauroth 
Committed: Tue Oct 27 22:56:33 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../fs/azure/AzureNativeFileSystemStore.java| 15 +-
 .../hadoop/fs/azure/NativeAzureFileSystem.java  |  5 ++--
 .../fs/azure/TestNativeAzureFileSystemLive.java | 29 +++-
 4 files changed, 48 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0377795e/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 0347a1d..9561068 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -322,6 +322,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12520. Use XInclude in hadoop-azure test configuration to isolate
 Azure Storage account keys for service integration tests. (cnauroth)
 
+HADOOP-11685. StorageException complaining " no lease ID" during HBase
+distributed log splitting (Duo Xu via cnauroth)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0377795e/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
index 8a33742..6412714 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
@@ -1503,10 +1503,23 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
   storePermissionStatus(blob, permissionStatus);
   storeFolderAttribute(blob);
   openOutputStream(blob).close();
-} catch (Exception e) {
+} catch (StorageException e) {
   // Caught exception while attempting upload. Re-throw as an Azure
   // storage exception.
   throw new AzureException(e);
+} catch (URISyntaxException e) {
+  throw new AzureException(e);
+} catch (IOException e) {
+  Throwable t = e.getCause();
+  if (t != null && t instanceof StorageException) {
+StorageException se = (StorageException) t;
+// If we got this exception, the blob should have already been created
+if (!se.getErrorCode().equals("LeaseIdMissing")) {
+  throw new AzureException(e);
+}
+  } else {
+throw new AzureException(e);
+  }
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0377795e/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
index 9305b24..7c5a504 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
@@ -1870,8 +1870,9 @@ public class NativeAzureFileSystem extends FileSystem {
* @throws IOException
*   If login fails in getCurrentUser
*/
-  private PermissionStatus createPermissionStatus(FsPermission permission)
-  throws IOException {
+  @VisibleForTesting
+  PermissionStatus createPermissionStatus(FsPermission permission)
+throws IOException {
 // Create the permission status for this file based on current user
 return new PermissionStatus(
 UserGroupInformation.getCurrentUser().getShortUserName(),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0377795e/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemLive.java

[2/2] hadoop git commit: HADOOP-12520. Use XInclude in hadoop-azure test configuration to isolate Azure Storage account keys for service integration tests. Contributed by Chris Nauroth.

2015-10-27 Thread cnauroth
HADOOP-12520. Use XInclude in hadoop-azure test configuration to isolate Azure Storage account keys for service integration tests. Contributed by Chris Nauroth.

(cherry picked from commit 73822de7c38e189f765ff48d15cbe0df7404)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a23f79b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a23f79b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a23f79b9

Branch: refs/heads/branch-2
Commit: a23f79b92c74013525106ab93bc55c97c4a243f2
Parents: e984534
Author: cnauroth 
Authored: Tue Oct 27 22:48:56 2015 -0700
Committer: cnauroth 
Committed: Tue Oct 27 22:49:04 2015 -0700

--
 .gitignore  |  1 +
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../hadoop-azure/src/site/markdown/index.md | 29 
 .../src/test/resources/azure-test.xml   | 14 --
 4 files changed, 33 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a23f79b9/.gitignore
--
diff --git a/.gitignore b/.gitignore
index e115db1..1bfa8df 100644
--- a/.gitignore
+++ b/.gitignore
@@ -23,4 +23,5 @@ 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/tla/yarnregistry.t
 yarnregistry.pdf
 hadoop-tools/hadoop-aws/src/test/resources/auth-keys.xml
 hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml
+hadoop-tools/hadoop-azure/src/test/resources/azure-auth-keys.xml
 patchprocess/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a23f79b9/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index b99fd7b..0347a1d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -319,6 +319,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust.
 (Steve Loughran via jing9)
 
+HADOOP-12520. Use XInclude in hadoop-azure test configuration to isolate
+Azure Storage account keys for service integration tests. (cnauroth)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a23f79b9/hadoop-tools/hadoop-azure/src/site/markdown/index.md
--
diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/index.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
index 0d69ccf..9d0115a 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/index.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
@@ -226,18 +226,25 @@ following failure message:
 
 To resolve this, restart the Azure Emulator.  Ensure it is v3.2 or later.
 
-It's also possible to run tests against a live Azure Storage account by adding
-credentials to `src/test/resources/azure-test.xml` and setting
+It's also possible to run tests against a live Azure Storage account by saving a
+file to `src/test/resources/azure-auth-keys.xml` and setting
 `fs.azure.test.account.name` to the name of the storage account.
 
 For example:
 
-    <property>
-      <name>fs.azure.account.key.youraccount.blob.core.windows.net</name>
-      <value>YOUR ACCESS KEY</value>
-    </property>
-
-    <property>
-      <name>fs.azure.test.account.name</name>
-      <value>youraccount</value>
-    </property>
+    <?xml version="1.0"?>
+    <configuration>
+
+      <property>
+        <name>fs.azure.account.key.youraccount.blob.core.windows.net</name>
+        <value>YOUR ACCESS KEY</value>
+      </property>
+
+      <property>
+        <name>fs.azure.test.account.name</name>
+        <value>youraccount</value>
+      </property>
+
+    </configuration>
+
+DO NOT ADD azure-auth-keys.xml TO REVISION CONTROL.  The keys to your Azure
+Storage account are a secret and must not be shared.
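The include-by-reference works because Hadoop's Configuration loads its XML resources with an XInclude-aware parser, so azure-test.xml can pull in azure-auth-keys.xml at parse time. A minimal standalone sketch of that mechanism (plain JAXP, not Hadoop code):

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    class XIncludeSketch {
      static Document parse(java.io.File file) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);   // required for XInclude processing
        dbf.setXIncludeAware(true);    // expands xi:include during parsing
        return dbf.newDocumentBuilder().parse(file);
      }
    }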

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a23f79b9/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
--
diff --git a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml 
b/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
index 75b466d..00611fc 100644
--- a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
+++ b/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
@@ -15,7 +15,7 @@
 
 
 <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
- 
+
   
 
-  
+  -->
+
+  
+  
+  
+  
+  <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="azure-auth-keys.xml">
+
+  
+
   
   

[1/2] hadoop git commit: HADOOP-12520. Use XInclude in hadoop-azure test configuration to isolate Azure Storage account keys for service integration tests. Contributed by Chris Nauroth.

2015-10-27 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 e98453431 -> a23f79b92
  refs/heads/trunk eca51b13a -> 73822de7c


HADOOP-12520. Use XInclude in hadoop-azure test configuration to isolate Azure Storage account keys for service integration tests. Contributed by Chris Nauroth.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/73822de7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/73822de7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/73822de7

Branch: refs/heads/trunk
Commit: 73822de7c38e189f765ff48d15cbe0df7404
Parents: eca51b1
Author: cnauroth 
Authored: Tue Oct 27 22:48:56 2015 -0700
Committer: cnauroth 
Committed: Tue Oct 27 22:48:56 2015 -0700

--
 .gitignore  |  1 +
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../hadoop-azure/src/site/markdown/index.md | 29 
 .../src/test/resources/azure-test.xml   | 14 --
 4 files changed, 33 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/73822de7/.gitignore
--
diff --git a/.gitignore b/.gitignore
index cde198e..998287d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -24,4 +24,5 @@ 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/tla/yarnregistry.t
 yarnregistry.pdf
 hadoop-tools/hadoop-aws/src/test/resources/auth-keys.xml
 hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml
+hadoop-tools/hadoop-azure/src/test/resources/azure-auth-keys.xml
 patchprocess/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73822de7/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index e1addb2..016fec8 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -916,6 +916,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust.
 (Steve Loughran via jing9)
 
+HADOOP-12520. Use XInclude in hadoop-azure test configuration to isolate
+Azure Storage account keys for service integration tests. (cnauroth)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73822de7/hadoop-tools/hadoop-azure/src/site/markdown/index.md
--
diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/index.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
index 0d69ccf..9d0115a 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/index.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
@@ -226,18 +226,25 @@ following failure message:
 
 To resolve this, restart the Azure Emulator.  Ensure it is v3.2 or later.
 
-It's also possible to run tests against a live Azure Storage account by adding
-credentials to `src/test/resources/azure-test.xml` and setting
+It's also possible to run tests against a live Azure Storage account by saving a
+file to `src/test/resources/azure-auth-keys.xml` and setting
 `fs.azure.test.account.name` to the name of the storage account.
 
 For example:
 
-    <property>
-      <name>fs.azure.account.key.youraccount.blob.core.windows.net</name>
-      <value>YOUR ACCESS KEY</value>
-    </property>
-
-    <property>
-      <name>fs.azure.test.account.name</name>
-      <value>youraccount</value>
-    </property>
+    <?xml version="1.0"?>
+    <configuration>
+
+      <property>
+        <name>fs.azure.account.key.youraccount.blob.core.windows.net</name>
+        <value>YOUR ACCESS KEY</value>
+      </property>
+
+      <property>
+        <name>fs.azure.test.account.name</name>
+        <value>youraccount</value>
+      </property>
+
+    </configuration>
+
+DO NOT ADD azure-auth-keys.xml TO REVISION CONTROL.  The keys to your Azure
+Storage account are a secret and must not be shared.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73822de7/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
--
diff --git a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml 
b/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
index 75b466d..00611fc 100644
--- a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
+++ b/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml
@@ -15,7 +15,7 @@
 
 
 <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
- 
+
   
 
-  
+  -->
+
+  
+  
+  
+  
+  <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="azure-auth-keys.xml">
+
+  
+
   
   

hadoop git commit: Add HDFS-9317 to CHANGES.txt.

2015-10-27 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7eb0daefc -> e98453431


Add HDFS-9317 to CHANGES.txt.

(cherry picked from commit eca51b13af1951b18ab7a5a92aaa0b9fe07d9ef8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e9845343
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e9845343
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e9845343

Branch: refs/heads/branch-2
Commit: e98453431442c419d957deef4b8efa66ea52a045
Parents: 7eb0dae
Author: Akira Ajisaka 
Authored: Wed Oct 28 13:17:41 2015 +0900
Committer: Akira Ajisaka 
Committed: Wed Oct 28 13:19:48 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9845343/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 0118dbe..3b72524 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1401,6 +1401,9 @@ Release 2.7.2 - UNRELEASED
 HDFS-9305. Delayed heartbeat processing causes storm of subsequent
 heartbeats. (Arpit Agarwal)
 
+HDFS-9317. Document fsck -blockId and -storagepolicy options in branch-2.7.
+(aajisaka)
+
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES



hadoop git commit: Add HDFS-9317 to CHANGES.txt.

2015-10-27 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 68ce93c32 -> eca51b13a


Add HDFS-9317 to CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eca51b13
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eca51b13
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eca51b13

Branch: refs/heads/trunk
Commit: eca51b13af1951b18ab7a5a92aaa0b9fe07d9ef8
Parents: 68ce93c
Author: Akira Ajisaka 
Authored: Wed Oct 28 13:17:41 2015 +0900
Committer: Akira Ajisaka 
Committed: Wed Oct 28 13:17:41 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eca51b13/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index fd28c02..ce12105 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2233,6 +2233,9 @@ Release 2.7.2 - UNRELEASED
 HDFS-9305. Delayed heartbeat processing causes storm of subsequent
 heartbeats. (Arpit Agarwal)
 
+HDFS-9317. Document fsck -blockId and -storagepolicy options in branch-2.7.
+(aajisaka)
+
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES



hadoop git commit: HDFS-9317. Document fsck -blockId and -storagepolicy options in branch-2.7. (aajisaka)

2015-10-27 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 59a207213 -> ce01a5951


HDFS-9317. Document fsck -blockId and -storagepolicy options in branch-2.7. (aajisaka)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce01a595
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce01a595
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce01a595

Branch: refs/heads/branch-2.7
Commit: ce01a5951bc8f4ed933523da6a044b4194fc5035
Parents: 59a2072
Author: Akira Ajisaka 
Authored: Wed Oct 28 13:14:58 2015 +0900
Committer: Akira Ajisaka 
Committed: Wed Oct 28 13:14:58 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt| 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java  | 6 --
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  | 3 +++
 3 files changed, 10 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce01a595/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9b28d3b..e63532d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -84,6 +84,9 @@ Release 2.7.2 - UNRELEASED
 HDFS-9305. Delayed heartbeat processing causes storm of subsequent
 heartbeats. (Arpit Agarwal)
 
+HDFS-9317. Document fsck -blockId and -storagepolicy options in branch-2.7.
+(aajisaka)
+
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce01a595/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
index 900d8ba..e244406 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
@@ -78,7 +78,9 @@ public class DFSck extends Configured implements Tool {
   private static final String USAGE = "Usage: hdfs fsck <path> "
   + "[-list-corruptfileblocks | "
   + "[-move | -delete | -openforwrite] "
-  + "[-files [-blocks [-locations | -racks]]]]\n"
+  + "[-files [-blocks [-locations | -racks]]]] "
+  + "[-includeSnapshots] "
+  + "[-storagepolicies] [-blockId <blk_Id>]\n"
   + "\t<path>\tstart checking from this path\n"
   + "\t-move\tmove corrupted files to /lost+found\n"
   + "\t-delete\tdelete corrupted files\n"
@@ -92,7 +94,7 @@ public class DFSck extends Configured implements Tool {
   + "\t-blocks\tprint out block report\n"
   + "\t-locations\tprint out locations for every block\n"
   + "\t-racks\tprint out network topology for data-node locations\n"
-  + "\t-storagepolicies\tprint out storage policy summary for the blocks\n\n"
+  + "\t-storagepolicies\tprint out storage policy summary for the blocks\n"
   + "\t-blockId\tprint out which file this blockId belongs to, locations"
   + " (nodes, racks) of this block, and other diagnostics info"
   + " (under replicated, corrupted or not, etc)\n\n"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce01a595/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index a2622af..5f77694 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -101,6 +101,7 @@ Usage:
   [-move | -delete | -openforwrite]
   [-files [-blocks [-locations | -racks]]]
   [-includeSnapshots]
+  [-storagepolicies] [-blockId <blk_Id>]
 
 | COMMAND\_OPTION | Description |
 |: |: |
@@ -114,6 +115,8 @@ Usage:
| `-list-corruptfileblocks` | Print out list of missing blocks and files they belong to. |
 | `-move` | Move corrupted files to /lost+found. |
 | `-openforwrite` | Print out files opened for write. |
+| `-storagepolicies` | Print out storage policy summary for the blocks. |
+| `-blockId` | Print out information about the block. |
 
Runs the HDFS filesystem checking utility. See [fsck](./HdfsUserGuide.html#fsck) for more info.
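A usage sketch for the newly documented options; the path and block ID are hypothetical:

    hdfs fsck / -storagepolicies
    hdfs fsck / -blockId blk_1073741825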
 



hadoop git commit: YARN-4283 Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-27 Thread eclark
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-11890 fa64e68fd -> 009e67021


YARN-4283 Avoid unsafe split and append on fields that might be IPv6 literals


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/009e6702
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/009e6702
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/009e6702

Branch: refs/heads/HADOOP-11890
Commit: 009e67021f5b4c20fc70463159a3a3fdce8c7c49
Parents: fa64e68
Author: Elliott Clark 
Authored: Tue Oct 27 18:41:02 2015 -0700
Committer: Elliott Clark 
Committed: Tue Oct 27 18:41:02 2015 -0700

--
 .../apache/hadoop/yarn/util/ConverterUtils.java | 20 ++---
 .../hadoop/yarn/webapp/util/WebAppUtils.java| 20 ++---
 .../hadoop/yarn/conf/TestYarnConfiguration.java | 82 
 .../hadoop/yarn/util/TestConverterUtils.java| 14 +++-
 .../apache/hadoop/yarn/lib/TestZKClient.java| 17 ++--
 .../containermanager/ContainerManagerImpl.java  |  3 +-
 .../server/resourcemanager/ResourceManager.java |  6 +-
 .../yarn/server/resourcemanager/MockNM.java |  7 +-
 .../yarn/server/webproxy/WebAppProxy.java   |  7 +-
 .../webproxy/amfilter/AmFilterInitializer.java  |  6 +-
 .../server/webproxy/TestWebAppProxyServlet.java |  8 +-
 11 files changed, 116 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/009e6702/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
index e9674cf..34cf4d2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.yarn.api.records.URL;
 import org.apache.hadoop.yarn.factories.RecordFactory;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
 
+import com.google.common.net.HostAndPort;
 
 /**
  * This class contains a set of utilities which help converting data structures
@@ -152,26 +153,27 @@ public class ConverterUtils {
   public static String toString(ContainerId cId) {
 return cId == null ? null : cId.toString();
   }
-  
+
  public static NodeId toNodeIdWithDefaultPort(String nodeIdStr) {
-    if (nodeIdStr.indexOf(":") < 0) {
-      return toNodeId(nodeIdStr + ":0");
-    }
-    return toNodeId(nodeIdStr);
+    HostAndPort hp = HostAndPort.fromString(nodeIdStr);
+    hp = hp.withDefaultPort(0);
+    return toNodeId(hp.toString());
   }
 
   public static NodeId toNodeId(String nodeIdStr) {
-    String[] parts = nodeIdStr.split(":");
-    if (parts.length != 2) {
+    HostAndPort hp = HostAndPort.fromString(nodeIdStr);
+    if (!hp.hasPort()) {
       throw new IllegalArgumentException("Invalid NodeId [" + nodeIdStr
           + "]. Expected host:port");
     }
     try {
+      String hostPortStr = hp.toString();
+      String host = hostPortStr.substring(0, hostPortStr.lastIndexOf(":"));
       NodeId nodeId =
-          NodeId.newInstance(parts[0].trim(), Integer.parseInt(parts[1]));
+          NodeId.newInstance(host, hp.getPort());
       return nodeId;
     } catch (NumberFormatException e) {
-      throw new IllegalArgumentException("Invalid port: " + parts[1], e);
+      throw new IllegalArgumentException("Invalid port: " + hp.getPort(), e);
     }
   }
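A standalone sketch of why HostAndPort is safer than a bare split(":") here; the node IDs are made up:

    import com.google.common.net.HostAndPort;

    public class NodeIdParseSketch {
      public static void main(String[] args) {
        // One colon: a naive split(":") would happen to work.
        HostAndPort v4 = HostAndPort.fromString("worker1.example.com:45454");
        // Bracketed IPv6 literal: split(":") would shatter the address,
        // while HostAndPort parses it correctly.
        HostAndPort v6 = HostAndPort.fromString("[2001:db8::1]:45454");
        System.out.println(v4.getHostText() + " -> " + v4.getPort());
        System.out.println(v6.getHostText() + " -> " + v6.getPort());
      }
    }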
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/009e6702/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
index 459c110..1b05f4e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
@@ -37,6 +37,8 @@ import org.apache.hadoop.yarn.conf.HAUtil;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.util.RMHAUtils;
 
+import com.google.common.net.HostAndPort;
+
 @Private
 @Evolving
 public class WebAppUtils {
@@ -51,15 +53,13 @@ publi

[35/50] hadoop git commit: YARN-2729. Support script based NodeLabelsProvider Interface in Distributed Node Label Configuration Setup. (Naganarasimha G R via rohithsharmaks)

2015-10-27 Thread jing9
YARN-2729. Support script based NodeLabelsProvider Interface in Distributed Node Label Configuration Setup. (Naganarasimha G R via rohithsharmaks)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5acdde47
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5acdde47
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5acdde47

Branch: refs/heads/HDFS-8966
Commit: 5acdde4744c131e05db7b4b5f7d684fed7608b99
Parents: 092883b
Author: Rohith Sharma K S 
Authored: Mon Oct 26 15:42:42 2015 +0530
Committer: Rohith Sharma K S 
Committed: Mon Oct 26 15:42:42 2015 +0530

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  20 +-
 .../nodelabels/CommonNodeLabelsManager.java |   4 +-
 .../src/main/resources/yarn-default.xml |  28 ++-
 .../yarn/server/nodemanager/NodeManager.java|  14 +-
 .../nodemanager/NodeStatusUpdaterImpl.java  |   7 +-
 .../nodelabels/AbstractNodeLabelsProvider.java  |  30 +--
 .../ConfigurationNodeLabelsProvider.java|  18 +-
 .../nodelabels/NodeLabelsProvider.java  |   7 +-
 .../ScriptBasedNodeLabelsProvider.java  | 190 +
 .../TestNodeStatusUpdaterForLabels.java |   6 +-
 .../TestConfigurationNodeLabelsProvider.java|  34 +--
 .../TestScriptBasedNodeLabelsProvider.java  | 209 +++
 .../resourcemanager/ResourceTrackerService.java |  10 +-
 14 files changed, 511 insertions(+), 69 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5acdde47/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index ba25adf..8b489e1 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -241,6 +241,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3739. Add reservation system recovery to RM recovery process.
 (Subru Krishnan via adhoot)
 
+YARN-2729. Support script based NodeLabelsProvider Interface in Distributed Node Label
+Configuration Setup. (Naganarasimha G R via rohithsharmaks)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5acdde47/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index dafd311..18e6082 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2112,6 +2112,12 @@ public class YarnConfiguration extends Configuration {
 NODELABEL_CONFIGURATION_TYPE, DEFAULT_NODELABEL_CONFIGURATION_TYPE));
   }
 
+  @Private
+  public static boolean areNodeLabelsEnabled(
+  Configuration conf) {
+return conf.getBoolean(NODE_LABELS_ENABLED, DEFAULT_NODE_LABELS_ENABLED);
+  }
+
   private static final String NM_NODE_LABELS_PREFIX = NM_PREFIX
   + "node-labels.";
 
@@ -2120,6 +2126,7 @@ public class YarnConfiguration extends Configuration {
 
   // whitelist names for the yarn.nodemanager.node-labels.provider
   public static final String CONFIG_NODE_LABELS_PROVIDER = "config";
+  public static final String SCRIPT_NODE_LABELS_PROVIDER = "script";
 
   private static final String NM_NODE_LABELS_PROVIDER_PREFIX =
   NM_NODE_LABELS_PREFIX + "provider.";
@@ -2145,8 +2152,8 @@ public class YarnConfiguration extends Configuration {
   public static final long DEFAULT_NM_NODE_LABELS_PROVIDER_FETCH_TIMEOUT_MS =
   DEFAULT_NM_NODE_LABELS_PROVIDER_FETCH_INTERVAL_MS * 2;
 
-  public static final String NM_PROVIDER_CONFIGURED_NODE_LABELS =
-  NM_NODE_LABELS_PROVIDER_PREFIX + "configured-node-labels";
+  public static final String NM_PROVIDER_CONFIGURED_NODE_PARTITION =
+  NM_NODE_LABELS_PROVIDER_PREFIX + "configured-node-partition";
 
   private static final String RM_NODE_LABELS_PREFIX = RM_PREFIX
   + "node-labels.";
@@ -2174,6 +2181,15 @@ public class YarnConfiguration extends Configuration {
   public static final float DEFAULT_AM_BLACKLISTING_DISABLE_THRESHOLD = 0.8f;
 
 
+  private static final String NM_SCRIPT_BASED_NODE_LABELS_PROVIDER_PREFIX =
+  NM_NODE_LABELS_PROVIDER_PREFIX + "script.";
+
+  public static final String NM_SCRIPT_BASED_NODE_LABELS_PROVIDER_PATH =
+  NM_SC
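A hedged configuration sketch: the provider key and the "script" whitelist value follow from the prefixes in the diff above, but the exact script-path constant is truncated in this digest, so both the ".script.path" suffix and the script location are assumptions:

    import org.apache.hadoop.conf.Configuration;

    class NodeLabelScriptConfigSketch {
      static Configuration configure() {
        Configuration conf = new Configuration();
        // Select the new script-based provider (whitelist value "script").
        conf.set("yarn.nodemanager.node-labels.provider", "script");
        // Assumed key name; NM_SCRIPT_BASED_NODE_LABELS_PROVIDER_PATH is
        // cut off above. The script path is hypothetical.
        conf.set("yarn.nodemanager.node-labels.provider.script.path",
            "/etc/hadoop/node-labels.sh");
        return conf;
      }
    }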

[05/50] hadoop git commit: YARN-3739. Add reservation system recovery to RM recovery process. Contributed by Subru Krishnan.

2015-10-27 Thread jing9
YARN-3739. Add reservation system recovery to RM recovery process. Contributed by Subru Krishnan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2798723a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2798723a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2798723a

Branch: refs/heads/HDFS-8966
Commit: 2798723a5443d04455b9d79c48d61f435ab52267
Parents: 381610d
Author: Anubhav Dhoot 
Authored: Thu Oct 22 06:36:58 2015 -0700
Committer: Anubhav Dhoot 
Committed: Thu Oct 22 06:51:00 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../server/resourcemanager/ClientRMService.java |   2 +-
 .../server/resourcemanager/ResourceManager.java |   4 +
 .../reservation/AbstractReservationSystem.java  |  91 -
 .../AbstractSchedulerPlanFollower.java  |  76 ++--
 .../CapacitySchedulerPlanFollower.java  |  16 -
 .../reservation/FairSchedulerPlanFollower.java  |  15 -
 .../reservation/InMemoryPlan.java   |  35 +-
 .../resourcemanager/reservation/PlanEdit.java   |   6 +-
 .../reservation/PlanFollower.java   |   4 +-
 .../reservation/ReservationSystem.java  |   7 +-
 .../reservation/planning/PlanningAlgorithm.java |   2 +-
 .../TestReservationSystemWithRMHA.java  | 360 +--
 .../reservation/ReservationSystemTestUtil.java  |   1 -
 .../reservation/TestCapacityOverTimePolicy.java |  29 +-
 .../reservation/TestInMemoryPlan.java   |  16 +-
 .../reservation/TestNoOverCommitPolicy.java |  19 +-
 .../TestSchedulerPlanFollowerBase.java  |  17 +-
 .../planning/TestAlignedPlanner.java|   7 +-
 .../planning/TestGreedyReservationAgent.java|  17 +-
 .../planning/TestSimpleCapacityReplanner.java   |  14 +-
 21 files changed, 556 insertions(+), 185 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2798723a/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index ae26386..79df1ce 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -238,6 +238,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4262. Allow whitelisted users to run privileged docker containers.
 (Sidharta Seethana via vvasudev)
 
+YARN-3739. Add reservation system recovery to RM recovery process.
+(Subru Krishnan via adhoot)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2798723a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
index 4a02580..812267d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
@@ -1363,7 +1363,7 @@ public class ClientRMService extends AbstractService implements
   .format(
   "Reservation {0} is within threshold so attempting to create synchronously.",
   reservationId));
-  reservationSystem.synchronizePlan(planName);
+  reservationSystem.synchronizePlan(planName, true);
   LOG.info(MessageFormat.format("Created reservation {0} synchronously.",
   reservationId));
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2798723a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
index b38f188..88fb1cf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemana

[41/50] hadoop git commit: HDFS-9268. fuse_dfs chown crashes when uid is passed as -1 (cmccabe)

2015-10-27 Thread jing9
HDFS-9268. fuse_dfs chown crashes when uid is passed as -1 (cmccabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2f1eb2bc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2f1eb2bc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2f1eb2bc

Branch: refs/heads/HDFS-8966
Commit: 2f1eb2bceb1df5f27649a514246b38b9ccf60cba
Parents: 5e718de
Author: Colin Patrick Mccabe 
Authored: Mon Oct 26 13:33:22 2015 -0700
Committer: Colin Patrick Mccabe 
Committed: Mon Oct 26 13:33:22 2015 -0700

--
 .../src/main/native/fuse-dfs/fuse_impls_chown.c| 6 +++---
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt| 2 ++
 2 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f1eb2bc/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_chown.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_chown.c
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_chown.c
index 2a6b61c..7fc9b87 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_chown.c
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_chown.c
@@ -61,10 +61,10 @@ int dfs_chown(const char *path, uid_t uid, gid_t gid)
 }
   }
 
-  ret = fuseConnect(user, fuse_get_context(), &conn);
+  ret = fuseConnectAsThreadUid(&conn);
   if (ret) {
-fprintf(stderr, "fuseConnect: failed to open a libhdfs connection!  "
-"error %d.\n", ret);
+fprintf(stderr, "fuseConnectAsThreadUid: failed to open a libhdfs "
+"connection!  error %d.\n", ret);
 ret = -EIO;
 goto cleanup;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f1eb2bc/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c748d29..3b10893 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2142,6 +2142,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-9304. Add HdfsClientConfigKeys class to TestHdfsConfigFields
 #configurationClasses. (Mingliang Liu via wheat9)
 
+HDFS-9268. fuse_dfs chown crashes when uid is passed as -1 (cmccabe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES



[07/50] hadoop git commit: HDFS-9280. Document NFS gateway export point parameter. Contributed by Xiao Chen.

2015-10-27 Thread jing9
HDFS-9280. Document NFS gateway export point parameter. Contributed by Xiao Chen.

Change-Id: I1cea610c31301d793db3c9ca4dae86d0e5d2d64b


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aea26bf4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aea26bf4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aea26bf4

Branch: refs/heads/HDFS-8966
Commit: aea26bf4dd12316d1a7b15924607165b581a12ab
Parents: 4c0bae2
Author: Zhe Zhang 
Authored: Thu Oct 22 11:38:04 2015 -0700
Committer: Zhe Zhang 
Committed: Thu Oct 22 11:38:04 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 2 ++
 .../hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md  | 8 
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aea26bf4/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 316ee3b..819534e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1591,6 +1591,8 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-9253. Refactor tests of libhdfs into a directory. (wheat9)
 
+HDFS-9280. Document NFS gateway export point parameter. (Xiao Chen via zhz)
+
   BUG FIXES
 
 HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aea26bf4/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
index da7aa6f..7dc2fe4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
@@ -195,6 +195,14 @@ It's strongly recommended for the users to update a few 
configuration properties
 
 log4j.logger.org.apache.hadoop.oncrpc=DEBUG
 
+*   Export point. One can specify the NFS export point of HDFS. Exactly one
+    export point is supported. Full path is required when configuring the
+    export point. By default, the export point is the root directory "/".
+
+        <property>
+          <name>nfs.export.point</name>
+          <value>/</value>
+        </property>
+
 Start and stop NFS gateway service
 --
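
As an aside, a minimal Java sketch (not part of the patch) of how a gateway-side process could resolve this setting, falling back to the documented default of "/":

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // "nfs.export.point" is the property documented above; "/" is its default.
    String exportPoint = conf.get("nfs.export.point", "/");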
 



[25/50] hadoop git commit: HDFS-9297. Update TestBlockMissingException to use corruptBlockOnDataNodesByDeletingBlockFile(). (Tony Wu via lei)

2015-10-27 Thread jing9
HDFS-9297. Update TestBlockMissingException to use 
corruptBlockOnDataNodesByDeletingBlockFile(). (Tony Wu via lei)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5679e46b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5679e46b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5679e46b

Branch: refs/heads/HDFS-8966
Commit: 5679e46b7f867f8f7f8195c86c37e3db7b23d7d7
Parents: 15eb84b
Author: Lei Xu 
Authored: Fri Oct 23 17:42:23 2015 -0700
Committer: Lei Xu 
Committed: Fri Oct 23 17:42:23 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  3 +++
 .../hadoop/hdfs/TestBlockMissingException.java   | 15 ++-
 2 files changed, 5 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5679e46b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1641884..f5f13cf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1602,6 +1602,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-9280. Document NFS gateway export point parameter. (Xiao Chen via zhz)
 
+HDFS-9297. Update TestBlockMissingException to use 
corruptBlockOnDataNodesByDeletingBlockFile().
+(Tony Wu via lei)
+
   BUG FIXES
 
 HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5679e46b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
index a3104a0..7287b5c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
@@ -67,7 +67,8 @@ public class TestBlockMissingException {
   0, numBlocks * blockSize);
   // remove block of file
   LOG.info("Remove first block of file");
-  corruptBlock(file1, locations.get(0).getBlock());
+  dfs.corruptBlockOnDataNodesByDeletingBlockFile(
+  locations.get(0).getBlock());
 
   // validate that the system throws BlockMissingException
   validateFile(fileSys, file1);
@@ -118,16 +119,4 @@ public class TestBlockMissingException {
 stm.close();
 assertTrue("Expected BlockMissingException ", gotException);
   }
-
-  //
-  // Corrupt specified block of file
-  //
-  void corruptBlock(Path file, ExtendedBlock blk) {
-// Now deliberately remove/truncate data blocks from the file.
-File[] blockFiles = dfs.getAllBlockFiles(blk);
-for (File f : blockFiles) {
-  f.delete();
-  LOG.info("Deleted block " + f);
-}
-  }
 }
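
For readers wanting to reuse the new helper, a hedged sketch of the pattern, assuming a running MiniDFSCluster field named 'dfs' and a FileSystem 'fileSys' as in the test above:

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DFSTestUtil;
    import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

    Path file1 = new Path("/user/test/file1");
    DFSTestUtil.createFile(fileSys, file1, 4096L, (short) 3, 0L);
    ExtendedBlock first = DFSTestUtil.getFirstBlock(fileSys, file1);
    // Delete the backing block files on every DataNode holding a replica.
    dfs.corruptBlockOnDataNodesByDeletingBlockFile(first);
    // A subsequent read of file1 should now surface BlockMissingException.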



[01/50] hadoop git commit: HDFS-9273. ACLs on root directory may be lost after NN restart. Contributed by Xiao Chen.

2015-10-27 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-8966 b645d67a9 -> 1ecc33005 (forced update)


HDFS-9273. ACLs on root directory may be lost after NN restart. Contributed by 
Xiao Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b525a9c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b525a9c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b525a9c

Branch: refs/heads/HDFS-8966
Commit: 1b525a9c32fabd8919c80717a58afbfa7fdce27e
Parents: d1cdce7
Author: cnauroth 
Authored: Wed Oct 21 16:39:02 2015 -0700
Committer: cnauroth 
Committed: Wed Oct 21 16:39:02 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/namenode/FSImageFormatPBINode.java   |  4 +++
 .../server/namenode/TestFSImageWithAcl.java | 29 
 3 files changed, 36 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b525a9c/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 4565f8a..949dc80 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2107,6 +2107,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9274. Default value of 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec
 should be consistent. (Yi Liu via zhz)
 
+HDFS-9273. ACLs on root directory may be lost after NN restart.
+(Xiao Chen via cnauroth)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b525a9c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
index 34b28e4..cf7895b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
@@ -418,6 +418,10 @@ public final class FSImageFormatPBINode {
   }
   dir.rootDir.cloneModificationTime(root);
   dir.rootDir.clonePermissionStatus(root);
+  final AclFeature af = root.getFeature(AclFeature.class);
+  if (af != null) {
+dir.rootDir.addAclFeature(af);
+  }
   // root dir supports having extended attributes according to POSIX
   final XAttrFeature f = root.getXAttrFeature();
   if (f != null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b525a9c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
index bd88478..690fec6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
@@ -206,6 +206,35 @@ public class TestFSImageWithAcl {
 doTestDefaultAclNewChildren(false);
   }
 
+  @Test
+  public void testRootACLAfterLoadingFsImage() throws IOException {
+DistributedFileSystem fs = cluster.getFileSystem();
+Path rootdir = new Path("/");
+AclEntry e1 = new AclEntry.Builder().setName("foo")
+.setPermission(ALL).setScope(ACCESS).setType(GROUP).build();
+AclEntry e2 = new AclEntry.Builder().setName("bar")
+.setPermission(READ).setScope(ACCESS).setType(GROUP).build();
+fs.modifyAclEntries(rootdir, Lists.newArrayList(e1, e2));
+
+AclStatus s = cluster.getNamesystem().getAclStatus(rootdir.toString());
+AclEntry[] returned =
+Lists.newArrayList(s.getEntries()).toArray(new AclEntry[0]);
+Assert.assertArrayEquals(
+new AclEntry[] { aclEntry(ACCESS, GROUP, READ_EXECUTE),
+aclEntry(ACCESS, GROUP, "bar", READ),
+aclEntry(ACCESS, GROUP, "foo", ALL) }, returned);
+
+// restart - hence save and load from fsimage
+restart(fs, true);
+
+s = cluster.getNamesystem().getAclStatus(rootdir.toString());
+returned = Lists.newArrayList(s.getEntries()).toArray(new AclEntry[0]);
+Assert.assertArrayEquals(
+new AclEntry[] { aclEnt

[27/50] hadoop git commit: YARN-4294. [JDK8] Fix javadoc errors caused by wrong reference and illegal tag. (aajisaka)

2015-10-27 Thread jing9
YARN-4294. [JDK8] Fix javadoc errors caused by wrong reference and illegal tag. 
(aajisaka)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7781fe1b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7781fe1b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7781fe1b

Branch: refs/heads/HDFS-8966
Commit: 7781fe1b9e89488b37217f81c246c1398407e47a
Parents: 86c9222
Author: Akira Ajisaka 
Authored: Sat Oct 24 11:54:42 2015 +0900
Committer: Akira Ajisaka 
Committed: Sat Oct 24 11:54:42 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 3 +++
 .../apache/hadoop/yarn/api/ApplicationClientProtocol.java| 3 +--
 .../api/protocolrecords/FailApplicationAttemptRequest.java   | 1 +
 .../api/protocolrecords/FailApplicationAttemptResponse.java  | 1 +
 .../nodelabels/RMDelegatedNodeLabelsUpdater.java | 8 
 .../nodelabels/RMNodeLabelsMappingProvider.java  | 3 ++-
 6 files changed, 12 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7781fe1b/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 7e30ac9..99cad92 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -983,6 +983,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-4256. YARN fair scheduler vcores with decimal values. (Jun Gong via 
zxu)
 
+YARN-4294. [JDK8] Fix javadoc errors caused by wrong reference and illegal
+tag. (aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7781fe1b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
index 78dc196..1f0e777 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
@@ -59,16 +59,15 @@ import 
org.apache.hadoop.yarn.api.protocolrecords.SignalContainerRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.SignalContainerResponse;
 import org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationResponse;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
 import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
 import org.apache.hadoop.yarn.api.records.NodeReport;
 import org.apache.hadoop.yarn.api.records.ReservationId;
 import org.apache.hadoop.yarn.api.records.Resource;
-import org.apache.hadoop.yarn.api.records.ResourceRequest;
 import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
 import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;
-import org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 
 /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7781fe1b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FailApplicationAttemptRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FailApplicationAttemptRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FailApplicationAttemptRequest.java
index 2d3c7a4..270020a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FailApplicationAttemptRequest.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FailApplicationAttemptRequest.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.api.protocolrecords;
 
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
+import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.util.Records;
 

[43/50] hadoop git commit: HDFS-9284. fsck command should not print exception trace when file not found. Contributed by Jagadesh Kiran N.

2015-10-27 Thread jing9
HDFS-9284. fsck command should not print exception trace when file not found. 
Contributed by Jagadesh Kiran N.
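
The change in miniature, as an illustrative snippet rather than the patch itself: print a one-line summary for an expected condition instead of a full stack trace.

    try {
      throw new java.io.FileNotFoundException("File does not exist: /no/such/path");
    } catch (java.io.IOException ioe) {
      // Before: a multi-line trace via StringUtils.stringifyException(ioe).
      // After: a single readable line for an expected error condition.
      System.err.println("FileSystem is inaccessible due to:\n" + ioe.toString());
    }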


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/677a936b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/677a936b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/677a936b

Branch: refs/heads/HDFS-8966
Commit: 677a936bf759515ac94d9accb9bf5364f688d051
Parents: a01a209
Author: Andrew Wang 
Authored: Mon Oct 26 15:15:03 2015 -0700
Committer: Andrew Wang 
Committed: Mon Oct 26 15:15:03 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java | 3 +--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/677a936b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 3b10893..397cb19 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1573,6 +1573,9 @@ Release 2.8.0 - UNRELEASED
 BlockInfoUnderConstruction#setGenerationStampAndVerifyReplicas.
 (Wei-Chiu Chuang via Yongjun Zhang)
 
+HDFS-9284. fsck command should not print exception trace when file not
+found. (Jagadesh Kiran N via wang)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/677a936b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
index 6bb6603..ab689ff 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
@@ -41,7 +41,6 @@ import org.apache.hadoop.hdfs.server.namenode.NamenodeFsck;
 import org.apache.hadoop.hdfs.web.URLConnectionFactory;
 import org.apache.hadoop.security.UserGroupInformation;
 import 
org.apache.hadoop.security.authentication.client.AuthenticationException;
-import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 
@@ -317,7 +316,7 @@ public class DFSck extends Configured implements Tool {
   namenodeAddress = getCurrentNamenodeAddress(dirpath);
 } catch (IOException ioe) {
   System.err.println("FileSystem is inaccessible due to:\n"
-  + StringUtils.stringifyException(ioe));
+  + ioe.toString());
 }
 
 if (namenodeAddress == null) {



[50/50] hadoop git commit: HDFS-8967. Create a BlockManagerLock class to represent the lock used in the BlockManager. Contributed by Haohui Mai.

2015-10-27 Thread jing9
HDFS-8967. Create a BlockManagerLock class to represent the lock used in the 
BlockManager. Contributed by Haohui Mai.
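
A minimal sketch of the wrapper idea, assuming a single coarse ReentrantReadWriteLock; the patch's actual class delegates to the namesystem lock rather than owning one:

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class BlockManagerLockSketch {
      private final ReentrantReadWriteLock coarseLock = new ReentrantReadWriteLock();

      void readLock()    { coarseLock.readLock().lock(); }
      void readUnlock()  { coarseLock.readLock().unlock(); }
      void writeLock()   { coarseLock.writeLock().lock(); }
      void writeUnlock() { coarseLock.writeLock().unlock(); }

      // Lets callers assert locking invariants, as metaSave() does below.
      boolean hasWriteLock() { return coarseLock.isWriteLockedByCurrentThread(); }
    }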


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1ecc3300
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1ecc3300
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1ecc3300

Branch: refs/heads/HDFS-8966
Commit: 1ecc3300552f3f54192fa7e8cbd647cc8b589fe5
Parents: 56e4f62
Author: Jing Zhao 
Authored: Tue Oct 6 23:35:24 2015 -0700
Committer: Jing Zhao 
Committed: Tue Oct 27 14:46:09 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 124 ---
 .../blockmanagement/BlockManagerLock.java   |  50 
 .../CacheReplicationMonitor.java|   9 +-
 .../server/blockmanagement/DatanodeManager.java |  10 +-
 .../blockmanagement/DecommissionManager.java|   4 +-
 .../blockmanagement/HeartbeatManager.java   |   8 +-
 .../hdfs/server/namenode/CacheManager.java  |   4 +-
 .../hdfs/server/namenode/FSNamesystem.java  |   5 +
 .../hadoop/hdfs/server/namenode/Namesystem.java |   4 +
 .../blockmanagement/BlockManagerTestUtil.java   |  15 ++-
 .../blockmanagement/TestBlockManager.java   |  27 ++--
 .../blockmanagement/TestDatanodeManager.java|   2 +
 .../blockmanagement/TestReplicationPolicy.java  |  22 +++-
 .../datanode/TestDataNodeVolumeFailure.java |   3 +-
 14 files changed, 200 insertions(+), 87 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ecc3300/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 5f55ece..4c96a3b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -94,6 +94,7 @@ import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 
 import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.getInternalBlockLength;
 
+import org.apache.hadoop.hdfs.util.RwLock;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.net.Node;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -112,7 +113,7 @@ import org.slf4j.LoggerFactory;
  * Keeps information related to the blocks stored in the Hadoop cluster.
  */
 @InterfaceAudience.Private
-public class BlockManager implements BlockStatsMXBean {
+public class BlockManager implements RwLock, BlockStatsMXBean {
 
   public static final Logger LOG = LoggerFactory.getLogger(BlockManager.class);
   public static final Logger blockLog = NameNode.blockStateChangeLog;
@@ -125,6 +126,7 @@ public class BlockManager implements BlockStatsMXBean {
 
   private final Namesystem namesystem;
 
+  private final BlockManagerLock lock;
   private final DatanodeManager datanodeManager;
   private final HeartbeatManager heartbeatManager;
   private final BlockTokenSecretManager blockTokenSecretManager;
@@ -302,6 +304,7 @@ public class BlockManager implements BlockStatsMXBean {
   public BlockManager(final Namesystem namesystem, final Configuration conf)
 throws IOException {
 this.namesystem = namesystem;
+this.lock = new BlockManagerLock(namesystem);
 datanodeManager = new DatanodeManager(this, namesystem, conf);
 heartbeatManager = datanodeManager.getHeartbeatManager();
 
@@ -518,7 +521,7 @@ public class BlockManager implements BlockStatsMXBean {
 
   /** Dump meta data to out. */
   public void metaSave(PrintWriter out) {
-assert namesystem.hasWriteLock();
+assert hasWriteLock();
 final List<DatanodeDescriptor> live = new ArrayList<DatanodeDescriptor>();
 final List<DatanodeDescriptor> dead = new ArrayList<DatanodeDescriptor>();
 datanodeManager.fetchDatanodes(live, dead, false);
@@ -550,7 +553,7 @@ public class BlockManager implements BlockStatsMXBean {
 // Dump all datanodes
 getDatanodeManager().datanodeDump(out);
   }
-  
+
   /**
* Dump the metadata for the given block in a human-readable
* form.
@@ -579,12 +582,12 @@ public class BlockManager implements BlockStatsMXBean {
   out.print(fileName + ": ");
 }
 // l: == live:, d: == decommissioned c: == corrupt e: == excess
-out.print(block + ((usableReplicas > 0)? "" : " MISSING") + 
+out.print(block + ((usableReplicas > 0)? "" : " MISSING") +
   " (replicas:" +
   " l: " + numReplicas.liveReplicas() +
   " d: " + numReplicas.decommissionedAndDecommissioning() +
   " c: " + numR

[13/50] hadoop git commit: HADOOP-9692. Improving log message when SequenceFile reader throws EOFException on zero-length file. (Zhe Zhang and Chu Tong via ozawa)

2015-10-27 Thread jing9
HADOOP-9692. Improving log message when SequenceFile reader throws EOFException 
on zero-length file. (Zhe Zhang and Chu Tong via ozawa)
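
A hedged usage sketch mirroring the new test: opening a zero-length file now fails with an EOFException that names the file.

    import java.io.EOFException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    Path empty = new Path("/tmp/zerolength.seq");
    fs.create(empty).close();  // zero-length file, not a valid SequenceFile
    try {
      new SequenceFile.Reader(conf, SequenceFile.Reader.file(empty));
    } catch (EOFException e) {
      System.err.println(e.getMessage());  // now identifies the offending file
    }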


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/513ec3de
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/513ec3de
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/513ec3de

Branch: refs/heads/HDFS-8966
Commit: 513ec3de194f705ca342de16829e1f85be227e7f
Parents: 039a1f9
Author: Tsuyoshi Ozawa 
Authored: Thu Oct 22 11:55:25 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Fri Oct 23 06:50:50 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../java/org/apache/hadoop/io/SequenceFile.java | 15 +++---
 .../org/apache/hadoop/io/TestSequenceFile.java  | 21 
 3 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/513ec3de/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 0d3daa2..a7a1d1b 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -902,6 +902,9 @@ Release 2.8.0 - UNRELEASED
 
 HADOOP-10406. TestIPC.testIpcWithReaderQueuing may fail. (Xiao Chen via 
wang)
 
+HADOOP-9692. Improving log message when SequenceFile reader throws
+EOFException on zero-length file. (Zhe Zhang and Chu Tong via ozawa)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/513ec3de/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
index e37e855..ed57eee 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
@@ -1912,17 +1912,26 @@ public class SequenceFile {
  */
 private void init(boolean tempReader) throws IOException {
   byte[] versionBlock = new byte[VERSION.length];
-  in.readFully(versionBlock);
+  String exceptionMsg = this + " not a SequenceFile";
+
+  // Try to read sequence file header.
+  try {
+in.readFully(versionBlock);
+  } catch (EOFException e) {
+throw new EOFException(exceptionMsg);
+  }
 
   if ((versionBlock[0] != VERSION[0]) ||
   (versionBlock[1] != VERSION[1]) ||
-  (versionBlock[2] != VERSION[2]))
+  (versionBlock[2] != VERSION[2])) {
 throw new IOException(this + " not a SequenceFile");
+  }
 
   // Set 'version'
   version = versionBlock[3];
-  if (version > VERSION[3])
+  if (version > VERSION[3]) {
 throw new VersionMismatchException(VERSION[3], version);
+  }
 
   if (version < BLOCK_COMPRESS_VERSION) {
 UTF8 className = new UTF8();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/513ec3de/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
index 7495c6e..99c97db 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
@@ -522,6 +522,27 @@ public class TestSequenceFile extends TestCase {
 assertTrue("InputStream for " + path + " should have been closed.", 
openedFile[0].isClosed());
   }
 
+  /**
+   * Test to makes sure zero length sequence file is handled properly while
+   * initializing.
+   */
+  public void testInitZeroLengthSequenceFile() throws IOException {
+Configuration conf = new Configuration();
+LocalFileSystem fs = FileSystem.getLocal(conf);
+
+// create an empty file (which is not a valid sequence file)
+Path path = new Path(System.getProperty("test.build.data", ".") +
+  "/zerolength.seq");
+fs.create(path).close();
+
+try {
+  new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
+  fail("IOException expected.");
+} catch (IOException expected) {
+  assertTrue(expected instanceof EOFException);

[28/50] hadoop git commit: YARN-4289. TestDistributedShell failing with bind exception. (Brahma Reddy Battula via stevel)

2015-10-27 Thread jing9
YARN-4289. TestDistributedShell failing with bind exception. (Brahma Reddy
Battula via stevel)
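
ServerSocketUtil.getPort probes for a bindable port with retries. A generic sketch of that idea (illustrative, not the Hadoop implementation):

    import java.io.IOException;
    import java.net.ServerSocket;

    static int getFreePort(int start, int retries) throws IOException {
      for (int port = start; port < start + retries; port++) {
        try (ServerSocket s = new ServerSocket(port)) {
          return port;  // bind succeeded, so the port is free right now
        } catch (IOException inUse) {
          // port is taken; try the next one
        }
      }
      throw new IOException("No free port found after " + retries + " attempts");
    }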


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/14f8dd04
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/14f8dd04
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/14f8dd04

Branch: refs/heads/HDFS-8966
Commit: 14f8dd04230ff42c5235eb240f84b8112c5302b5
Parents: 7781fe1
Author: Steve Loughran 
Authored: Sat Oct 24 12:48:03 2015 +0100
Committer: Steve Loughran 
Committed: Sat Oct 24 12:48:15 2015 +0100

--
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 .../yarn/applications/distributedshell/TestDistributedShell.java  | 3 +++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/14f8dd04/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 99cad92..f3dc16c 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -986,6 +986,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4294. [JDK8] Fix javadoc errors caused by wrong reference and illegal
 tag. (aajisaka)
 
+YARN-4289. TestDistributedShell failing with bind exception.
+(Brahma Reddy Battula via stevel)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14f8dd04/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
index 967d172..dcb6e72 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.apache.hadoop.util.JarFinder;
 import org.apache.hadoop.util.Shell;
 import org.apache.hadoop.yarn.api.records.ApplicationReport;
@@ -81,6 +82,8 @@ public class TestDistributedShell {
 conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true);
 conf.set(YarnConfiguration.RM_SCHEDULER, 
CapacityScheduler.class.getName());
 conf.setBoolean(YarnConfiguration.NODE_LABELS_ENABLED, true);
+conf.set("mapreduce.jobhistory.address",
+"0.0.0.0:" + ServerSocketUtil.getPort(10021, 10));
 
 if (yarnCluster == null) {
   yarnCluster =



[11/50] hadoop git commit: HADOOP-12484. Single File Rename Throws Incorrectly In Potential Race Condition Scenarios. Contributed by Gaurav Kanade.

2015-10-27 Thread jing9
HADOOP-12484. Single File Rename Throws Incorrectly In Potential Race Condition 
Scenarios. Contributed by Gaurav Kanade.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb282d5b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb282d5b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb282d5b

Branch: refs/heads/HDFS-8966
Commit: cb282d5b89fdece4719cc4ad37a6e27f13371534
Parents: 0fce5f9
Author: cnauroth 
Authored: Thu Oct 22 14:29:57 2015 -0700
Committer: cnauroth 
Committed: Thu Oct 22 14:29:57 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../hadoop/fs/azure/NativeAzureFileSystem.java  | 32 +---
 2 files changed, 30 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb282d5b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 74c62cb..87ba2ba 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1338,6 +1338,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12334. Change Mode Of Copy Operation of HBase WAL Archiving to 
bypass
 Azure Storage Throttling after retries. (Gaurav Kanade via cnauroth)
 
+HADOOP-12484. Single File Rename Throws Incorrectly In Potential Race
+Condition Scenarios. (Gaurav Kanade via cnauroth)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb282d5b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
index b963d5a1..9305b24 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
@@ -545,10 +545,32 @@ public class NativeAzureFileSystem extends FileSystem {
 
 // Get a lease on source to block write access.
 String srcName = fs.pathToKey(srcFile);
-SelfRenewingLease lease = fs.acquireLease(srcFile);
-
-// Delete the file. This will free the lease too.
-fs.getStoreInterface().delete(srcName, lease);
+SelfRenewingLease lease = null;
+try {
+  lease = fs.acquireLease(srcFile);
+  // Delete the file. This will free the lease too.
+  fs.getStoreInterface().delete(srcName, lease);
+} catch(AzureException e) {
+String errorCode = "";
+try {
+  StorageException e2 = (StorageException) e.getCause();
+  errorCode = e2.getErrorCode();
+} catch(Exception e3) {
+  // do nothing if cast fails
+}
+// If the rename already finished do nothing
+if(!errorCode.equals("BlobNotFound")){
+  throw e;
+}
+} finally {
+  try {
+if(lease != null){
+  lease.free();
+}
+  } catch(StorageException e) {
+LOG.warn("Unable to free lease because: " + e.getMessage());
+  }
+}
   } else if (!srcExists && dstExists) {
 
 // The rename already finished, so do nothing.
@@ -2442,4 +2464,4 @@ public class NativeAzureFileSystem extends FileSystem {
   }
 }
   }
-}
\ No newline at end of file
+}



[30/50] hadoop git commit: YARN-3738. Add support for recovery of reserved apps running under dynamic queues (subru via asuresh)

2015-10-27 Thread jing9
YARN-3738. Add support for recovery of reserved apps running under dynamic 
queues (subru via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ab8eb877
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ab8eb877
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ab8eb877

Branch: refs/heads/HDFS-8966
Commit: ab8eb8770c8b8bff41dacb1a399d75906abb1ac4
Parents: 446212a
Author: Arun Suresh 
Authored: Sat Oct 24 22:53:10 2015 -0700
Committer: Arun Suresh 
Committed: Sat Oct 24 22:53:10 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../scheduler/capacity/CapacityScheduler.java   |  27 ++--
 .../scheduler/fair/FairScheduler.java   |  11 +-
 .../TestWorkPreservingRMRestart.java| 156 ++-
 .../reservation/ReservationSystemTestUtil.java  |  12 ++
 5 files changed, 196 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab8eb877/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0641091..22e4294 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -537,6 +537,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4296. DistributedShell Log.info is not friendly.
 (Xiaowei Wang via stevel)
 
+YARN-3738. Add support for recovery of reserved apps running under dynamic
+queues (subru via asuresh)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab8eb877/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 6e356b5..1075ee0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1320,10 +1320,9 @@ public class CapacityScheduler extends
 case APP_ADDED:
 {
   AppAddedSchedulerEvent appAddedEvent = (AppAddedSchedulerEvent) event;
-  String queueName =
-  resolveReservationQueueName(appAddedEvent.getQueue(),
-  appAddedEvent.getApplicationId(),
-  appAddedEvent.getReservationID());
+  String queueName = resolveReservationQueueName(appAddedEvent.getQueue(),
+  appAddedEvent.getApplicationId(), appAddedEvent.getReservationID(),
+  appAddedEvent.getIsAppRecovering());
   if (queueName != null) {
 if (!appAddedEvent.getIsAppRecovering()) {
   addApplication(appAddedEvent.getApplicationId(), queueName,
@@ -1664,8 +1663,13 @@ public class CapacityScheduler extends
 }
   }
 
+  private String getDefaultReservationQueueName(String planQueueName) {
+return planQueueName + ReservationConstants.DEFAULT_QUEUE_SUFFIX;
+  }
+
   private synchronized String resolveReservationQueueName(String queueName,
-  ApplicationId applicationId, ReservationId reservationID) {
+  ApplicationId applicationId, ReservationId reservationID,
+  boolean isRecovering) {
 CSQueue queue = getQueue(queueName);
 // Check if the queue is a plan queue
 if ((queue == null) || !(queue instanceof PlanQueue)) {
@@ -1675,10 +1679,15 @@ public class CapacityScheduler extends
   String resQName = reservationID.toString();
   queue = getQueue(resQName);
   if (queue == null) {
+// reservation has terminated during failover
+if (isRecovering
+&& conf.getMoveOnExpiry(getQueue(queueName).getQueuePath())) {
+  // move to the default child queue of the plan
+  return getDefaultReservationQueueName(queueName);
+}
 String message =
-"Application "
-+ applicationId
-+ " submitted to a reservation which is not yet currently 
active: "
+"Application " + applicationId
++ " submitted to a reservation which i

[06/50] hadoop git commit: HADOOP-12436. GlobPattern regex library has performance issues with wildcard characters (Matthew Paduano via aw)

2015-10-27 Thread jing9
HADOOP-12436. GlobPattern regex library has performance issues with wildcard
characters (Matthew Paduano via aw)
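
A hedged illustration of the trade-off, assuming the re2j artifact is on the classpath; the pattern and input are contrived to provoke backtracking:

    String input = new String(new char[40]).replace('\0', 'a') + "!";

    // Backtracking engine: runtime can grow exponentially with the number
    // of 'a's on this non-matching input.
    boolean jdk = java.util.regex.Pattern.compile("(a|aa)*b")
        .matcher(input).matches();

    // RE2J mirrors the java.util.regex API but runs in time linear in the input.
    boolean re2 = com.google.re2j.Pattern.compile("(a|aa)*b")
        .matcher(input).matches();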


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4c0bae24
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4c0bae24
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4c0bae24

Branch: refs/heads/HDFS-8966
Commit: 4c0bae240bea9a475e8ee9a0b081bfce6d1cd1e5
Parents: 2798723
Author: Allen Wittenauer 
Authored: Thu Oct 22 08:19:56 2015 -0700
Committer: Allen Wittenauer 
Committed: Thu Oct 22 08:19:56 2015 -0700

--
 LICENSE.txt | 35 
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 hadoop-common-project/hadoop-common/pom.xml |  6 
 .../java/org/apache/hadoop/fs/GlobFilter.java   |  2 +-
 .../java/org/apache/hadoop/fs/GlobPattern.java  |  7 ++--
 .../metrics2/filter/AbstractPatternFilter.java  |  4 +--
 .../hadoop/metrics2/filter/GlobFilter.java  |  2 +-
 .../hadoop/metrics2/filter/RegexFilter.java |  2 +-
 .../apache/hadoop/security/SaslRpcClient.java   |  2 +-
 .../org/apache/hadoop/fs/TestGlobPattern.java   |  9 +++--
 hadoop-project/pom.xml  |  3 ++
 11 files changed, 64 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4c0bae24/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index f339a70..46da0f8 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -320,6 +320,41 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 
LIABILITY, OR TORT
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
+For com.google.re2j.* classes:
+-
+This is a work derived from Russ Cox's RE2 in Go, whose license
+http://golang.org/LICENSE is as follows:
+
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+   * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+   * Neither the name of Google Inc. nor the names of its contributors
+ may be used to endorse or promote products derived from this
+ software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 For 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h
 -
 Copyright 2002 Niels Provos 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4c0bae24/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index aebd4f3..ce2a2a7 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -243,6 +243,9 @@ Trunk (Unreleased)
 
 HADOOP-12249. pull argument parsing into a function (aw)
 
+HADOOP-12436. GlobPattern regex library has performance issues with
+wildcard characters (Matthew Paduano via aw)
+
   BUG FIXES
 
 HADOOP-11473. test-patch says "-1 overall" even when all checks are +1

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4c0bae24/hadoop-common-project/hadoop-common/pom.xml
--
diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index 21af670..4e47a3f 100644
--- a/hadoop-common-project/hadoop-common/pom.xml

[10/50] hadoop git commit: YARN-4243. Add retry on establishing Zookeeper connection in EmbeddedElectorService#serviceInit. Contributed by Xuan Gong.

2015-10-27 Thread jing9
YARN-4243. Add retry on establishing Zookeeper connection in
EmbeddedElectorService#serviceInit. Contributed by Xuan Gong.
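
A generic connect-with-retry sketch (illustrative; the patch itself routes non-fail-fast construction through the elector's existing reEstablishSession() path):

    import java.io.IOException;
    import org.apache.zookeeper.ZooKeeper;

    static ZooKeeper connectWithRetry(String hostPorts, int sessionTimeoutMs,
        int maxRetries) throws IOException, InterruptedException {
      IOException last = new IOException("could not connect to " + hostPorts);
      for (int i = 0; i < maxRetries; i++) {
        try {
          return new ZooKeeper(hostPorts, sessionTimeoutMs, event -> { });
        } catch (IOException e) {
          last = e;            // transient failure; back off and retry
          Thread.sleep(1000L);
        }
      }
      throw last;
    }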


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0fce5f9a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0fce5f9a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0fce5f9a

Branch: refs/heads/HDFS-8966
Commit: 0fce5f9a496925f0d53ea6c14318c9b513de9882
Parents: 960201b
Author: Junping Du 
Authored: Thu Oct 22 13:41:09 2015 -0700
Committer: Junping Du 
Committed: Thu Oct 22 13:41:09 2015 -0700

--
 .../apache/hadoop/ha/ActiveStandbyElector.java  | 53 ++--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../hadoop/yarn/conf/YarnConfiguration.java |  4 ++
 .../src/main/resources/yarn-default.xml |  7 +++
 .../resourcemanager/EmbeddedElectorService.java |  9 ++--
 5 files changed, 68 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0fce5f9a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
index fcbcfdf..cb2e081 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
@@ -208,8 +208,49 @@ public class ActiveStandbyElector implements StatCallback, 
StringCallback {
*/
   public ActiveStandbyElector(String zookeeperHostPorts,
  int zookeeperSessionTimeout, String parentZnodeName, List<ACL> acl,
-  List<ZKAuthInfo> authInfo,
-  ActiveStandbyElectorCallback app, int maxRetryNum) throws IOException,
+  List<ZKAuthInfo> authInfo, ActiveStandbyElectorCallback app,
+  int maxRetryNum) throws IOException, HadoopIllegalArgumentException,
+  KeeperException {
+this(zookeeperHostPorts, zookeeperSessionTimeout, parentZnodeName, acl,
+  authInfo, app, maxRetryNum, true);
+  }
+
+  /**
+   * Create a new ActiveStandbyElector object <p>
+   * The elector is created by providing to it the Zookeeper configuration, the
+   * parent znode under which to create the znode and a reference to the
+   * callback interface. <p>
+   * The parent znode name must be the same for all service instances and
+   * different across services. <p>
+   * After the leader has been lost, a new leader will be elected after the
+   * session timeout expires. Hence, the app must set this parameter based on
+   * its needs for failure response time. The session timeout must be greater
+   * than the Zookeeper disconnect timeout and is recommended to be 3X that
+   * value to enable Zookeeper to retry transient disconnections. Setting a very
+   * short session timeout may result in frequent transitions between active and
+   * standby states during issues like network outages/GC pauses.
+   * 
+   * @param zookeeperHostPorts
+   *  ZooKeeper hostPort for all ZooKeeper servers
+   * @param zookeeperSessionTimeout
+   *  ZooKeeper session timeout
+   * @param parentZnodeName
+   *  znode under which to create the lock
+   * @param acl
+   *  ZooKeeper ACL's
+   * @param authInfo a list of authentication credentials to add to the
+   * ZK connection
+   * @param app
+   *  reference to callback interface object
+   * @param failFast
+   *  if true, fail immediately when the ZK connection cannot be established; otherwise keep retrying.
+   * @throws IOException
+   * @throws HadoopIllegalArgumentException
+   */
+  public ActiveStandbyElector(String zookeeperHostPorts,
+  int zookeeperSessionTimeout, String parentZnodeName, List<ACL> acl,
+  List<ZKAuthInfo> authInfo, ActiveStandbyElectorCallback app,
+  int maxRetryNum, boolean failFast) throws IOException,
   HadoopIllegalArgumentException, KeeperException {
 if (app == null || acl == null || parentZnodeName == null
 || zookeeperHostPorts == null || zookeeperSessionTimeout <= 0) {
@@ -225,8 +266,12 @@ public class ActiveStandbyElector implements StatCallback, 
StringCallback {
 zkBreadCrumbPath = znodeWorkingDir + "/" + BREADCRUMB_FILENAME;
 this.maxRetryNum = maxRetryNum;
 
-// createConnection for future API calls
-createConnection();
+// establish the ZK Connection for future API calls
+if (failFast) {
+  createConnection();
+} else {
+  reEstablishSession();
+}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0fce5f9a/hadoop-yarn-project/CHANGES.txt
---

[20/50] hadoop git commit: HDFS-9264. Minor cleanup of operations on FsVolumeList#volumes. (Walter Su via lei)

2015-10-27 Thread jing9
HDFS-9264. Minor cleanup of operations on FsVolumeList#volumes.  (Walter Su via 
lei)
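
A small sketch of why CopyOnWriteArrayList simplifies this code: iteration runs against an immutable snapshot, so the manual copy-and-swap of a volume array is no longer needed.

    import java.util.concurrent.CopyOnWriteArrayList;

    CopyOnWriteArrayList<String> volumes = new CopyOnWriteArrayList<>();
    volumes.add("/data/1");
    volumes.add("/data/2");
    // Each traversal sees a consistent snapshot even if another thread
    // adds or removes volumes concurrently.
    for (String v : volumes) {
      System.out.println(v);
    }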


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/533a2be5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/533a2be5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/533a2be5

Branch: refs/heads/HDFS-8966
Commit: 533a2be5ac7c7f0473fdd24d6201582d08964e21
Parents: 600ad7b
Author: Lei Xu 
Authored: Fri Oct 23 13:52:59 2015 -0700
Committer: Lei Xu 
Committed: Fri Oct 23 13:52:59 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../datanode/fsdataset/impl/FsVolumeList.java   | 109 ++-
 2 files changed, 38 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/533a2be5/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index cf6558f..066ae02 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1560,6 +1560,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-7087. Ability to list /.reserved. (Xiao Chen via wang)
 
+HDFS-9264. Minor cleanup of operations on FsVolumeList#volumes.
+(Walter Su via lei)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/533a2be5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
index a73e129..608ee29 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
@@ -21,7 +21,6 @@ import java.io.File;
 import java.io.IOException;
 import java.nio.channels.ClosedChannelException;
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.HashSet;
@@ -30,9 +29,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.TreeMap;
 import java.util.Set;
-import java.util.concurrent.atomic.AtomicReference;
+import java.util.concurrent.CopyOnWriteArrayList;
 
-import com.google.common.collect.Lists;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
@@ -46,8 +44,8 @@ import org.apache.hadoop.util.DiskChecker.DiskErrorException;
 import org.apache.hadoop.util.Time;
 
 class FsVolumeList {
-  private final AtomicReference<FsVolumeImpl[]> volumes =
-  new AtomicReference<>(new FsVolumeImpl[0]);
+  private final CopyOnWriteArrayList<FsVolumeImpl> volumes =
+  new CopyOnWriteArrayList<>();
   // Tracks volume failures, sorted by volume path.
  private final Map<String, VolumeFailureInfo> volumeFailureInfos =
  Collections.synchronizedMap(new TreeMap<String, VolumeFailureInfo>());
@@ -71,7 +69,7 @@ class FsVolumeList {
* Return an immutable list view of all the volumes.
*/
   List<FsVolumeImpl> getVolumes() {
-return Collections.unmodifiableList(Arrays.asList(volumes.get()));
+return Collections.unmodifiableList(volumes);
   }
 
  private FsVolumeReference chooseVolume(List<FsVolumeImpl> list, long blockSize)
@@ -98,10 +96,8 @@ class FsVolumeList {
*/
   FsVolumeReference getNextVolume(StorageType storageType, long blockSize)
   throws IOException {
-// Get a snapshot of currently available volumes.
-final FsVolumeImpl[] curVolumes = volumes.get();
-final List<FsVolumeImpl> list = new ArrayList<>(curVolumes.length);
-for(FsVolumeImpl v : curVolumes) {
+final List<FsVolumeImpl> list = new ArrayList<>(volumes.size());
+for(FsVolumeImpl v : volumes) {
   if (v.getStorageType() == storageType) {
 list.add(v);
   }
@@ -129,7 +125,7 @@ class FsVolumeList {
 
   long getDfsUsed() throws IOException {
 long dfsUsed = 0L;
-for (FsVolumeImpl v : volumes.get()) {
+for (FsVolumeImpl v : volumes) {
   try(FsVolumeReference ref = v.obtainReference()) {
 dfsUsed += v.getDfsUsed();
   } catch (ClosedChannelException e) {
@@ -141,7 +137,7 @@ class FsVolumeList {
 
   long getBlockPoolUsed(String bpid) throws IOException {
 long dfsUsed = 0L;
-for (FsVolumeImpl v : volumes.get()) {
+for (FsVolumeImpl v : volumes) {
   try (FsVolumeReference ref = v.obtainReference()) {
 dfsUsed += v.get

[46/50] hadoop git commit: HDFS-9305. Delayed heartbeat processing causes storm of subsequent heartbeats. (Contributed by Arpit Agarwal)

2015-10-27 Thread jing9
HDFS-9305. Delayed heartbeat processing causes storm of subsequent heartbeats. 
(Contributed by Arpit Agarwal)
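
The fix in miniature; monotonicNow() below is a stand-in for org.apache.hadoop.util.Time.monotonicNow():

    static long monotonicNow() { return System.nanoTime() / 1_000_000L; }

    long heartbeatIntervalMs = 3000L;
    long nextHeartbeatTime = monotonicNow();

    // Before: after a processing delay of N intervals, nextHeartbeatTime lags
    // far behind 'now', so N heartbeats fire back-to-back to catch up.
    nextHeartbeatTime += heartbeatIntervalMs;

    // After: schedule from the current time, so one slow heartbeat costs one
    // delayed cycle instead of a storm.
    nextHeartbeatTime = monotonicNow() + heartbeatIntervalMs;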


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d8736eb9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d8736eb9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d8736eb9

Branch: refs/heads/HDFS-8966
Commit: d8736eb9ca351b82854601ea3b1fbc3c9fab44e4
Parents: e8aefdf
Author: Arpit Agarwal 
Authored: Mon Oct 26 15:45:02 2015 -0700
Committer: Arpit Agarwal 
Committed: Mon Oct 26 15:54:14 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../hdfs/server/datanode/BPServiceActor.java|  4 ++--
 .../datanode/TestBpServiceActorScheduler.java   | 22 
 3 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8736eb9/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 478d48b..e26abcc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2218,6 +2218,9 @@ Release 2.7.2 - UNRELEASED
 HDFS-9290. DFSClient#callAppend() is not backward compatible for slightly
 older NameNodes. (Tony Wu via kihwal)
 
+HDFS-9305. Delayed heartbeat processing causes storm of subsequent
+heartbeats. (Arpit Agarwal)
+
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8736eb9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
index 85ea6ae..575e7cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
@@ -538,6 +538,7 @@ class BPServiceActor implements Runnable {
   
   HeartbeatResponse sendHeartBeat(boolean requestBlockReportLease)
   throws IOException {
+scheduler.scheduleNextHeartbeat();
 StorageReport[] reports =
 dn.getFSDataset().getStorageReports(bpos.getBlockPoolId());
 if (LOG.isDebugEnabled()) {
@@ -651,7 +652,6 @@ class BPServiceActor implements Runnable {
   //
   boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
   scheduler.isBlockReportDue(startTime);
-  scheduler.scheduleNextHeartbeat();
   if (!dn.areHeartbeatsDisabledForTests()) {
 resp = sendHeartBeat(requestBlockReportLease);
 assert resp != null;
@@ -1064,7 +1064,7 @@ class BPServiceActor implements Runnable {
 
 long scheduleNextHeartbeat() {
   // Numerical overflow is possible here and is okay.
-  nextHeartbeatTime += heartbeatIntervalMs;
+  nextHeartbeatTime = monotonicNow() + heartbeatIntervalMs;
   return nextHeartbeatTime;
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8736eb9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBpServiceActorScheduler.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBpServiceActorScheduler.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBpServiceActorScheduler.java
index b9b6512..efdd87c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBpServiceActorScheduler.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBpServiceActorScheduler.java
@@ -144,6 +144,28 @@ public class TestBpServiceActorScheduler {
 }
   }
 
+
+  /**
+   * Regression test for HDFS-9305.
+   * Delayed processing of a heartbeat can cause a subsequent heartbeat
+   * storm.
+   */
+  @Test
+  public void testScheduleDelayedHeartbeat() {
+for (final long now : getTimestamps()) {
+  Scheduler scheduler = makeMockScheduler(now);
+  scheduler.scheduleNextHeartbeat();
+  assertFalse(scheduler.isHeartbeatDue(now));
+
+  // Simulate a delayed heartbeat e.g. due to slow processing by NN.
+  scheduler.nextHeartbeatTime = now - (HEARTBEAT_INTERVAL_MS * 10);
+  scheduler.scheduleNextHeartbeat();
+
+  // Ensure that the next heartbeat i

[19/50] hadoop git commit: HDFS-9184. Logging HDFS operation's caller context into audit logs. Contributed by Mingliang Liu.

2015-10-27 Thread jing9
HDFS-9184. Logging HDFS operation's caller context into audit logs. Contributed 
by Mingliang Liu.
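
A hedged usage sketch of the API this patch introduces; the context string is a made-up example identifier:

    import org.apache.hadoop.ipc.CallerContext;

    CallerContext.setCurrent(
        new CallerContext.Builder("hive_query_abc123").build());
    // RPCs issued from this thread now carry the context; with
    // hadoop.caller.context.enabled=true the NameNode audit log records
    // callerContext=hive_query_abc123 on each logged operation.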


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/600ad7bf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/600ad7bf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/600ad7bf

Branch: refs/heads/HDFS-8966
Commit: 600ad7bf4104bcaeec00a4089d59bb1fdf423299
Parents: eb6379c
Author: Jitendra Pandey 
Authored: Fri Oct 23 12:15:01 2015 -0700
Committer: Jitendra Pandey 
Committed: Fri Oct 23 12:15:01 2015 -0700

--
 .../fs/CommonConfigurationKeysPublic.java   |  11 ++
 .../org/apache/hadoop/ipc/CallerContext.java| 147 
 .../main/java/org/apache/hadoop/ipc/Server.java |  22 ++-
 .../java/org/apache/hadoop/util/ProtoUtil.java  |  13 ++
 .../src/main/proto/RpcHeader.proto  |   9 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hdfs/server/namenode/FSNamesystem.java  |  42 -
 .../hdfs/server/namenode/HdfsAuditLogger.java   |   7 +-
 .../server/namenode/TestAuditLogAtDebug.java|   2 +-
 .../hdfs/server/namenode/TestAuditLogger.java   | 176 +++
 10 files changed, 421 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/600ad7bf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index 9fff33e..f75edd5 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -183,6 +183,17 @@ public class CommonConfigurationKeysPublic {
   /** Default value for TFILE_FS_OUTPUT_BUFFER_SIZE_KEY */
   public static final int TFILE_FS_OUTPUT_BUFFER_SIZE_DEFAULT = 256*1024;
 
+  public static final String  HADOOP_CALLER_CONTEXT_ENABLED_KEY =
+  "hadoop.caller.context.enabled";
+  public static final boolean HADOOP_CALLER_CONTEXT_ENABLED_DEFAULT = false;
+  public static final String  HADOOP_CALLER_CONTEXT_MAX_SIZE_KEY =
+  "hadoop.caller.context.max.size";
+  public static final int HADOOP_CALLER_CONTEXT_MAX_SIZE_DEFAULT = 128;
+  public static final String  HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_KEY =
+  "hadoop.caller.context.signature.max.size";
+  public static final int HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_DEFAULT 
=
+  40;
+
   /** See core-default.xml */
   public static final String  IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY =
 "ipc.client.connection.maxidletime";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/600ad7bf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
new file mode 100644
index 000..8be7e35
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ipc;
+
+import org.apache.commons.lang.builder.EqualsBuilder;
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.Arrays;
+
+/**
+ * A class defining the caller context for auditing coarse granularity
+ * operations.
+ *
+ * This class is immutable.
+
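
The new file is truncated here, but its core is an immutable value object: a short, client-supplied context string (a job or query id, say) plus an optional binary signature, which the IPC layer carries in the RPC header and the NameNode appends to each audit-log line. A sketch of that shape follows; the class and field names are illustrative, not the verbatim CallerContext API:

import java.nio.charset.StandardCharsets;

// Illustrative sketch of an immutable caller-context value object.
final class AuditContextSketch {
  private final String context;    // e.g. a query id supplied by the caller
  private final byte[] signature;  // optional opaque signature bytes

  AuditContextSketch(String context, byte[] signature) {
    this.context = context;
    // Defensive copies keep the instance immutable even if the caller
    // later mutates the array it passed in.
    this.signature = signature == null ? null : signature.clone();
  }

  String getContext() {
    return context;
  }

  byte[] getSignature() {
    return signature == null ? null : signature.clone();
  }

  @Override
  public String toString() {
    return signature == null
        ? context
        : context + ":" + new String(signature, StandardCharsets.UTF_8);
  }
}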

[48/50] hadoop git commit: YARN-4169. Fix racing condition of TestNodeStatusUpdaterForLabels. (Naganarasimha G R via wangda)

2015-10-27 Thread jing9
YARN-4169. Fix racing condition of TestNodeStatusUpdaterForLabels. 
(Naganarasimha G R via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6f606214
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6f606214
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6f606214

Branch: refs/heads/HDFS-8966
Commit: 6f606214e734d9600bc0f25a63142714f0fea633
Parents: 399ad00
Author: Wangda Tan 
Authored: Mon Oct 26 16:36:34 2015 -0700
Committer: Wangda Tan 
Committed: Mon Oct 26 16:36:34 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../yarn/nodelabels/NodeLabelTestBase.java  | 116 ++-
 .../nodelabels/TestCommonNodeLabelsManager.java |   6 +-
 .../nodemanager/NodeStatusUpdaterImpl.java  |   5 +
 .../TestNodeStatusUpdaterForLabels.java |  89 --
 .../nodelabels/TestRMNodeLabelsManager.java |  34 +++---
 6 files changed, 143 insertions(+), 110 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6f606214/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 8a2cfc8..b51d89e 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1014,6 +1014,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4284. condition for AM blacklisting is too narrow (Sangjin Lee via
 jlowe)
 
+YARN-4169. Fix racing condition of TestNodeStatusUpdaterForLabels. 
+(Naganarasimha G R via wangda)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6f606214/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java
index f834d54..1f64e50 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java
@@ -22,8 +22,8 @@ import java.util.Collection;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
-import java.util.Set;
 import java.util.Map.Entry;
+import java.util.Set;
 
 import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.NodeLabel;
@@ -33,47 +33,48 @@ import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Sets;
 
 public class NodeLabelTestBase {
-  public static void assertMapEquals(Map<NodeId, Set<String>> m1,
-      ImmutableMap<NodeId, Set<String>> m2) {
-    Assert.assertEquals(m1.size(), m2.size());
-    for (NodeId k : m1.keySet()) {
-      Assert.assertTrue(m2.containsKey(k));
-      assertCollectionEquals(m1.get(k), m2.get(k));
+  public static void assertMapEquals(Map<NodeId, Set<String>> expected,
+      ImmutableMap<NodeId, Set<String>> actual) {
+    Assert.assertEquals(expected.size(), actual.size());
+    for (NodeId k : expected.keySet()) {
+      Assert.assertTrue(actual.containsKey(k));
+      assertCollectionEquals(expected.get(k), actual.get(k));
     }
   }
 
-  public static void assertLabelInfoMapEquals(Map<NodeId, Set<NodeLabel>> m1,
-      ImmutableMap<NodeId, Set<NodeLabel>> m2) {
-    Assert.assertEquals(m1.size(), m2.size());
-    for (NodeId k : m1.keySet()) {
-      Assert.assertTrue(m2.containsKey(k));
-      assertNLCollectionEquals(m1.get(k), m2.get(k));
+  public static void assertLabelInfoMapEquals(
+      Map<NodeId, Set<NodeLabel>> expected,
+      ImmutableMap<NodeId, Set<NodeLabel>> actual) {
+    Assert.assertEquals(expected.size(), actual.size());
+    for (NodeId k : expected.keySet()) {
+      Assert.assertTrue(actual.containsKey(k));
+      assertNLCollectionEquals(expected.get(k), actual.get(k));
     }
   }
 
-  public static void assertLabelsToNodesEquals(Map<String, Set<NodeId>> m1,
-      ImmutableMap<String, Set<NodeId>> m2) {
-    Assert.assertEquals(m1.size(), m2.size());
-    for (String k : m1.keySet()) {
-      Assert.assertTrue(m2.containsKey(k));
-      Set<NodeId> s1 = new HashSet<NodeId>(m1.get(k));
-      Set<NodeId> s2 = new HashSet<NodeId>(m2.get(k));
-      Assert.assertEquals(s1, s2);
-      Assert.assertTrue(s1.containsAll(s2));
+  public static void assertLabelsToNodesEquals(
+      Map<String, Set<NodeId>> expected,
+      ImmutableMap<String, Set<NodeId>> actual) {
+    Assert.assertEquals(expected.size(), actual.size());
+    for (String k : expected.keySet()) {
+      Assert.assertTrue(actual.containsKey(k));
+      Set<NodeId> expectedS1 = new HashSet<>(expected.get(k));
+      Set<NodeId> actualS2 = new HashSet<>(actual.get(k));
+  Assert.assertEquals(expecte
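
(The hunk continues the same expected/actual renaming, which makes JUnit failure messages read in the right direction.) The per-type helpers above could also collapse into one generic comparison; a sketch, not part of the patch:

import java.util.Map;
import java.util.Set;
import org.junit.Assert;

// Generic map-of-sets equality check in expected/actual order, standing in
// for one overload per value type.
final class MapAssertSketch {
  static <K, V> void assertMapOfSetsEquals(Map<K, Set<V>> expected,
      Map<K, Set<V>> actual) {
    Assert.assertEquals(expected.keySet(), actual.keySet());
    for (K key : expected.keySet()) {
      Assert.assertEquals("mismatch for key " + key,
          expected.get(key), actual.get(key));
    }
  }
}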

[40/50] hadoop git commit: HDFS-7284. Add more debug info to BlockInfoUnderConstruction#setGenerationStampAndVerifyReplicas. (Wei-Chiu Chuang via Yongjun Zhang)

2015-10-27 Thread jing9
HDFS-7284. Add more debug info to 
BlockInfoUnderConstruction#setGenerationStampAndVerifyReplicas. (Wei-Chiu 
Chuang via Yongjun Zhang)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5e718de5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5e718de5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5e718de5

Branch: refs/heads/HDFS-8966
Commit: 5e718de522328d1112ad38063596c204aa43f539
Parents: 3cc7377
Author: Yongjun Zhang 
Authored: Mon Oct 26 13:16:11 2015 -0700
Committer: Yongjun Zhang 
Committed: Mon Oct 26 13:25:31 2015 -0700

--
 .../java/org/apache/hadoop/hdfs/protocol/Block.java| 13 -
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt|  4 
 .../hadoop/hdfs/server/blockmanagement/BlockInfo.java  |  4 ++--
 3 files changed, 18 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e718de5/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
index 710897e..2b139b2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
@@ -153,10 +153,21 @@ public class Block implements Writable, Comparable<Block> {
   }
 
   /**
+   * A helper method to output the string representation of the Block portion 
of
+   * a derived class' instance.
+   *
+   * @param b the target object
+   * @return the string representation of the block
+   */
+  public static String toString(final Block b) {
+return b.getBlockName() + "_" + b.getGenerationStamp();
+  }
+
+  /**
*/
   @Override
   public String toString() {
-return getBlockName() + "_" + getGenerationStamp();
+return toString(this);
   }
 
   public void appendStringTo(StringBuilder sb) {
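
The static overload exists so callers can always obtain the bare blk_<id>_<genstamp> form, even from subclasses whose instance toString() is overridden to add more state. A sketch of such a subclass; it is hypothetical, not part of the patch:

import org.apache.hadoop.hdfs.protocol.Block;

// Hypothetical subclass: its own toString() appends extra state, while a
// log site that wants only the block identity calls Block.toString(b).
class AnnotatedBlock extends Block {
  private final String state;

  AnnotatedBlock(long blockId, long numBytes, long genStamp, String state) {
    super(blockId, numBytes, genStamp);
    this.state = state;
  }

  @Override
  public String toString() {
    return Block.toString(this) + "{state=" + state + "}";
  }
}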

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e718de5/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7ce5a09..c748d29 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1569,6 +1569,10 @@ Release 2.8.0 - UNRELEASED
 HDFS-4015. Safemode should count and report orphaned blocks.
 (Anu Engineer via Arpit Agarwal)
 
+HDFS-7284. Add more debug info to
+BlockInfoUnderConstruction#setGenerationStampAndVerifyReplicas.
+(Wei-Chiu Chuang via Yongjun Zhang)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e718de5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index 92a1135..e15b5ee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -393,8 +393,8 @@ public abstract class BlockInfo extends Block
     List<ReplicaUnderConstruction> staleReplicas = uc.getStaleReplicas(genStamp);
 for (ReplicaUnderConstruction r : staleReplicas) {
   r.getExpectedStorageLocation().removeBlock(this);
-  NameNode.blockStateChangeLog.debug("BLOCK* Removing stale replica "
-  + "from location: {}", r.getExpectedStorageLocation());
+  NameNode.blockStateChangeLog.debug("BLOCK* Removing stale replica {}"
+  + " of {}", r, Block.toString(r));
 }
   }
 



[42/50] hadoop git commit: HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. Contributed by Steve Loughran.

2015-10-27 Thread jing9
HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. Contributed 
by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a01a209f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a01a209f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a01a209f

Branch: refs/heads/HDFS-8966
Commit: a01a209fbed33b2ecaf9e736631e64abefae01aa
Parents: 2f1eb2b
Author: Jing Zhao 
Authored: Mon Oct 26 14:03:15 2015 -0700
Committer: Jing Zhao 
Committed: Mon Oct 26 14:03:15 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +
 .../apache/hadoop/test/GenericTestUtils.java| 30 ++--
 .../hadoop/test/TestGenericTestUtils.java   | 78 
 3 files changed, 106 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a01a209f/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 4764488..1b4b27e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -907,6 +907,9 @@ Release 2.8.0 - UNRELEASED
 
 HADOOP-7266. Deprecate metrics v1. (Akira AJISAKA via ozawa)
 
+HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust.
+(Steve Loughran via jing9)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a01a209f/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index 5bc1252..3f0b89d 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -142,12 +142,32 @@ public abstract class GenericTestUtils {
 Joiner.on(",").join(expectedSet),
 Joiner.on(",").join(found));
   }
-  
+
+  protected static String E_NULL_THROWABLE = "Null Throwable";
+  protected static String E_NULL_THROWABLE_STRING =
+  "Null Throwable.toString() value";
+  protected static String E_UNEXPECTED_EXCEPTION =
+  "but got unexpected exception";
+
+  /**
+   * Assert that an exception's toString() value
+   * contained the expected text.
+   * @param string expected string
+   * @param t thrown exception
+   * @throws AssertionError if the expected string is not found
+   */
   public static void assertExceptionContains(String string, Throwable t) {
-String msg = t.getMessage();
-Assert.assertTrue(
-"Expected to find '" + string + "' but got unexpected exception:"
-+ StringUtils.stringifyException(t), msg.contains(string));
+Assert.assertNotNull(E_NULL_THROWABLE, t);
+String msg = t.toString();
+if (msg == null) {
+  throw new AssertionError(E_NULL_THROWABLE_STRING, t);
+}
+if (!msg.contains(string)) {
+  throw new AssertionError("Expected to find '" + string + "' "
+  + E_UNEXPECTED_EXCEPTION + ":"
+  + StringUtils.stringifyException(t),
+  t);
+}
   }  
 
  public static void waitFor(Supplier<Boolean> check,
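
A minimal usage sketch of the hardened assertion: the substring is now matched against toString() rather than getMessage(), so exceptions with a null message no longer NPE inside the test helper itself:

import java.io.IOException;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Test;

public class AssertExceptionContainsExample {
  @Test
  public void testFailureMessageIsChecked() {
    try {
      throw new IOException("destination directory is not empty");
    } catch (IOException e) {
      // Passes because e.toString() contains the text; on a mismatch the
      // raised AssertionError carries e as its cause, so the original
      // stack trace is preserved in the test report.
      GenericTestUtils.assertExceptionContains("is not empty", e);
    }
  }
}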

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a01a209f/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
new file mode 100644
index 000..8a7b5f6
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS

[34/50] hadoop git commit: YARN-4223. Fixed findbugs warnings in hadoop-yarn-server-nodemanager project. (varun saxena via rohithsharmaks)

2015-10-27 Thread jing9
YARN-4223. Fixed findbugs warnings in hadoop-yarn-server-nodemanager project. 
(varun saxena via rohithsharmaks)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/092883b3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/092883b3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/092883b3

Branch: refs/heads/HDFS-8966
Commit: 092883b34a0fe62568566bbf9108882f47003f96
Parents: ce60b4f
Author: Rohith Sharma K S 
Authored: Mon Oct 26 15:23:51 2015 +0530
Committer: Rohith Sharma K S 
Committed: Mon Oct 26 15:23:51 2015 +0530

--
 hadoop-yarn-project/CHANGES.txt | 3 +++
 .../hadoop-yarn/dev-support/findbugs-exclude.xml| 5 +
 2 files changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/092883b3/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index d06d026..ba25adf 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1002,6 +1002,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3528. Tests with 12345 as hard-coded port break jenkins.
 (Brahma Reddy Battula via ozawa)
 
+YARN-4223. Fixed findbugs warnings in hadoop-yarn-server-nodemanager 
project
+(varun saxena via rohithsharmaks)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/092883b3/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index 114851f..133e95d 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -109,6 +109,11 @@
 
 
   
+  
+
+
+
+  
 
 
   



[02/50] hadoop git commit: HDFS-7087. Ability to list /.reserved. Contributed by Xiao Chen.

2015-10-27 Thread jing9
HDFS-7087. Ability to list /.reserved. Contributed by Xiao Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3dadf369
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3dadf369
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3dadf369

Branch: refs/heads/HDFS-8966
Commit: 3dadf369d550c2ae393b751cb5a184dbfe2814df
Parents: 1b525a9
Author: Andrew Wang 
Authored: Wed Oct 21 16:58:47 2015 -0700
Committer: Andrew Wang 
Committed: Wed Oct 21 16:58:47 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   2 +
 .../hdfs/server/namenode/FSDirAttrOp.java   |   7 +
 .../hdfs/server/namenode/FSDirDeleteOp.java |   5 +
 .../hdfs/server/namenode/FSDirRenameOp.java |   6 +
 .../server/namenode/FSDirStatAndListingOp.java  |  16 ++
 .../hdfs/server/namenode/FSDirSymlinkOp.java|   3 +-
 .../hdfs/server/namenode/FSDirWriteFileOp.java  |   7 +
 .../hdfs/server/namenode/FSDirectory.java   |  63 +-
 .../hdfs/server/namenode/FSNamesystem.java  |   1 +
 .../org/apache/hadoop/fs/TestGlobPaths.java |   4 -
 .../org/apache/hadoop/hdfs/TestDFSShell.java| 194 +++
 .../hadoop/hdfs/TestReservedRawPaths.java   |  13 +-
 12 files changed, 307 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3dadf369/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 949dc80..316ee3b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1555,6 +1555,8 @@ Release 2.8.0 - UNRELEASED
 TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack.
 (Masatake Iwasaki via wang)
 
+HDFS-7087. Ability to list /.reserved. (Xiao Chen via wang)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3dadf369/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index 9099970..2cba2cb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.server.namenode;
 
 import org.apache.hadoop.HadoopIllegalArgumentException;
+import org.apache.hadoop.fs.InvalidPathException;
 import org.apache.hadoop.fs.PathIsNotDirectoryException;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.fs.UnresolvedLinkException;
@@ -50,6 +51,9 @@ public class FSDirAttrOp {
   FSDirectory fsd, final String srcArg, FsPermission permission)
   throws IOException {
 String src = srcArg;
+if (FSDirectory.isExactReservedName(src)) {
+  throw new InvalidPathException(src);
+}
 FSPermissionChecker pc = fsd.getPermissionChecker();
 byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
 INodesInPath iip;
@@ -69,6 +73,9 @@ public class FSDirAttrOp {
   static HdfsFileStatus setOwner(
   FSDirectory fsd, String src, String username, String group)
   throws IOException {
+if (FSDirectory.isExactReservedName(src)) {
+  throw new InvalidPathException(src);
+}
 FSPermissionChecker pc = fsd.getPermissionChecker();
 byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
 INodesInPath iip;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3dadf369/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
index 51d643a..006fbc2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import org.apache.hadoop.fs.InvalidPathException;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathIsNotEm
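
The guard repeated across these operations is fail-fast: read paths may resolve /.reserved, while any mutation of the exact reserved name is rejected up front with InvalidPathException. The client-visible effect, as a hedged sketch:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch: after HDFS-7087, /.reserved is listable but still immutable.
public class ReservedPathDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    for (FileStatus st : fs.listStatus(new Path("/.reserved"))) {
      System.out.println(st.getPath()); // the reserved children, e.g. raw
    }
    try {
      fs.setPermission(new Path("/.reserved"), new FsPermission((short) 0755));
    } catch (Exception e) {
      System.out.println("mutation rejected: " + e); // InvalidPathException
    }
  }
}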

[26/50] hadoop git commit: HDFS-4015. Safemode should count and report orphaned blocks. (Contributed by Anu Engineer)

2015-10-27 Thread jing9
HDFS-4015. Safemode should count and report orphaned blocks. (Contributed by 
Anu Engineer)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/86c92227
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/86c92227
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/86c92227

Branch: refs/heads/HDFS-8966
Commit: 86c92227fc56b6e06d879d250728e8dc8cbe98fe
Parents: 5679e46
Author: Arpit Agarwal 
Authored: Fri Oct 23 17:27:45 2015 -0700
Committer: Arpit Agarwal 
Committed: Fri Oct 23 18:07:17 2015 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  10 +
 .../hadoop/hdfs/DistributedFileSystem.java  |  11 ++
 .../hadoop/hdfs/protocol/ClientProtocol.java|   5 +-
 .../hadoop/hdfs/protocol/HdfsConstants.java |   2 +-
 .../hadoop/hdfs/protocolPB/PBHelperClient.java  |  14 +-
 .../src/main/proto/ClientNamenodeProtocol.proto |   2 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/blockmanagement/BlockManager.java|  61 +-
 .../blockmanagement/HeartbeatManager.java   |   1 +
 .../hdfs/server/namenode/FSNamesystem.java  |  73 +++-
 .../hadoop/hdfs/server/namenode/NameNode.java   |   7 +-
 .../server/namenode/NameNodeStatusMXBean.java   |   6 +
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java  |  18 +-
 .../src/site/markdown/HDFSCommands.md   |   6 +-
 .../TestNameNodeMetadataConsistency.java| 186 +++
 .../src/test/resources/testHDFSConf.xml |  20 +-
 16 files changed, 408 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/86c92227/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 08f25f5..ca0538e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -2006,6 +2006,16 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
   }
 
   /**
+   * Returns number of bytes that reside in Blocks with future generation
+   * stamps.
+   * @return Bytes in Blocks with future generation stamps.
+   * @throws IOException
+   */
+  public long getBytesInFutureBlocks() throws IOException {
+return callGetStats()[ClientProtocol.GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX];
+  }
+
+  /**
* @return a list in which each entry describes a corrupt file/block
* @throws IOException
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/86c92227/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index 39cc42b..f4ca265 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -537,6 +537,17 @@ public class DistributedFileSystem extends FileSystem {
   }
 
   /**
+   * Returns number of bytes within blocks with future generation stamp. These
+   * are bytes that will be potentially deleted if we forceExit from safe mode.
+   *
+   * @return number of bytes.
+   */
+  public long getBytesWithFutureGenerationStamps() throws IOException {
+statistics.incrementReadOps(1);
+return dfs.getBytesInFutureBlocks();
+  }
+
+  /**
* Deprecated. Prefer {@link FileSystem#getAllStoragePolicies()}
* @throws IOException
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/86c92227/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
index 2a40047..6ebb01d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
@@ -712,10 +712,12 @@ public interface ClientProtocol {
   int GET_STATS_
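
The plumbing runs end to end: a new slot in the getStats() array, the DFSClient accessor above, and the DistributedFileSystem wrapper. A sketch of an operator-side check built on the new API (the class here is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch: report bytes that would be at risk if safe mode were force-exited.
public class OrphanedBlockCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      long atRisk = dfs.getBytesWithFutureGenerationStamps();
      if (atRisk > 0) {
        System.out.println(atRisk + " bytes sit in blocks with future"
            + " generation stamps and could be lost by"
            + " 'dfsadmin -safemode forceExit'");
      }
    }
  }
}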

[31/50] hadoop git commit: YARN-3724. Use POSIX nftw(3) instead of fts(3) (Alan Burlison via aw)

2015-10-27 Thread jing9
YARN-3724. Use POSIX nftw(3) instead of fts(3) (Alan Burlison via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1aa735c1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1aa735c1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1aa735c1

Branch: refs/heads/HDFS-8966
Commit: 1aa735c188a308ca608694546c595e3c51f38612
Parents: ab8eb87
Author: Allen Wittenauer 
Authored: Sun Oct 25 21:43:23 2015 -0700
Committer: Allen Wittenauer 
Committed: Sun Oct 25 21:43:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   2 +
 .../impl/container-executor.c   | 229 ++-
 2 files changed, 117 insertions(+), 114 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1aa735c1/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 22e4294..0169163 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -995,6 +995,8 @@ Release 2.8.0 - UNRELEASED
 YARN-4289. TestDistributedShell failing with bind exception.
 (Brahma Reddy Battula via stevel)
 
+YARN-3724. Use POSIX nftw(3) instead of fts(3) (Alan Burlison via aw)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1aa735c1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index f721697..e81f0c7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -23,7 +23,11 @@
 #include 
 #include 
 #include 
-#include <fts.h>
+#ifdef __sun
+#include <sys/param.h>
+#define NAME_MAX MAXNAMELEN
+#endif
+#include <ftw.h>
 #include 
 #include 
 #include 
@@ -1574,134 +1578,131 @@ static int rmdir_as_nm(const char* path) {
 }
 
 /**
+ * nftw callback and associated TLS.
+ */
+
+typedef struct {
+  int errnum;
+  int chmods;
+} nftw_state_t;
+
+static __thread nftw_state_t nftws;
+
+static int nftw_cb(const char *path,
+   const struct stat *stat,
+   int type,
+   struct FTW *ftw) {
+
+  /* Leave the top-level directory to be deleted by the caller. */
+  if (ftw->level == 0) {
+return 0;
+  }
+
+  switch (type) {
+/* Directory, post-order. Should be empty so remove the directory. */
+case FTW_DP:
+  if (rmdir(path) != 0) {
+/* Record the first errno. */
+if (errno != ENOENT && nftws.errnum == 0) {
+  nftws.errnum = errno;
+}
+fprintf(LOGFILE, "Couldn't delete directory %s - %s\n", path, 
strerror(errno));
+/* Read-only filesystem, no point in continuing. */
+if (errno == EROFS) {
+  return -1;
+}
+  }
+  break;
+/* File or symlink. Remove. */
+case FTW_F:
+case FTW_SL:
+  if (unlink(path) != 0) {
+/* Record the first errno. */
+if (errno != ENOENT && nftws.errnum == 0) {
+  nftws.errnum = errno;
+}
+fprintf(LOGFILE, "Couldn't delete file %s - %s\n", path, 
strerror(errno));
+/* Read-only filesystem, no point in continuing. */
+if (errno == EROFS) {
+  return -1;
+}
+  }
+  break;
+/* Unreadable file or directory. Attempt to chmod. */
+case FTW_DNR:
+  if (chmod(path, 0700) == 0) {
+nftws.chmods++;
+  } else {
+/* Record the first errno. */
+if (errno != ENOENT && nftws.errnum == 0) {
+  nftws.errnum = errno;
+}
+fprintf(LOGFILE, "Couldn't chmod %s - %s.\n", path, strerror(errno));
+  }
+  break;
+/* Should never happen. */
+default:
+  fprintf(LOGFILE, "Internal error deleting %s\n", path);
+  return -1;
+  }
+  return 0;
+}
+
+/**
  * Recursively delete the given path.
  * full_path : the path to delete
  * needs_tt_user: the top level directory must be deleted by the tt user.
  */
 static int delete_path(const char *full_path, 
int needs_tt_user) {
-  int exit_code = 0;
 
+  /* Return an error if the path is null. */
   if (full_path == NULL) {
 f
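
The message is cut off, but the shape of the rewrite is visible in the callback: a post-order walk that unlinks files and symlinks, removes each directory once it is empty, chmods unreadable entries, and records the first errno. For readers more at home in Java, Files.walkFileTree is the direct analogue of nftw(3); a sketch of the same post-order delete, minus the chmod and errno bookkeeping (and unlike delete_path it also removes the root):

import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Sketch: post-order recursive delete, the Java counterpart of nftw_cb above.
public class RecursiveDeleteSketch {
  public static void deleteTree(Path root) throws IOException {
    Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
          throws IOException {
        Files.delete(file); // regular files and symlinks
        return FileVisitResult.CONTINUE;
      }

      @Override
      public FileVisitResult postVisitDirectory(Path dir, IOException e)
          throws IOException {
        if (e != null) {
          throw e;
        }
        Files.delete(dir); // post-order: the directory is empty by now
        return FileVisitResult.CONTINUE;
      }
    });
  }
}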

[47/50] hadoop git commit: HDFS-9292. Make TestFileConcorruption independent to underlying FsDataset Implementation. (lei)

2015-10-27 Thread jing9
HDFS-9292. Make TestFileConcorruption independent to underlying FsDataset 
Implementation. (lei)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/399ad009
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/399ad009
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/399ad009

Branch: refs/heads/HDFS-8966
Commit: 399ad009158cbc6aca179396d390fe770801420f
Parents: d8736eb
Author: Lei Xu 
Authored: Mon Oct 26 16:08:06 2015 -0700
Committer: Lei Xu 
Committed: Mon Oct 26 16:09:22 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../apache/hadoop/hdfs/TestFileCorruption.java  | 65 
 2 files changed, 30 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/399ad009/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e26abcc..f6904c3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1581,6 +1581,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8945. Update the description about replica placement in HDFS
 Architecture documentation. (Masatake Iwasaki via wang)
 
+HDFS-9292. Make TestFileConcorruption independent to underlying FsDataset
+Implementation. (lei)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/399ad009/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
index 8e0ffe7..c1a7ebb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
@@ -24,20 +24,16 @@ import static org.junit.Assert.assertTrue;
 
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
-import java.io.File;
 import java.io.FileOutputStream;
 import java.util.ArrayList;
-import java.util.Collection;
-import java.util.List;
+import java.util.Map;
 
-import org.apache.commons.io.FileUtils;
-import org.apache.commons.io.filefilter.DirectoryFileFilter;
-import org.apache.commons.io.filefilter.PrefixFileFilter;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
+import org.apache.hadoop.hdfs.protocol.BlockListAsLongs.BlockReportReplica;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
@@ -45,6 +41,7 @@ import 
org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils;
 import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.PathUtils;
 import org.apache.log4j.Level;
@@ -74,17 +71,17 @@ public class TestFileCorruption {
   FileSystem fs = cluster.getFileSystem();
   util.createFiles(fs, "/srcdat");
   // Now deliberately remove the blocks
-  File storageDir = cluster.getInstanceStorageDir(2, 0);
   String bpid = cluster.getNamesystem().getBlockPoolId();
-  File data_dir = MiniDFSCluster.getFinalizedDir(storageDir, bpid);
-  assertTrue("data directory does not exist", data_dir.exists());
-      Collection<File> blocks = FileUtils.listFiles(data_dir,
-          new PrefixFileFilter(Block.BLOCK_FILE_PREFIX),
-          DirectoryFileFilter.DIRECTORY);
-  assertTrue("Blocks do not exist in data-dir", blocks.size() > 0);
-  for (File block : blocks) {
-System.out.println("Deliberately removing file " + block.getName());
-assertTrue("Cannot remove file.", block.delete());
+  DataNode dn = cluster.getDataNodes().get(2);
+      Map<DatanodeStorage, BlockListAsLongs> blockReports =
+          dn.getFSDataset().getBlockReports(bpid);
+  assertTrue("Blocks do not exist on data-dir", !blockReports.isEmpty());
+  for (BlockListAsLongs report : blockReports.values()) {
+for (BlockReportReplica brr : report) {
+  LOG.info("Deliberately 

[44/50] hadoop git commit: HDFS-9291. Fix TestInterDatanodeProtocol to be FsDataset-agnostic. (lei)

2015-10-27 Thread jing9
HDFS-9291. Fix TestInterDatanodeProtocol to be FsDataset-agnostic. (lei)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/37bf6141
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/37bf6141
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/37bf6141

Branch: refs/heads/HDFS-8966
Commit: 37bf6141f10d6f4be138c965ea08032420b01f56
Parents: 677a936
Author: Lei Xu 
Authored: Mon Oct 26 15:16:09 2015 -0700
Committer: Lei Xu 
Committed: Mon Oct 26 15:18:24 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 2 ++
 .../hadoop/hdfs/server/datanode/FsDatasetTestUtils.java   | 7 +++
 .../datanode/fsdataset/impl/FsDatasetImplTestUtils.java   | 7 +++
 .../datanode/fsdataset/impl/TestInterDatanodeProtocol.java| 2 +-
 4 files changed, 17 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/37bf6141/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 397cb19..55c77f8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1576,6 +1576,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-9284. fsck command should not print exception trace when file not
 found. (Jagadesh Kiran N via wang)
 
+HDFS-9291. Fix TestInterDatanodeProtocol to be FsDataset-agnostic. (lei)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/37bf6141/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
index 252b285..eb986ff 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
@@ -192,4 +192,11 @@ public interface FsDatasetTestUtils {
*/
   Replica createReplicaUnderRecovery(ExtendedBlock block, long recoveryId)
   throws IOException;
+
+  /**
+   * Check the stored files / data of a replica.
+   * @param replica a replica object.
+   * @throws IOException
+   */
+  void checkStoredReplica(final Replica replica) throws IOException;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/37bf6141/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
index 3058b54..ed32fae 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
@@ -285,4 +285,11 @@ public class FsDatasetImplTestUtils implements FsDatasetTestUtils {
   return rur;
 }
   }
+
+  @Override
+  public void checkStoredReplica(Replica replica) throws IOException {
+Preconditions.checkArgument(replica instanceof ReplicaInfo);
+ReplicaInfo r = (ReplicaInfo) replica;
+FsDatasetImpl.checkReplicaFiles(r);
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/37bf6141/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
index 6cc3d7e..8581807 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
@@ -359,7 +359,7 @@ public class TestInterDatanodeProtoco

[22/50] hadoop git commit: HDFS-8808. dfs.image.transfer.bandwidthPerSec should not apply to -bootstrapStandby. Contributed by Zhe Zhang.

2015-10-27 Thread jing9
HDFS-8808. dfs.image.transfer.bandwidthPerSec should not apply to 
-bootstrapStandby. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ab3c4cff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ab3c4cff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ab3c4cff

Branch: refs/heads/HDFS-8966
Commit: ab3c4cff4af338caaa23be0ec383fc1fe473714f
Parents: d3a34a4
Author: Zhe Zhang 
Authored: Fri Oct 23 13:58:26 2015 -0700
Committer: Zhe Zhang 
Committed: Fri Oct 23 14:01:49 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  5 ++
 .../hdfs/server/namenode/Checkpointer.java  |  2 +-
 .../hdfs/server/namenode/ImageServlet.java  | 38 +---
 .../hdfs/server/namenode/SecondaryNameNode.java |  2 +-
 .../hdfs/server/namenode/TransferFsImage.java   |  5 +-
 .../server/namenode/ha/BootstrapStandby.java|  2 +-
 .../src/main/resources/hdfs-default.xml | 24 ++--
 .../hdfs/server/namenode/TestCheckpoint.java|  2 +-
 .../namenode/ha/TestBootstrapStandby.java   | 63 +++-
 10 files changed, 127 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab3c4cff/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 066ae02..23a54eb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1563,6 +1563,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9264. Minor cleanup of operations on FsVolumeList#volumes.
 (Walter Su via lei)
 
+HDFS-8808. dfs.image.transfer.bandwidthPerSec should not apply to
+-bootstrapStandby (zhz)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab3c4cff/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 275e638..5f9dde0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -520,6 +520,11 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String DFS_IMAGE_TRANSFER_RATE_KEY =
       "dfs.image.transfer.bandwidthPerSec";
   public static final long DFS_IMAGE_TRANSFER_RATE_DEFAULT = 0;  //no throttling
 
+  public static final String DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY =
+  "dfs.image.transfer-bootstrap-standby.bandwidthPerSec";
+  public static final long DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_DEFAULT =
+  0;  //no throttling
+
   // Image transfer timeout
   public static final String DFS_IMAGE_TRANSFER_TIMEOUT_KEY = 
"dfs.image.transfer.timeout";
   public static final int DFS_IMAGE_TRANSFER_TIMEOUT_DEFAULT = 60 * 1000;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab3c4cff/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
index 9087629..83d835ac 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
@@ -222,7 +222,7 @@ class Checkpointer extends Daemon {
 "image with txid " + sig.mostRecentCheckpointTxId);
 MD5Hash downloadedHash = TransferFsImage.downloadImageToStorage(
 backupNode.nnHttpAddress, sig.mostRecentCheckpointTxId, bnStorage,
-true);
+true, false);
 bnImage.saveDigestAndRenameCheckpointImage(NameNodeFile.IMAGE,
 sig.mostRecentCheckpointTxId, downloadedHash);
 lastApplied = sig.mostRecentCheckpointTxId;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab3c4cff/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/h
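
The ImageServlet hunk is cut off, but the selection it introduces is small: bootstrapStandby image fetches read the new key, all other image transfers keep the old one, and a rate of 0 still means unthrottled. A hedged sketch of that choice; the helper name is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.util.DataTransferThrottler;

// Sketch: pick the transfer rate from the bootstrap-standby key for
// -bootstrapStandby requests and from the old key otherwise.
final class ImageTransferThrottlingSketch {
  static DataTransferThrottler forRequest(Configuration conf,
      boolean isBootstrapStandby) {
    long rate = isBootstrapStandby
        ? conf.getLong(
            DFSConfigKeys.DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_KEY,
            DFSConfigKeys.DFS_IMAGE_TRANSFER_BOOTSTRAP_STANDBY_RATE_DEFAULT)
        : conf.getLong(DFSConfigKeys.DFS_IMAGE_TRANSFER_RATE_KEY,
            DFSConfigKeys.DFS_IMAGE_TRANSFER_RATE_DEFAULT);
    return rate <= 0 ? null : new DataTransferThrottler(rate); // null = unthrottled
  }
}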

[24/50] hadoop git commit: HDFS-9301. HDFS clients can't construct HdfsConfiguration instances. Contributed by Mingliang Liu.

2015-10-27 Thread jing9
HDFS-9301. HDFS clients can't construct HdfsConfiguration instances. 
Contributed by Mingliang Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15eb84b3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15eb84b3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15eb84b3

Branch: refs/heads/HDFS-8966
Commit: 15eb84b37e6c0195d59d3a29fbc5b7417bf022ff
Parents: b9e0417
Author: Haohui Mai 
Authored: Fri Oct 23 14:50:25 2015 -0700
Committer: Haohui Mai 
Committed: Fri Oct 23 14:53:34 2015 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |   2 +-
 .../hadoop/hdfs/DistributedFileSystem.java  |   2 +-
 .../apache/hadoop/hdfs/HdfsConfiguration.java   | 152 +++
 .../hadoop/hdfs/HdfsConfigurationLoader.java|  44 --
 .../hdfs/client/HdfsClientConfigKeys.java   |  53 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  90 +++
 .../apache/hadoop/hdfs/HdfsConfiguration.java   | 149 --
 8 files changed, 270 insertions(+), 225 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15eb84b3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 8416918..08f25f5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -722,7 +722,7 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
 static {
   //Ensure that HDFS Configuration files are loaded before trying to use
   // the renewer.
-  HdfsConfigurationLoader.init();
+  HdfsConfiguration.init();
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/15eb84b3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index 8ed892c..39cc42b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -112,7 +112,7 @@ public class DistributedFileSystem extends FileSystem {
   private boolean verifyChecksum = true;
 
   static{
-HdfsConfigurationLoader.init();
+HdfsConfiguration.init();
   }
 
   public DistributedFileSystem() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/15eb84b3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
new file mode 100644
index 000..e93e6f1
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
@@ -0,0 +1,152 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
+
+import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DeprecatedKeys;
+
+/**
+ * Adds deprecated keys into the configuration.

[32/50] hadoop git commit: YARN-4246. NPE while listing app attempt. (Nijel S F via rohithsharmaks)

2015-10-27 Thread jing9
YARN-4246. NPE while listing app attempt. (Nijel S F via rohithsharmaks)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b57f08c0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b57f08c0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b57f08c0

Branch: refs/heads/HDFS-8966
Commit: b57f08c0d2dae57b545a3baa213e18464060ae3b
Parents: 1aa735c
Author: Rohith Sharma K S 
Authored: Mon Oct 26 11:56:36 2015 +0530
Committer: Rohith Sharma K S 
Committed: Mon Oct 26 11:56:36 2015 +0530

--
 hadoop-yarn-project/CHANGES.txt |  2 ++
 .../hadoop/yarn/client/cli/ApplicationCLI.java  |  6 +++--
 .../hadoop/yarn/client/cli/TestYarnCLI.java | 23 
 3 files changed, 29 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b57f08c0/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0169163..192843e 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -997,6 +997,8 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3724. Use POSIX nftw(3) instead of fts(3) (Alan Burlison via aw)
 
+YARN-4246. NPE while listing app attempt. (Nijel S F via rohithsharmaks)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b57f08c0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
index e34675a..b486074 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
@@ -349,8 +349,9 @@ public class ApplicationCLI extends YarnCLI {
   appAttemptReportStr.println(appAttemptReport
   .getYarnApplicationAttemptState());
   appAttemptReportStr.print("\tAMContainer : ");
-  appAttemptReportStr.println(appAttemptReport.getAMContainerId()
-  .toString());
+  appAttemptReportStr
+  .println(appAttemptReport.getAMContainerId() == null ? "N/A"
+  : appAttemptReport.getAMContainerId().toString());
   appAttemptReportStr.print("\tTracking-URL : ");
   appAttemptReportStr.println(appAttemptReport.getTrackingUrl());
   appAttemptReportStr.print("\tRPC Port : ");
@@ -667,6 +668,7 @@ public class ApplicationCLI extends YarnCLI {
   writer.printf(APPLICATION_ATTEMPTS_PATTERN, appAttemptReport
   .getApplicationAttemptId(), appAttemptReport
   .getYarnApplicationAttemptState(), appAttemptReport
+  .getAMContainerId() == null ? "N/A" : appAttemptReport
   .getAMContainerId().toString(), appAttemptReport.getTrackingUrl());
 }
 writer.flush();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b57f08c0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
index 069ff7d..3dab504 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
@@ -1590,4 +1590,27 @@ public class TestYarnCLI {
   private static String normalize(String s) {
 return SPACES_PATTERN.matcher(s).replaceAll(" "); // single space
   }
+
+  @Test
+  public void testAppAttemptReportWhileContainerIsNotAssigned()
+  throws Exception {
+ApplicationCLI cli = createAndGetAppCLI();
+ApplicationId applicationId = ApplicationId.newInstance(1234, 5);
+ApplicationAttemptId attemptId =
+ApplicationAttemptId.newInstance(applicationId, 1);
+ApplicationAttemptReport attemptReport =
+ApplicationAttemptReport.newInstance(attemptId, "host", 124, "url",
+"oUrl", "diagnostics", YarnApplicationAttemptState.SCHEDULED, null,
+1000l, 2000l);
+when(client.getApplicationAttemptReport(any(Applicatio
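
The truncated test pins down the fix: an attempt still in SCHEDULED state has no AM container, so getAMContainerId() returns null and the CLI must print "N/A" instead of dereferencing it. Since Java 7 the same guard is a JDK one-liner; a sketch:

import java.util.Objects;

// Objects.toString(obj, fallback) yields the fallback when obj is null,
// the same "N/A" guard the patch adds by hand in two places.
final class CliFormatSketch {
  static String orNA(Object value) {
    return Objects.toString(value, "N/A");
  }
}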

[17/50] hadoop git commit: YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for each queue. (Siqi Li via mingma)

2015-10-27 Thread jing9
YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for 
each queue. (Siqi Li via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/934d96a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/934d96a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/934d96a3

Branch: refs/heads/HDFS-8966
Commit: 934d96a334598fcf0e5aba2043ff539469025f69
Parents: f8adeb7
Author: Ming Ma 
Authored: Fri Oct 23 08:36:33 2015 -0700
Committer: Ming Ma 
Committed: Fri Oct 23 08:36:33 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../scheduler/fair/AllocationConfiguration.java | 26 +---
 .../fair/AllocationFileLoaderService.java   | 15 ---
 .../fair/TestAllocationFileLoaderService.java   | 18 +-
 .../src/site/markdown/FairScheduler.md  |  3 +++
 5 files changed, 56 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/934d96a3/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 53bf85a..125ff94 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -531,6 +531,9 @@ Release 2.8.0 - UNRELEASED
YARN-4243. Add retry on establishing ZooKeeper connection in 
 EmbeddedElectorService#serviceInit. (Xuan Gong via junping_du)
 
+YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for
+each queue. (Siqi Li via mingma)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/934d96a3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
index 0ea7314..bf4eae8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.Resource;
 import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSchedulerConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceWeights;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -36,7 +38,8 @@ import com.google.common.annotations.VisibleForTesting;
 public class AllocationConfiguration extends ReservationSchedulerConfiguration 
{
   private static final AccessControlList EVERYBODY_ACL = new 
AccessControlList("*");
   private static final AccessControlList NOBODY_ACL = new AccessControlList(" 
");
-  
+  private static final ResourceCalculator RESOURCE_CALCULATOR =
+  new DefaultResourceCalculator();
   // Minimum resource allocation for each queue
   private final Map minQueueResources;
   // Maximum amount of resources per queue
@@ -53,6 +56,7 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   final Map userMaxApps;
   private final int userMaxAppsDefault;
   private final int queueMaxAppsDefault;
+  private final Resource queueMaxResourcesDefault;
 
   // Maximum resource share for each leaf queue that can be used to run AMs
   final Map queueMaxAMShares;
@@ -99,7 +103,8 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   Map queueMaxApps, Map userMaxApps,
   Map queueWeights,
   Map queueMaxAMShares, int userMaxAppsDefault,
-  int queueMaxAppsDefault, float queueMaxAMShareDefault,
+  int queueMaxAppsDefault, Resource queueMaxResourcesDefault,
+  float queueMaxAMShareDefault,
   Map schedulingPolicies,
   SchedulingPolicy defaultSchedulingPolicy,
   Map minSharePreempti
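
The new queueMaxResourcesDefault field gives queues that set no explicit maxResources a configurable cluster-wide ceiling, analogous to the existing queueMaxAppsDefault. A hedged sketch of the lookup-with-fallback this enables (simplified; the real logic lives in AllocationConfiguration#getMaxResources):

    // Fall back to the configured default when a queue sets no explicit max.
    Resource getMaxResources(String queueName) {
      Resource queueMax = maxQueueResources.get(queueName);
      return (queueMax != null) ? queueMax : queueMaxResourcesDefault;
    }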

[37/50] hadoop git commit: HDFS-9304. Add HdfsClientConfigKeys class to TestHdfsConfigFields#configurationClasses. Contributed by Mingliang Liu.

2015-10-27 Thread jing9
HDFS-9304. Add HdfsClientConfigKeys class to 
TestHdfsConfigFields#configurationClasses. Contributed by Mingliang Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/67e3d75a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/67e3d75a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/67e3d75a

Branch: refs/heads/HDFS-8966
Commit: 67e3d75aed1c1a90cabffc552d5743a69ea28b54
Parents: 123b3db
Author: Haohui Mai 
Authored: Mon Oct 26 12:02:52 2015 -0700
Committer: Haohui Mai 
Committed: Mon Oct 26 12:02:52 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java  | 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/67e3d75a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e4dc598..7ce5a09 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2135,6 +2135,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9301. HDFS clients can't construct HdfsConfiguration instances.
 (Mingliang Liu via wheat9)
 
+HDFS-9304. Add HdfsClientConfigKeys class to TestHdfsConfigFields
+#configurationClasses. (Mingliang Liu via wheat9)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67e3d75a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
index 1c029fe..9637f59 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
@@ -22,6 +22,7 @@ import java.util.HashSet;
 
 import org.apache.hadoop.conf.TestConfigurationFieldsBase;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 
 /**
  * Unit test class to compare the following MR Configuration classes:
@@ -39,7 +40,8 @@ public class TestHdfsConfigFields extends 
TestConfigurationFieldsBase {
   @Override
   public void initializeMemberVariables() {
 xmlFilename = new String("hdfs-default.xml");
-configurationClasses = new Class[] { DFSConfigKeys.class };
+configurationClasses = new Class[] { HdfsClientConfigKeys.class,
+DFSConfigKeys.class};
 
 // Set error modes
 errorIfMissingConfigProps = true;



[23/50] hadoop git commit: HDFS-9290. DFSClient#callAppend() is not backward compatible for slightly older NameNodes. Contributed by Tony Wu.

2015-10-27 Thread jing9
HDFS-9290. DFSClient#callAppend() is not backward compatible for slightly older 
NameNodes. Contributed by Tony Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b9e0417b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b9e0417b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b9e0417b

Branch: refs/heads/HDFS-8966
Commit: b9e0417bdf2b9655dc4256bdb43683eca1ab46be
Parents: ab3c4cf
Author: Kihwal Lee 
Authored: Fri Oct 23 16:37:45 2015 -0500
Committer: Kihwal Lee 
Committed: Fri Oct 23 16:37:45 2015 -0500

--
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java  | 8 +++-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 2 files changed, 10 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9e0417b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index b460f26..8416918 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1296,8 +1296,14 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 try {
   LastBlockWithStatus blkWithStatus = namenode.append(src, clientName,
   new EnumSetWritable<>(flag, CreateFlag.class));
+  HdfsFileStatus status = blkWithStatus.getFileStatus();
+  if (status == null) {
+DFSClient.LOG.debug("NameNode is on an older version, request file " +
+"info with additional RPC call for file: " + src);
+status = getFileInfo(src);
+  }
   return DFSOutputStream.newStreamForAppend(this, src, flag, progress,
-  blkWithStatus.getLastBlock(), blkWithStatus.getFileStatus(),
+  blkWithStatus.getLastBlock(), status,
   dfsClientConf.createChecksum(null), favoredNodes);
 } catch(RemoteException re) {
   throw re.unwrapRemoteException(AccessControlException.class,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9e0417b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 23a54eb..e9e07d4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -2189,6 +2189,9 @@ Release 2.7.2 - UNRELEASED
 HDFS-9220. Reading small file (< 512 bytes) that is open for append fails
 due to incorrect checksum (Jing Zhao via kihwal)
 
+HDFS-9290. DFSClient#callAppend() is not backward compatible for slightly
+older NameNodes. (Tony Wu via kihwal)
+
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES



[03/50] hadoop git commit: HADOOP-9692. SequenceFile reader throws EOFException on zero-length file. Contributed by Zhe Zhang and Chu Tong.

2015-10-27 Thread jing9
HADOOP-9692. SequenceFile reader throws EOFException on zero-length file. 
Contributed by Zhe Zhang and Chu Tong.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b5ca649b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b5ca649b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b5ca649b

Branch: refs/heads/HDFS-8966
Commit: b5ca649bff01c906033d71c9f983b4cdaa71a9d1
Parents: 3dadf36
Author: Tsuyoshi Ozawa 
Authored: Thu Oct 22 11:55:25 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Thu Oct 22 11:55:25 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../java/org/apache/hadoop/io/SequenceFile.java | 15 +++---
 .../org/apache/hadoop/io/TestSequenceFile.java  | 21 
 3 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5ca649b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7edf5cd..5acf369 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1251,6 +1251,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12483. Maintain wrapped SASL ordering for postponed IPC responses.
 (Daryn Sharp via yliu)
 
+HADOOP-9692. SequenceFile reader throws EOFException on zero-length file.
+(Zhe Zhang and Chu Tong via ozawa)
+
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5ca649b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
index e37e855..ed57eee 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
@@ -1912,17 +1912,26 @@ public class SequenceFile {
  */
 private void init(boolean tempReader) throws IOException {
   byte[] versionBlock = new byte[VERSION.length];
-  in.readFully(versionBlock);
+  String exceptionMsg = this + " not a SequenceFile";
+
+  // Try to read sequence file header.
+  try {
+in.readFully(versionBlock);
+  } catch (EOFException e) {
+throw new EOFException(exceptionMsg);
+  }
 
   if ((versionBlock[0] != VERSION[0]) ||
   (versionBlock[1] != VERSION[1]) ||
-  (versionBlock[2] != VERSION[2]))
+  (versionBlock[2] != VERSION[2])) {
 throw new IOException(this + " not a SequenceFile");
+  }
 
   // Set 'version'
   version = versionBlock[3];
-  if (version > VERSION[3])
+  if (version > VERSION[3]) {
 throw new VersionMismatchException(VERSION[3], version);
+  }
 
   if (version < BLOCK_COMPRESS_VERSION) {
 UTF8 className = new UTF8();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5ca649b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
index 7495c6e..99c97db 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
@@ -522,6 +522,27 @@ public class TestSequenceFile extends TestCase {
 assertTrue("InputStream for " + path + " should have been closed.", 
openedFile[0].isClosed());
   }
 
+  /**
+   * Test to make sure a zero-length sequence file is handled properly while
+   * initializing.
+   */
+  public void testInitZeroLengthSequenceFile() throws IOException {
+Configuration conf = new Configuration();
+LocalFileSystem fs = FileSystem.getLocal(conf);
+
+// create an empty file (which is not a valid sequence file)
+Path path = new Path(System.getProperty("test.build.data", ".") +
+  "/zerolength.seq");
+fs.create(path).close();
+
+try {
+  new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
+  fail("IOException expected.");
+} catch (IOException expected) {
+  assertTrue(expected instanceof EOFException);
+}
+  }
+
/
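
With the guarded header read above, opening a zero-length file fails with an EOFException whose message names the reader instead of surfacing a bare end-of-file error. A hedged usage sketch mirroring the new test (assumes a Configuration `conf` and an empty Path `path`):

    try {
      new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
    } catch (EOFException e) {
      // The message now reads roughly "<reader> not a SequenceFile"
      // rather than an empty EOFException.
    }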

[15/50] hadoop git commit: HADOOP-7266. Deprecate metrics v1. Contributed by Akira AJISAKA.

2015-10-27 Thread jing9
HADOOP-7266. Deprecate metrics v1. Contributed by Akira AJISAKA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/35a303df
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/35a303df
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/35a303df

Branch: refs/heads/HDFS-8966
Commit: 35a303dfbe348f96c465fb8778ced6b5bb617e89
Parents: 124a412
Author: Tsuyoshi Ozawa 
Authored: Fri Oct 23 23:47:51 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Fri Oct 23 23:47:51 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 ++
 .../src/main/java/org/apache/hadoop/http/HttpServer2.java   | 2 ++
 .../src/main/java/org/apache/hadoop/metrics/ContextFactory.java | 3 +++
 .../src/main/java/org/apache/hadoop/metrics/MetricsContext.java | 3 +++
 .../main/java/org/apache/hadoop/metrics/MetricsException.java   | 2 ++
 .../src/main/java/org/apache/hadoop/metrics/MetricsRecord.java  | 3 +++
 .../src/main/java/org/apache/hadoop/metrics/MetricsServlet.java | 3 +++
 .../src/main/java/org/apache/hadoop/metrics/MetricsUtil.java| 2 ++
 .../src/main/java/org/apache/hadoop/metrics/Updater.java| 3 +++
 .../java/org/apache/hadoop/metrics/ganglia/GangliaContext.java  | 5 -
 .../org/apache/hadoop/metrics/ganglia/GangliaContext31.java | 4 
 .../main/java/org/apache/hadoop/metrics/jvm/EventCounter.java   | 3 +++
 .../src/main/java/org/apache/hadoop/metrics/jvm/JvmMetrics.java | 3 +++
 .../org/apache/hadoop/metrics/spi/AbstractMetricsContext.java   | 3 +++
 .../java/org/apache/hadoop/metrics/spi/CompositeContext.java| 4 
 .../main/java/org/apache/hadoop/metrics/spi/MetricValue.java| 1 +
 .../java/org/apache/hadoop/metrics/spi/MetricsRecordImpl.java   | 4 
 .../org/apache/hadoop/metrics/spi/NoEmitMetricsContext.java | 3 +++
 .../main/java/org/apache/hadoop/metrics/spi/NullContext.java| 2 ++
 .../apache/hadoop/metrics/spi/NullContextWithUpdateThread.java  | 2 ++
 .../main/java/org/apache/hadoop/metrics/spi/OutputRecord.java   | 3 +++
 .../src/main/java/org/apache/hadoop/metrics/spi/Util.java   | 3 +++
 .../src/main/java/org/apache/hadoop/metrics/util/MBeanUtil.java | 2 ++
 .../main/java/org/apache/hadoop/metrics/util/MetricsBase.java   | 2 ++
 .../org/apache/hadoop/metrics/util/MetricsDynamicMBeanBase.java | 3 ++-
 .../java/org/apache/hadoop/metrics/util/MetricsIntValue.java| 2 ++
 .../java/org/apache/hadoop/metrics/util/MetricsLongValue.java   | 2 ++
 .../java/org/apache/hadoop/metrics/util/MetricsRegistry.java| 2 ++
 .../org/apache/hadoop/metrics/util/MetricsTimeVaryingInt.java   | 3 ++-
 .../org/apache/hadoop/metrics/util/MetricsTimeVaryingLong.java  | 3 ++-
 .../org/apache/hadoop/metrics/util/MetricsTimeVaryingRate.java  | 2 ++
 .../test/java/org/apache/hadoop/metrics/TestMetricsServlet.java | 1 +
 .../org/apache/hadoop/metrics/ganglia/TestGangliaContext.java   | 1 +
 .../java/org/apache/hadoop/metrics/spi/TestOutputRecord.java| 1 +
 34 files changed, 83 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/35a303df/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a7a1d1b..87308f9 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -905,6 +905,8 @@ Release 2.8.0 - UNRELEASED
 HADOOP-9692. Improving log message when SequenceFile reader throws
 EOFException on zero-length file. (Zhe Zhang and Chu Tong via ozawa)
 
+HADOOP-7266. Deprecate metrics v1. (Akira AJISAKA via ozawa)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/35a303df/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
index 6fd34d5..d593205 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
@@ -541,7 +541,9 @@ public final class HttpServer2 implements FilterContainer {
 
   /**
* Add default servlets.
+   * Note: /metrics servlet will be removed in 3.X release.
*/
+  @SuppressWarnings("deprecation")
   protected void addDefaultServlets() {
 // set up default servlets
 addServlet("stacks", "/stacks", StackServlet.class)

[49/50] hadoop git commit: YARN-3216. Max-AM-Resource-Percentage should respect node labels. (Sunil G via wangda)

2015-10-27 Thread jing9
YARN-3216. Max-AM-Resource-Percentage should respect node labels. (Sunil G via 
wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/56e4f623
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/56e4f623
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/56e4f623

Branch: refs/heads/HDFS-8966
Commit: 56e4f6237ae8b1852e82b186e08db3934f79a9db
Parents: 6f60621
Author: Wangda Tan 
Authored: Mon Oct 26 16:44:39 2015 -0700
Committer: Wangda Tan 
Committed: Mon Oct 26 16:44:39 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../scheduler/SchedulerApplicationAttempt.java  |  21 +-
 .../scheduler/capacity/CSQueueUtils.java|   5 +
 .../CapacitySchedulerConfiguration.java |  19 +-
 .../scheduler/capacity/LeafQueue.java   | 242 ++---
 .../scheduler/capacity/QueueCapacities.java |  15 +-
 .../scheduler/common/fica/FiCaSchedulerApp.java |  46 +-
 .../yarn/server/resourcemanager/MockRM.java |  36 +-
 .../capacity/TestApplicationLimits.java |   5 +
 .../TestApplicationLimitsByPartition.java   | 511 +++
 .../TestCapacitySchedulerNodeLabelUpdate.java   | 103 +++-
 .../scheduler/capacity/TestUtils.java   |   6 +-
 12 files changed, 904 insertions(+), 108 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56e4f623/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index b51d89e..5f40669 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -546,6 +546,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4285. Display resource usage as percentage of queue and cluster in the
 RM UI (Varun Vasudev via wangda)
 
+YARN-3216. Max-AM-Resource-Percentage should respect node labels. 
+(Sunil G via wangda)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/56e4f623/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
index 23f00e0..c5f8def 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
@@ -47,6 +47,7 @@ import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
+import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;
 import org.apache.hadoop.yarn.server.api.ContainerType;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.AggregateAppResourceUsage;
@@ -146,7 +147,9 @@ public class SchedulerApplicationAttempt implements 
SchedulableEntity {
 
   protected Queue queue;
   protected boolean isStopped = false;
-  
+
+  private String appAMNodePartitionName = CommonNodeLabelsManager.NO_LABEL;
+
   protected final RMContext rmContext;
   
   public SchedulerApplicationAttempt(ApplicationAttemptId 
applicationAttemptId, 
@@ -247,10 +250,18 @@ public class SchedulerApplicationAttempt implements 
SchedulableEntity {
 return attemptResourceUsage.getAMUsed();
   }
 
+  public Resource getAMResource(String label) {
+return attemptResourceUsage.getAMUsed(label);
+  }
+
   public void setAMResource(Resource amResource) {
 attemptResourceUsage.setAMUsed(amResource);
   }
 
+  public void setAMResource(String label, Resource amResource) {
+attemptResourceUsage.setAMUsed(label, amResource);
+  }
+
   public boolean isAmRunning() {
 return amRunning;
   }
@@ -886,4 +897,12 @@ public class SchedulerApplicationAttempt implements 
SchedulableEntity {
   SchedContainerChangeRequest increaseRequest) {
 changeContainerResource(increaseRequest, true);
   }
+

[33/50] hadoop git commit: YARN-3528. Tests with 12345 as hard-coded port break jenkins. Contributed by Brahma Reddy Battula.

2015-10-27 Thread jing9
YARN-3528. Tests with 12345 as hard-coded port break jenkins. Contributed by 
Brahma Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce60b4fc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce60b4fc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce60b4fc

Branch: refs/heads/HDFS-8966
Commit: ce60b4fc8b72afaf475517df3638900abb8843ae
Parents: b57f08c
Author: Tsuyoshi Ozawa 
Authored: Mon Oct 26 16:45:11 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Mon Oct 26 16:45:11 2015 +0900

--
 .../org/apache/hadoop/net/ServerSocketUtil.java |  2 +-
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../nodemanager/TestNodeManagerReboot.java  | 13 ++
 .../nodemanager/TestNodeManagerResync.java  | 22 +++-
 .../nodemanager/TestNodeManagerShutdown.java| 27 
 .../nodemanager/TestNodeStatusUpdater.java  | 27 +---
 .../BaseContainerManagerTest.java   |  4 +--
 .../TestContainerManagerRecovery.java   |  3 ++-
 .../monitor/TestContainersMonitor.java  |  1 -
 9 files changed, 66 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce60b4fc/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
index 3685162..a3e1fff 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
@@ -48,8 +48,8 @@ public class ServerSocketUtil {
   if (tryPort == 0) {
 continue;
   }
-  LOG.info("Using port " + tryPort);
   try (ServerSocket s = new ServerSocket(tryPort)) {
+LOG.info("Using port " + tryPort);
 return tryPort;
   } catch (IOException e) {
 tries++;
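
Moving the log statement inside the try block means a port is only reported once the ServerSocket bind has proven it free. For tests that previously hard-coded 12345, a hedged usage sketch (getPort takes a preferred port and a retry budget; the address key is illustrative):

    // Ask for a preferred port; retry on other ports if it is taken.
    int port = ServerSocketUtil.getPort(49152, 10);
    conf.set(YarnConfiguration.NM_ADDRESS, "127.0.0.1:" + port);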

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce60b4fc/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 192843e..d06d026 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -999,6 +999,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-4246. NPE while listing app attempt. (Nijel S F via rohithsharmaks)
 
+YARN-3528. Tests with 12345 as hard-coded port break jenkins.
+(Brahma Reddy Battula via ozawa)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce60b4fc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerReboot.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerReboot.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerReboot.java
index 41c16a9..9fb8ebf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerReboot.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerReboot.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.yarn.api.ContainerManagementProtocol;
 import org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesRequest;
@@ -215,7 +216,7 @@ public class TestNodeManagerReboot {
 
   }
 
-  private void restartNM(int maxTries) {
+  private void restartNM(int maxTries) throws IOException {
 nm.stop();
 nm = new MyNodeManager();
 nm.start();
@@ -296,7 +297,7 @@ public class TestNodeManagerReboot {
 
   private class MyNodeManager extends NodeManager {
 
-public MyNodeManager() {
+public MyNodeManager() throws IOException {
   super();
   this.init(createNMConfig());
 }
@@ -315,11 +316,13 @@ public class TestNodeManagerRe

[18/50] hadoop git commit: MAPREDUCE-6508. TestNetworkedJob fails consistently due to delegation token changes on RM. Contributed by Akira AJISAKA

2015-10-27 Thread jing9
MAPREDUCE-6508. TestNetworkedJob fails consistently due to delegation token 
changes on RM. Contributed by Akira AJISAKA


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eb6379ca
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eb6379ca
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eb6379ca

Branch: refs/heads/HDFS-8966
Commit: eb6379ca25e1bb6d3978bd3a021723c38c95bec9
Parents: 934d96a
Author: Junping Du 
Authored: Fri Oct 23 10:05:46 2015 -0700
Committer: Junping Du 
Committed: Fri Oct 23 10:05:46 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +++
 .../org/apache/hadoop/mapred/TestNetworkedJob.java  | 16 
 2 files changed, 3 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eb6379ca/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index c29a92c..c6d72e8 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -605,6 +605,9 @@ Release 2.8.0 - UNRELEASED
 
MAPREDUCE-6495. Docs for archive-logs tool (rkanter)
 
+   MAPREDUCE-6508. TestNetworkedJob fails consistently due to delegation 
+   token changes on RM. (Akira AJISAKA via junping_du)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eb6379ca/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestNetworkedJob.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestNetworkedJob.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestNetworkedJob.java
index cfe4705..2e0887e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestNetworkedJob.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestNetworkedJob.java
@@ -36,8 +36,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import 
org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier;
-import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapred.ClusterStatus.BlackListInfo;
 import org.apache.hadoop.mapred.JobClient.NetworkedJob;
 import org.apache.hadoop.mapred.JobClient.TaskStatusFilter;
@@ -49,8 +47,6 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import org.junit.Test;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.hadoop.security.token.Token;
 
 public class TestNetworkedJob {
   private static String TEST_ROOT_DIR = new File(System.getProperty(
@@ -221,11 +217,6 @@ public class TestNetworkedJob {
   status2.getBlackListedTrackersInfo());
   assertEquals(status.getMapTasks(), status2.getMapTasks());
 
-  try {
-  } catch (RuntimeException e) {
-assertTrue(e.getMessage().endsWith("not found on CLASSPATH"));
-  }
-
   // test taskStatusfilter
   JobClient.setTaskOutputFilter(job, TaskStatusFilter.ALL);
   assertEquals(JobClient.getTaskOutputFilter(job), TaskStatusFilter.ALL);
@@ -256,15 +247,8 @@ public class TestNetworkedJob {
   assertEquals(aai.length, 2);
   assertEquals(aai[0].getQueueName(), "root");
   assertEquals(aai[1].getQueueName(), "default");
-  // test token
-  Token token = client
-  .getDelegationToken(new Text(UserGroupInformation.getCurrentUser()
-  .getShortUserName()));
-  assertEquals(token.getKind().toString(), "RM_DELEGATION_TOKEN");
   
   // test JobClient
-  
-   
   // The following asserts read JobStatus twice and ensure the returned
   // JobStatus objects correspond to the same Job.
   assertEquals("Expected matching JobIDs", jobId, client.getJob(jobId)



[36/50] hadoop git commit: HADOOP-12513. Dockerfile lacks initial 'apt-get update'. Contributed by Akihiro Suda.

2015-10-27 Thread jing9
HADOOP-12513. Dockerfile lacks initial 'apt-get update'. Contributed by Akihiro 
Suda.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/123b3db7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/123b3db7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/123b3db7

Branch: refs/heads/HDFS-8966
Commit: 123b3db743a86aa18e46ec44a08f7b2e7c7f6350
Parents: 5acdde4
Author: Tsuyoshi Ozawa 
Authored: Mon Oct 26 23:17:45 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Mon Oct 26 23:18:52 2015 +0900

--
 dev-support/docker/Dockerfile   | 7 ---
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/123b3db7/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index c8453cc..bc09ef2 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -24,9 +24,10 @@ FROM ubuntu:trusty
 
 WORKDIR /root
 
-RUN apt-get install -y software-properties-common
-RUN add-apt-repository -y ppa:webupd8team/java
-RUN apt-get update
+RUN apt-get update && \
+apt-get install -y software-properties-common && \
+add-apt-repository -y ppa:webupd8team/java && \
+apt-get update
 
 # Auto-accept the Oracle JDK license
 RUN echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select 
true | sudo /usr/bin/debconf-set-selections

http://git-wip-us.apache.org/repos/asf/hadoop/blob/123b3db7/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 87308f9..4764488 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1259,6 +1259,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12483. Maintain wrapped SASL ordering for postponed IPC responses.
 (Daryn Sharp via yliu)
 
+HADOOP-12513. Dockerfile lacks initial 'apt-get update'.
+(Akihiro Suda via ozawa)
+
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()



[45/50] hadoop git commit: HDFS-8945. Update the description about replica placement in HDFS Architecture documentation. Contributed by Masatake Iwasaki.

2015-10-27 Thread jing9
HDFS-8945. Update the description about replica placement in HDFS Architecture 
documentation. Contributed by Masatake Iwasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e8aefdf0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e8aefdf0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e8aefdf0

Branch: refs/heads/HDFS-8966
Commit: e8aefdf08bc79a0ad537c1b7a1dc288aabd399b9
Parents: 37bf614
Author: Andrew Wang 
Authored: Mon Oct 26 15:47:26 2015 -0700
Committer: Andrew Wang 
Committed: Mon Oct 26 15:47:59 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  3 +++
 .../hadoop-hdfs/src/site/markdown/HdfsDesign.md  | 19 +++
 2 files changed, 22 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e8aefdf0/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 55c77f8..478d48b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1578,6 +1578,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-9291. Fix TestInterDatanodeProtocol to be FsDataset-agnostic. (lei)
 
+HDFS-8945. Update the description about replica placement in HDFS
+Architecture documentation. (Masatake Iwasaki via wang)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e8aefdf0/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
index 5b1f66e..4c0d44e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
@@ -121,8 +121,27 @@ A simple but non-optimal policy is to place replicas on 
unique racks. This preve
 
 For the common case, when the replication factor is three, HDFS’s placement 
policy is to put one replica on one node in the local rack, another on a 
different node in the local rack, and the last on a different node in a 
different rack. This policy cuts the inter-rack write traffic which generally 
improves write performance. The chance of rack failure is far less than that of 
node failure; this policy does not impact data reliability and availability 
guarantees. However, it does reduce the aggregate network bandwidth used when 
reading data since a block is placed in only two unique racks rather than 
three. With this policy, the replicas of a file do not evenly distribute across 
the racks. One third of replicas are on one node, two thirds of replicas are on 
one rack, and the other third are evenly distributed across the remaining 
racks. This policy improves write performance without compromising data 
reliability or read performance.
 
+If the replication factor is greater than 3,
+the placement of the 4th and following replicas is determined randomly
+while keeping the number of replicas per rack below the upper limit
+(which is basically `(replicas - 1) / racks + 2`).
+
+Because the NameNode does not allow DataNodes to have multiple replicas of the 
same block,
+the maximum number of replicas created is the total number of DataNodes at that 
time.
+
+After the support for
+[Storage Types and Storage Policies](ArchivalStorage.html) was added to HDFS,
+the NameNode takes the policy into account for replica placement
+in addition to the rack awareness described above.
+The NameNode chooses nodes based on rack awareness at first,
+then checks that the candidate node has the storage required by the policy 
associated with the file.
+If the candidate node does not have the storage type, the NameNode looks for 
another node.
+If enough nodes to place replicas cannot be found in the first pass,
+the NameNode looks for nodes having fallback storage types in the second pass.
+
 The current, default replica placement policy described here is a work in 
progress.
 
+
 ### Replica Selection
 
 To minimize global bandwidth consumption and read latency, HDFS tries to 
satisfy a read request from a replica that is closest to the reader. If there 
exists a replica on the same rack as the reader node, then that replica is 
preferred to satisfy the read request. If an HDFS cluster spans multiple 
data centers, then a replica that is resident in the local data center is 
preferred over any remote replica.
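
As a worked example of the per-rack upper limit quoted above: with replication factor 5 across 3 racks, integer arithmetic gives (5 - 1) / 3 + 2 = 3, so no single rack ends up holding more than three of the five replicas.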



[16/50] hadoop git commit: YARN-4009. CORS support for ResourceManager REST API. ( Varun Vasudev via jeagles)

2015-10-27 Thread jing9
YARN-4009. CORS support for ResourceManager REST API. ( Varun Vasudev via 
jeagles)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f8adeb71
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f8adeb71
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f8adeb71

Branch: refs/heads/HDFS-8966
Commit: f8adeb712dc834c27cec15c04a986f2f635aba83
Parents: 35a303d
Author: Jonathan Eagles 
Authored: Fri Oct 23 10:34:08 2015 -0500
Committer: Jonathan Eagles 
Committed: Fri Oct 23 10:34:08 2015 -0500

--
 .../HttpCrossOriginFilterInitializer.java   |  74 +
 .../hadoop/security/http/CrossOriginFilter.java | 259 +++
 .../src/main/resources/core-default.xml |  36 ++
 .../src/site/markdown/HttpAuthentication.md |  14 +
 .../TestHttpCrossOriginFilterInitializer.java   |  58 
 .../security/http/TestCrossOriginFilter.java| 330 +++
 hadoop-yarn-project/CHANGES.txt |   2 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  10 +
 .../src/main/resources/yarn-default.xml |  16 +
 .../ApplicationHistoryServer.java   |  14 +-
 .../timeline/webapp/CrossOriginFilter.java  | 259 ---
 .../webapp/CrossOriginFilterInitializer.java|  40 +--
 .../timeline/webapp/TestCrossOriginFilter.java  | 324 --
 .../TestCrossOriginFilterInitializer.java   |  57 
 .../hadoop-yarn-server-common/pom.xml   |   5 +
 .../server/nodemanager/webapp/WebServer.java|  10 +-
 .../server/resourcemanager/ResourceManager.java |  14 +-
 .../src/site/markdown/NodeManagerRest.md|  10 +-
 .../src/site/markdown/ResourceManagerRest.md|   8 +
 19 files changed, 871 insertions(+), 669 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f8adeb71/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/HttpCrossOriginFilterInitializer.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/HttpCrossOriginFilterInitializer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/HttpCrossOriginFilterInitializer.java
new file mode 100644
index 000..f9c1816
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/HttpCrossOriginFilterInitializer.java
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.security;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.http.FilterContainer;
+import org.apache.hadoop.http.FilterInitializer;
+import org.apache.hadoop.security.http.CrossOriginFilter;
+
+public class HttpCrossOriginFilterInitializer extends FilterInitializer {
+
+  public static final String PREFIX = "hadoop.http.cross-origin.";
+  public static final String ENABLED_SUFFIX = "enabled";
+
+  private static final Log LOG =
+  LogFactory.getLog(HttpCrossOriginFilterInitializer.class);
+
+  @Override
+  public void initFilter(FilterContainer container, Configuration conf) {
+
+String key = getEnabledConfigKey();
+boolean enabled = conf.getBoolean(key, false);
+if (enabled) {
+  container.addGlobalFilter("Cross Origin Filter",
+  CrossOriginFilter.class.getName(),
+  getFilterParameters(conf, getPrefix()));
+} else {
+  LOG.info("CORS filter not enabled. Please set " + key
+  + " to 'true' to enable it");
+}
+  }
+
+  protected static Map getFilterParameters(Configuration conf,
+  String prefix) {
+Map filterParams = new HashMap();
+for (Map.Entry entry : conf.getValByRegex(prefix)
+.entrySet()) {
+  String name = entry.getKey();
+  String value = entry.getValue();
+  name = name.substring(prefix.length());
+  filterParams.put(name, va
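
Because the initializer now lives in hadoop-common, CORS can be enabled for any Hadoop daemon through configuration alone. A hedged sketch using the constants shown above, whose concatenation yields the key "hadoop.http.cross-origin.enabled":

    Configuration conf = new Configuration();
    conf.setBoolean(HttpCrossOriginFilterInitializer.PREFIX
        + HttpCrossOriginFilterInitializer.ENABLED_SUFFIX, true);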

[38/50] hadoop git commit: YARN-4284. condition for AM blacklisting is too narrow. Contributed by Sangjin Lee

2015-10-27 Thread jing9
YARN-4284. condition for AM blacklisting is too narrow. Contributed by Sangjin 
Lee


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/33a03af3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/33a03af3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/33a03af3

Branch: refs/heads/HDFS-8966
Commit: 33a03af3c396097929b9cd9c790d7f52eddc13e0
Parents: 67e3d75
Author: Jason Lowe 
Authored: Mon Oct 26 19:53:03 2015 +
Committer: Jason Lowe 
Committed: Mon Oct 26 19:53:03 2015 +

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../rmapp/attempt/RMAppAttemptImpl.java |   3 +-
 .../applicationsmanager/TestAMRestart.java  | 112 +--
 3 files changed, 83 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/33a03af3/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 8b489e1..8cc1bbd 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1008,6 +1008,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4223. Fixed findbugs warnings in hadoop-yarn-server-nodemanager 
project
 (varun saxena via rohithsharmaks)
 
+YARN-4284. condition for AM blacklisting is too narrow (Sangjin Lee via
+jlowe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/33a03af3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
index 36fb9fc..48144c3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
@@ -1417,7 +1417,8 @@ public class RMAppAttemptImpl implements RMAppAttempt, 
Recoverable {
   }
 
   private boolean shouldCountTowardsNodeBlacklisting(int exitStatus) {
-return exitStatus == ContainerExitStatus.DISKS_FAILED;
+return !(exitStatus == ContainerExitStatus.SUCCESS
+|| exitStatus == ContainerExitStatus.PREEMPTED);
   }
 
   private static final class UnmanagedAMAttemptSavedTransition
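
The one-line predicate change above inverts the blacklisting condition: rather than counting only DISKS_FAILED, every abnormal AM container exit now counts against the node, with clean success and preemption exempted. Applying De Morgan's law to the new return statement makes the broadened scope explicit (a restatement of the hunk, not new behavior):

    // Equivalent form of the new predicate:
    return exitStatus != ContainerExitStatus.SUCCESS
        && exitStatus != ContainerExitStatus.PREEMPTED;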

http://git-wip-us.apache.org/repos/asf/hadoop/blob/33a03af3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
index 688ca9a..acacc40 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
@@ -383,7 +383,7 @@ public class TestAMRestart {
   public void testAMBlacklistPreventsRestartOnSameNode() throws Exception {
 YarnConfiguration conf = new YarnConfiguration();
 conf.setBoolean(YarnConfiguration.AM_BLACKLISTING_ENABLED, true);
-testAMBlacklistPreventRestartOnSameNode(conf);
+testAMBlacklistPreventRestartOnSameNode(false, conf);
   }
 
   @Test(timeout = 10)
@@ -393,11 +393,28 @@ public class TestAMRestart {
 conf.setBoolean(YarnConfiguration.AM_BLACKLISTING_ENABLED, true);
 conf.setBoolean(YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME,
 true);
-testAMBlacklistPreventRestartOnSameNode(conf);
+testAMBlacklistPreventRestartOnSameNode(false, conf);
   }
 
-  pr

[29/50] hadoop git commit: YARN-4296. DistributedShell Log.info is not friendly. (Xiaowei Wang via stevel)

2015-10-27 Thread jing9
YARN-4296. DistributedShell Log.info is not friendly. (Xiaowei Wang via stevel)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/446212a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/446212a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/446212a3

Branch: refs/heads/HDFS-8966
Commit: 446212a39ecf295cdd0e0b06ab870b48c5e6b500
Parents: 14f8dd0
Author: Steve Loughran 
Authored: Sat Oct 24 12:54:48 2015 +0100
Committer: Steve Loughran 
Committed: Sat Oct 24 12:55:36 2015 +0100

--
 hadoop-yarn-project/CHANGES.txt| 3 +++
 .../hadoop/yarn/applications/distributedshell/Client.java  | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/446212a3/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f3dc16c..0641091 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -534,6 +534,9 @@ Release 2.8.0 - UNRELEASED
 YARN-2913. Fair scheduler should have ability to set MaxResourceDefault for
 each queue. (Siqi Li via mingma)
 
+YARN-4296. DistributedShell Log.info is not friendly.
+(Xiaowei Wang via stevel)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/446212a3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
index 5a90880..68d2bde 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
@@ -453,9 +453,9 @@ public class Client {
 for (NodeReport node : clusterNodeReports) {
   LOG.info("Got node report from ASM for"
   + ", nodeId=" + node.getNodeId() 
-  + ", nodeAddress" + node.getHttpAddress()
-  + ", nodeRackName" + node.getRackName()
-  + ", nodeNumContainers" + node.getNumContainers());
+  + ", nodeAddress=" + node.getHttpAddress()
+  + ", nodeRackName=" + node.getRackName()
+  + ", nodeNumContainers=" + node.getNumContainers());
 }
 
 QueueInfo queueInfo = yarnClient.getQueueInfo(this.amQueue);



[21/50] hadoop git commit: YARN-4041. Slow delegation token renewal can severely prolong RM recovery. Contributed by Sunil G

2015-10-27 Thread jing9
YARN-4041. Slow delegation token renewal can severely prolong RM recovery. 
Contributed by Sunil G


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d3a34a4f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d3a34a4f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d3a34a4f

Branch: refs/heads/HDFS-8966
Commit: d3a34a4f388155f6a7ef040e244ce7be788cd28b
Parents: 533a2be
Author: Jason Lowe 
Authored: Fri Oct 23 20:57:01 2015 +
Committer: Jason Lowe 
Committed: Fri Oct 23 20:57:01 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../server/resourcemanager/rmapp/RMAppImpl.java | 14 ++--
 .../security/DelegationTokenRenewer.java| 69 ++--
 .../server/resourcemanager/TestRMRestart.java   | 24 +++
 4 files changed, 86 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3a34a4f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 125ff94..7e30ac9 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1076,6 +1076,9 @@ Release 2.7.2 - UNRELEASED
 YARN-4209. RMStateStore FENCED state doesn’t work due to 
updateFencedState called 
 by stateMachine.doTransition. (Zhihai Xu via rohithsharmaks)
 
+YARN-4041. Slow delegation token renewal can severely prolong RM recovery
+(Sunil G via jlowe)
+
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3a34a4f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
index 43a3a51..41254d8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
@@ -946,14 +946,16 @@ public class RMAppImpl implements RMApp, Recoverable {
   }
 
   if (UserGroupInformation.isSecurityEnabled()) {
-// synchronously renew delegation token on recovery.
+// asynchronously renew delegation token on recovery.
 try {
-  app.rmContext.getDelegationTokenRenewer().addApplicationSync(
-app.getApplicationId(), app.parseCredentials(),
-app.submissionContext.getCancelTokensWhenComplete(), 
app.getUser());
+  app.rmContext.getDelegationTokenRenewer()
+  .addApplicationAsyncDuringRecovery(app.getApplicationId(),
+  app.parseCredentials(),
+  app.submissionContext.getCancelTokensWhenComplete(),
+  app.getUser());
 } catch (Exception e) {
-  String msg = "Failed to renew token for " + app.applicationId
-  + " on recovery : " + e.getMessage();
+  String msg = "Failed to fetch user credentials from application:"
+  + e.getMessage();
   app.diagnostics.append(msg);
   LOG.error(msg, e);
 }
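
Switching from addApplicationSync to addApplicationAsyncDuringRecovery takes token renewal off the recovery critical path: the RM still parses credentials inline, but the renewal RPCs are queued for the renewer's own threads, so one slow token service can no longer stall recovery of every application. A hedged shape of that hand-off (hypothetical names, not the actual DelegationTokenRenewer internals):

    // Queue the renewal and return immediately; failures surface later
    // through the app's diagnostics instead of blocking recovery.
    renewalExecutor.submit(() -> renewTokens(appId, credentials));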

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3a34a4f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
index 426e460..cca14e9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/Delegati

[39/50] hadoop git commit: YARN-4285. Display resource usage as percentage of queue and cluster in the RM UI (Varun Vasudev via wangda)

2015-10-27 Thread jing9
YARN-4285. Display resource usage as percentage of queue and cluster in the RM 
UI (Varun Vasudev via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3cc73773
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3cc73773
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3cc73773

Branch: refs/heads/HDFS-8966
Commit: 3cc73773eb26f7469c99b25a76814d6fad0be28e
Parents: 33a03af
Author: Wangda Tan 
Authored: Mon Oct 26 13:06:08 2015 -0700
Committer: Wangda Tan 
Committed: Mon Oct 26 13:07:39 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../records/ApplicationResourceUsageReport.java | 38 +++-
 .../src/main/proto/yarn_protos.proto|  2 +
 .../hadoop/yarn/client/cli/TestYarnCLI.java |  2 +-
 .../ApplicationResourceUsageReportPBImpl.java   | 24 
 ...pplicationHistoryManagerOnTimelineStore.java |  5 +-
 .../hadoop/yarn/server/webapp/WebPageUtils.java |  2 +-
 .../scheduler/SchedulerApplicationAttempt.java  | 10 ++-
 .../scheduler/capacity/AbstractCSQueue.java |  4 +-
 .../scheduler/capacity/LeafQueue.java   |  2 +-
 .../resourcemanager/webapp/RMAppsBlock.java | 40 
 .../resourcemanager/webapp/dao/AppInfo.java |  6 +-
 .../applicationsmanager/MockAsm.java|  2 +-
 .../TestSchedulerApplicationAttempt.java| 64 +---
 .../webapp/TestRMWebServicesApps.java   |  9 ++-
 15 files changed, 182 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3cc73773/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 8cc1bbd..8a2cfc8 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -543,6 +543,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3738. Add support for recovery of reserved apps running under dynamic
 queues (subru via asuresh)
 
+YARN-4285. Display resource usage as percentage of queue and cluster in the
+RM UI (Varun Vasudev via wangda)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3cc73773/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationResourceUsageReport.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationResourceUsageReport.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationResourceUsageReport.java
index b20d832..34efee8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationResourceUsageReport.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationResourceUsageReport.java
@@ -36,7 +36,7 @@ public abstract class ApplicationResourceUsageReport {
   public static ApplicationResourceUsageReport newInstance(
   int numUsedContainers, int numReservedContainers, Resource usedResources,
   Resource reservedResources, Resource neededResources, long memorySeconds,
-  long vcoreSeconds) {
+  long vcoreSeconds, float queueUsagePerc, float clusterUsagePerc) {
 ApplicationResourceUsageReport report =
 Records.newRecord(ApplicationResourceUsageReport.class);
 report.setNumUsedContainers(numUsedContainers);
@@ -46,6 +46,8 @@ public abstract class ApplicationResourceUsageReport {
 report.setNeededResources(neededResources);
 report.setMemorySeconds(memorySeconds);
 report.setVcoreSeconds(vcoreSeconds);
+report.setQueueUsagePercentage(queueUsagePerc);
+report.setClusterUsagePercentage(clusterUsagePerc);
 return report;
   }
 
@@ -152,4 +154,38 @@ public abstract class ApplicationResourceUsageReport {
   @Public
   @Unstable
   public abstract long getVcoreSeconds();
+
+  /**
+   * Get the percentage of resources of the queue that the app is using.
+   * @return the percentage of resources of the queue that the app is using.
+   */
+  @Public
+  @Stable
+  public abstract float getQueueUsagePercentage();
+
+  /**
+   * Set the percentage of resources of the queue that the app is using.
+   * @param queueUsagePerc the percentage of resources of the queue that
+   *   the app is using.
+   */
+  @Private
+  @Unstable
+  public abstract void setQueueUsagePercentage(float queueUsagePerc);
+
+  /**
+   * Get the percentage of resources of the cluster that the app is using.
+   * @return the percentage of r
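
The new queueUsagePerc/clusterUsagePerc fields reduce to simple ratios of the
app's consumption against queue and cluster capacity. A self-contained
illustration of the arithmetic, simplified to memory alone and using made-up
numbers (the real computation, per the SchedulerApplicationAttempt change in
the file list, is more involved):

public class UsagePercentages {
  static float percentOf(long used, long capacity) {
    return capacity > 0 ? used * 100.0f / capacity : 0.0f;
  }

  public static void main(String[] args) {
    long clusterMemMb = 100 * 1024;   // 100 GB cluster
    float queueCapacity = 0.25f;      // queue holds 25% of the cluster
    long appUsedMb = 8 * 1024;        // app currently holds 8 GB

    float clusterUsagePerc = percentOf(appUsedMb, clusterMemMb);
    float queueUsagePerc =
        percentOf(appUsedMb, (long) (clusterMemMb * queueCapacity));
    // Prints: cluster: 8.0%, queue: 32.0%
    System.out.printf("cluster: %.1f%%, queue: %.1f%%%n",
        clusterUsagePerc, queueUsagePerc);
  }
}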

[09/50] hadoop git commit: YARN-4256. YARN fair scheduler vcores with decimal values. Contributed by Jun Gong

2015-10-27 Thread jing9
YARN-4256. YARN fair scheduler vcores with decimal values. Contributed by Jun 
Gong


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/960201b7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/960201b7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/960201b7

Branch: refs/heads/HDFS-8966
Commit: 960201b79b9f2ca40f8eadb21a2f9fe37dde2b5d
Parents: 47641fc
Author: Zhihai Xu 
Authored: Thu Oct 22 12:27:48 2015 -0700
Committer: Zhihai Xu 
Committed: Thu Oct 22 12:28:03 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt  | 2 ++
 .../scheduler/fair/FairSchedulerConfiguration.java   | 2 +-
 .../scheduler/fair/TestFairSchedulerConfiguration.java   | 4 
 3 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/960201b7/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 79df1ce..024255c 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -975,6 +975,8 @@ Release 2.8.0 - UNRELEASED
YARN-4270. Limit application resource reservation on nodes for non-node/rack
specific requests (asuresh)
 
+YARN-4256. YARN fair scheduler vcores with decimal values. (Jun Gong via 
zxu)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960201b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
index 74af70f..5dfee95 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
@@ -283,7 +283,7 @@ public class FairSchedulerConfiguration extends 
Configuration {
   
   private static int findResource(String val, String units)
 throws AllocationConfigurationException {
-Pattern pattern = Pattern.compile("(\\d+)\\s*" + units);
+Pattern pattern = Pattern.compile("(\\d+)(\\.\\d*)?\\s*" + units);
 Matcher matcher = pattern.matcher(val);
 if (!matcher.find()) {
   throw new AllocationConfigurationException("Missing resource: " + units);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/960201b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java
index 82b50a6..8e7b666 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java
@@ -45,6 +45,10 @@ public class TestFairSchedulerConfiguration {
 parseResourceConfigValue("1024 Mb, 2 vCores"));
 assertEquals(BuilderUtils.newResource(1024, 2),
 parseResourceConfigValue("  1024 mb, 2 vcores  "));
+assertEquals(BuilderUtils.newResource(1024, 2),
+parseResourceConfigValue("  1024.3 mb, 2.35 vcores  "));
+assertEquals(BuilderUtils.newResource(1024, 2),
+parseResourceConfigValue("  1024. mb, 2. vcores  "));
   }
   
   @Test(expected = AllocationConfigurationException.class)
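
The relaxed pattern accepts an optional fractional part, but as the new test
cases show the value is still truncated to its integer component. A standalone
re-creation of that behavior (IllegalArgumentException stands in for
AllocationConfigurationException, and findResource is reconstructed from the
hunk above):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VcoreParseDemo {
  static int findResource(String val, String units) {
    Pattern pattern = Pattern.compile("(\\d+)(\\.\\d*)?\\s*" + units);
    Matcher matcher = pattern.matcher(val);
    if (!matcher.find()) {
      throw new IllegalArgumentException("Missing resource: " + units);
    }
    return Integer.parseInt(matcher.group(1)); // fraction matched, discarded
  }

  public static void main(String[] args) {
    System.out.println(findResource("  1024.3 mb ", "mb"));    // 1024
    System.out.println(findResource("2.35 vcores", "vcores")); // 2
    System.out.println(findResource("1024. mb", "mb"));        // 1024
  }
}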

[12/50] hadoop git commit: Revert HADOOP-9692.

2015-10-27 Thread jing9
Revert HADOOP-9692.

This reverts commit 381610da620121c02073dbbaac669b80b41959b4.
This reverts commit b5ca649bff01c906033d71c9f983b4cdaa71a9d1.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/039a1f9e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/039a1f9e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/039a1f9e

Branch: refs/heads/HDFS-8966
Commit: 039a1f9e968690cb66af224858e6e64b4f0b2926
Parents: cb282d5
Author: Tsuyoshi Ozawa 
Authored: Fri Oct 23 06:43:15 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Fri Oct 23 06:48:26 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ---
 .../java/org/apache/hadoop/io/SequenceFile.java | 15 +++---
 .../org/apache/hadoop/io/TestSequenceFile.java  | 21 
 3 files changed, 3 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/039a1f9e/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 87ba2ba..0d3daa2 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -902,9 +902,6 @@ Release 2.8.0 - UNRELEASED
 
 HADOOP-10406. TestIPC.testIpcWithReaderQueuing may fail. (Xiao Chen via 
wang)
 
-HADOOP-9692. Improving log message when SequenceFile reader throws
-EOFException on zero-length file. Contributed by Zhe Zhang and Chu Tong.
-
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/039a1f9e/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
index ed57eee..e37e855 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
@@ -1912,26 +1912,17 @@ public class SequenceFile {
  */
 private void init(boolean tempReader) throws IOException {
   byte[] versionBlock = new byte[VERSION.length];
-  String exceptionMsg = this + " not a SequenceFile";
-
-  // Try to read sequence file header.
-  try {
-in.readFully(versionBlock);
-  } catch (EOFException e) {
-throw new EOFException(exceptionMsg);
-  }
+  in.readFully(versionBlock);
 
   if ((versionBlock[0] != VERSION[0]) ||
   (versionBlock[1] != VERSION[1]) ||
-  (versionBlock[2] != VERSION[2])) {
+  (versionBlock[2] != VERSION[2]))
 throw new IOException(this + " not a SequenceFile");
-  }
 
   // Set 'version'
   version = versionBlock[3];
-  if (version > VERSION[3]) {
+  if (version > VERSION[3])
 throw new VersionMismatchException(VERSION[3], version);
-  }
 
   if (version < BLOCK_COMPRESS_VERSION) {
 UTF8 className = new UTF8();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/039a1f9e/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
index 99c97db..7495c6e 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
@@ -522,27 +522,6 @@ public class TestSequenceFile extends TestCase {
 assertTrue("InputStream for " + path + " should have been closed.", 
openedFile[0].isClosed());
   }
 
-  /**
-   * Test to makes sure zero length sequence file is handled properly while
-   * initializing.
-   */
-  public void testInitZeroLengthSequenceFile() throws IOException {
-Configuration conf = new Configuration();
-LocalFileSystem fs = FileSystem.getLocal(conf);
-
-// create an empty file (which is not a valid sequence file)
-Path path = new Path(System.getProperty("test.build.data", ".") +
-  "/zerolength.seq");
-fs.create(path).close();
-
-try {
-  new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
-  fail("IOException expected.");
-} catch (IOException expected) {
-  assertTrue(expected instanceof EO
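
With the revert, opening a zero-length file once again surfaces a bare
EOFException from init() instead of the friendlier "not a SequenceFile"
message. A small sketch of how a caller can probe a possibly-empty file,
using only the Reader API already visible in the hunks above:

import java.io.EOFException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class ProbeSequenceFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(args[0]);
    try {
      SequenceFile.Reader reader =
          new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
      reader.close();
      System.out.println(path + " is a readable SequenceFile");
    } catch (EOFException e) {
      // Zero-length files fail in readFully() before the header check.
      System.out.println(path + " is empty or truncated: " + e);
    }
  }
}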

[14/50] hadoop git commit: HDFS-9286. HttpFs does not parse ACL syntax correctly for operation REMOVEACLENTRIES. Contributed by Wei-Chiu Chuang.

2015-10-27 Thread jing9
HDFS-9286. HttpFs does not parse ACL syntax correctly for operation 
REMOVEACLENTRIES. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/124a412a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/124a412a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/124a412a

Branch: refs/heads/HDFS-8966
Commit: 124a412a3711bd42eaeebe531376004c739a15d6
Parents: 513ec3d
Author: cnauroth 
Authored: Thu Oct 22 15:25:10 2015 -0700
Committer: cnauroth 
Committed: Thu Oct 22 15:25:10 2015 -0700

--
 .../java/org/apache/hadoop/fs/http/server/FSOperations.java | 2 +-
 .../org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java| 3 ++-
 .../java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java | 3 ++-
 .../apache/hadoop/fs/http/server/TestHttpFSServerNoACLs.java| 5 +++--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 5 files changed, 11 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/124a412a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 11cdb4d..57bf025 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -1025,7 +1025,7 @@ public class FSOperations {
  */
 public FSRemoveAclEntries(String path, String aclSpec) {
   this.path = new Path(path);
-  this.aclEntries = AclEntry.parseAclSpec(aclSpec, true);
+  this.aclEntries = AclEntry.parseAclSpec(aclSpec, false);
 }
 
 /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/124a412a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
index 0e082cc..575a477 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
@@ -738,6 +738,7 @@ public abstract class BaseTestHttpFSWith extends 
HFSTestCase {
 }
 
 final String aclUser1 = "user:foo:rw-";
+final String rmAclUser1 = "user:foo:";
 final String aclUser2 = "user:bar:r--";
 final String aclGroup1 = "group::r--";
 final String aclSet = "user::rwx," + aclUser1 + ","
@@ -765,7 +766,7 @@ public abstract class BaseTestHttpFSWith extends 
HFSTestCase {
 httpfsAclStat = httpfs.getAclStatus(path);
 assertSameAcls(httpfsAclStat, proxyAclStat);
 
-httpfs.removeAclEntries(path, AclEntry.parseAclSpec(aclUser1, true));
+httpfs.removeAclEntries(path, AclEntry.parseAclSpec(rmAclUser1, false));
 proxyAclStat = proxyFs.getAclStatus(path);
 httpfsAclStat = httpfs.getAclStatus(path);
 assertSameAcls(httpfsAclStat, proxyAclStat);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/124a412a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
index 14b7a43..c6a7a9d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
@@ -501,12 +501,13 @@ public class TestHttpFSServer extends HFSTestCase {
   @TestHdfs
   public void testFileAcls() throws Exception {
 final String aclUser1 = "user:foo:rw-";
+final String remAclUser1 = "user:foo:";
 final String aclUser2 = "user:bar:r--";
 final String aclGroup1 = "group::r--";
 final String aclSpec = "aclspec=user::rwx," + aclUser1 + ","
 + aclGroup1 + ",other::---";
 final String modAclSpec = "aclspec=" + aclUser2;
-final String remAclSpec = "aclspec=" + aclUser1;
+ 
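
The fix hinges on AclEntry.parseAclSpec's boolean flag: a removal spec names
entries without a permission field (note the trailing ':'), so it must be
parsed with includePermission=false. A minimal demo of the two spec shapes
used in the tests:

import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;

public class AclSpecDemo {
  public static void main(String[] args) {
    // Full entry, e.g. for setAcl/modifyAclEntries: permissions included.
    List<AclEntry> modify = AclEntry.parseAclSpec("user:foo:rw-", true);
    // Name-only entry, e.g. for removeAclEntries: no permission field.
    List<AclEntry> remove = AclEntry.parseAclSpec("user:foo:", false);
    System.out.println("modify=" + modify + " remove=" + remove);
  }
}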

[08/50] hadoop git commit: HADOOP-12334. Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage Throttling after retries. Contributed by Gaurav Kanade.

2015-10-27 Thread jing9
HADOOP-12334. Change Mode Of Copy Operation of HBase WAL Archiving to bypass 
Azure Storage Throttling after retries. Contributed by Gaurav Kanade.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/47641fcb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/47641fcb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/47641fcb

Branch: refs/heads/HDFS-8966
Commit: 47641fcbc9c41f4a338d8899501e4a310d2e81ad
Parents: aea26bf
Author: cnauroth 
Authored: Thu Oct 22 12:21:32 2015 -0700
Committer: cnauroth 
Committed: Thu Oct 22 12:21:32 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../fs/azure/AzureNativeFileSystemStore.java| 47 +---
 2 files changed, 43 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/47641fcb/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index ce2a2a7..74c62cb 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1335,6 +1335,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently.
 (kihwal)
 
+HADOOP-12334. Change Mode Of Copy Operation of HBase WAL Archiving to bypass
+Azure Storage Throttling after retries. (Gaurav Kanade via cnauroth)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47641fcb/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
index 679413a..8a33742 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
@@ -60,6 +60,7 @@ import org.apache.hadoop.fs.azure.metrics.ErrorMetricUpdater;
 import org.apache.hadoop.fs.azure.metrics.ResponseReceivedMetricUpdater;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.permission.PermissionStatus;
+import org.apache.hadoop.io.IOUtils;
 import org.mortbay.util.ajax.JSON;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -76,6 +77,7 @@ import com.microsoft.azure.storage.StorageException;
 import com.microsoft.azure.storage.blob.BlobListingDetails;
 import com.microsoft.azure.storage.blob.BlobProperties;
 import com.microsoft.azure.storage.blob.BlobRequestOptions;
+import com.microsoft.azure.storage.blob.BlobType;
 import com.microsoft.azure.storage.blob.CloudBlob;
 import com.microsoft.azure.storage.blob.CopyStatus;
 import com.microsoft.azure.storage.blob.DeleteSnapshotsOption;
@@ -2373,6 +2375,9 @@ public class AzureNativeFileSystemStore implements 
NativeFileSystemStore {
   throw new IOException("Cannot acquire new lease if one already exists.");
 }
 
+CloudBlobWrapper srcBlob = null;
+CloudBlobWrapper dstBlob = null;
+SelfRenewingLease lease = null;
 try {
   // Attempts rename may occur before opening any streams so first,
   // check if a session exists, if not create a session with the Azure
@@ -2388,8 +2393,8 @@ public class AzureNativeFileSystemStore implements 
NativeFileSystemStore {
   // Get the source blob and assert its existence. If the source key
   // needs to be normalized then normalize it.
   //
-  CloudBlobWrapper srcBlob = getBlobReference(srcKey);
 
+  srcBlob = getBlobReference(srcKey);
   if (!srcBlob.exists(getInstrumentedContext())) {
 throw new AzureException ("Source blob " + srcKey +
 " does not exist.");
@@ -2406,7 +2411,6 @@ public class AzureNativeFileSystemStore implements 
NativeFileSystemStore {
* when HBase runs on HDFS, where the region server recovers the lease
* on a log file, to gain exclusive access to it, before it splits it.
*/
-  SelfRenewingLease lease = null;
   if (acquireLease) {
 lease = srcBlob.acquireLease();
   } else if (existingLease != null) {
@@ -2416,7 +2420,7 @@ public class AzureNativeFileSystemStore implements 
NativeFileSystemStore {
   // Get the destination blob. The destination key always needs to be
   // normalized.
   //
-  CloudBlobWrapper dstBlob = getBlobReference(dstKey);
+  dstBlob = getBlobReference(dstKey);
 
   // Rename the sourc
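
The visible part of this refactor hoists srcBlob, dstBlob and the lease out of
the try block so cleanup code can still reach them on a failure path. A
generic, self-contained sketch of that pattern (the Lease class below is a
stand-in, not the Azure SelfRenewingLease):

public class HoistForCleanup {
  static class Lease {
    void free() { System.out.println("lease released"); }
  }

  public static void main(String[] args) {
    Lease lease = null;            // declared before the try, not inside it
    try {
      lease = new Lease();
      // ... copy/rename work that may throw ...
    } finally {
      if (lease != null) {
        lease.free();              // reachable even when the try block fails
      }
    }
  }
}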

[04/50] hadoop git commit: HADOOP-9692. Improving log message when SequenceFile reader throws EOFException on zero-length file. Contributed by Zhe Zhang and Chu Tong.

2015-10-27 Thread jing9
HADOOP-9692. Improving log message when SequenceFile reader throws EOFException 
on zero-length file. Contributed by Zhe Zhang and Chu Tong.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/381610da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/381610da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/381610da

Branch: refs/heads/HDFS-8966
Commit: 381610da620121c02073dbbaac669b80b41959b4
Parents: b5ca649
Author: Tsuyoshi Ozawa 
Authored: Thu Oct 22 11:59:12 2015 +0900
Committer: Tsuyoshi Ozawa 
Committed: Thu Oct 22 11:59:12 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/381610da/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 5acf369..aebd4f3 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -899,6 +899,9 @@ Release 2.8.0 - UNRELEASED
 
 HADOOP-10406. TestIPC.testIpcWithReaderQueuing may fail. (Xiao Chen via 
wang)
 
+HADOOP-9692. Improving log message when SequenceFile reader throws
+EOFException on zero-length file. Contributed by Zhe Zhang and Chu Tong.
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp
@@ -1251,9 +1254,6 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12483. Maintain wrapped SASL ordering for postponed IPC responses.
 (Daryn Sharp via yliu)
 
-HADOOP-9692. SequenceFile reader throws EOFException on zero-length file.
-(Zhe Zhang and Chu Tong via ozawa)
-
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()



hadoop git commit: HDFS-8950. NameNode refresh doesn't remove DataNodes that are no longer in the allowed list. Contributed by Daniel Templeton.

2015-10-27 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 1d23e1ec0 -> 59a207213


HDFS-8950. NameNode refresh doesn't remove DataNodes that are no longer in the
allowed list. Contributed by Daniel Templeton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/59a20721
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/59a20721
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/59a20721

Branch: refs/heads/branch-2.7
Commit: 59a207213597f1dc94afc9d22e693165d6fd2792
Parents: 1d23e1e
Author: Kihwal Lee 
Authored: Tue Oct 27 16:42:33 2015 -0500
Committer: Kihwal Lee 
Committed: Tue Oct 27 16:42:33 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/blockmanagement/DatanodeManager.java |   9 +-
 .../server/blockmanagement/HostFileManager.java |  19 
 .../apache/hadoop/hdfs/TestDecommission.java|  15 +--
 .../blockmanagement/TestDatanodeManager.java| 110 ++-
 .../blockmanagement/TestHostFileManager.java|   7 +-
 6 files changed, 146 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/59a20721/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7c09896..9b28d3b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -47,6 +47,9 @@ Release 2.7.2 - UNRELEASED
HDFS-8879. Quota by storage type usage incorrectly initialized upon namenode
restart. (xyao)
 
+HDFS-8950. NameNode refresh doesn't remove DataNodes that are no longer in
+the allowed list (Daniel Templeton)
+
 HDFS-8995. Flaw in registration bookeeping can make DN die on reconnect.
 (Kihwal Lee via yliu)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/59a20721/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index d7e0721..0cf1eee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -1272,11 +1272,14 @@ public class DatanodeManager {
   for (DatanodeDescriptor dn : datanodeMap.values()) {
 final boolean isDead = isDatanodeDead(dn);
 final boolean isDecommissioning = dn.isDecommissionInProgress();
-if ((listLiveNodes && !isDead) ||
+
+if (((listLiveNodes && !isDead) ||
 (listDeadNodes && isDead) ||
-(listDecommissioningNodes && isDecommissioning)) {
-nodes.add(dn);
+(listDecommissioningNodes && isDecommissioning)) &&
+hostFileManager.isIncluded(dn)) {
+  nodes.add(dn);
 }
+
 foundNodes.add(HostFileManager.resolvedAddressFromDatanodeID(dn));
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/59a20721/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
index 0b8d6c5..e05ef9a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
@@ -126,9 +126,28 @@ class HostFileManager {
 return !includes.isEmpty();
   }
 
+  /**
+   * Read the includes and excludes lists from the named files.  Any previous
+   * includes and excludes lists are discarded.
+   * @param includeFile the path to the new includes list
+   * @param excludeFile the path to the new excludes list
+   * @throws IOException thrown if there is a problem reading one of the files
+   */
   void refresh(String includeFile, String excludeFile) throws IOException {
 HostSet newIncludes = readFile("included", includeFile);
 HostSet newExcludes = readFile("excluded", excludeFile);
+
+refresh(newIncludes, newExcludes);
+  }
+
+  /**
+   * Set the includes and excludes
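
The DatanodeManager hunk above adds hostFileManager.isIncluded(dn) to the
report filter, so a node dropped from the includes file vanishes from the
node list after refresh(). A toy model of that filtering (plain strings stand
in for DatanodeDescriptor; an empty includes set is treated as "include
everything", consistent with hasIncludes() earlier in this file):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class IncludeFilterSketch {
  public static void main(String[] args) {
    // After refresh: dn2 has been removed from the includes file.
    Set<String> includes = new HashSet<>(Arrays.asList("dn1:50010"));
    List<String> registered = Arrays.asList("dn1:50010", "dn2:50010");

    List<String> report = new ArrayList<>();
    for (String dn : registered) {
      boolean matchesFilter = true;  // stand-in for the live/dead checks
      if (matchesFilter && (includes.isEmpty() || includes.contains(dn))) {
        report.add(dn);
      }
    }
    System.out.println(report);      // [dn1:50010]
  }
}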

hadoop git commit: MAPREDUCE-6435. MapReduce client assumes the world is x86 (Alan Burlison via aw)

2015-10-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5c24fe7f9 -> 68ce93c32


MAPREDUCE-6435. MapReduce client assumes the world is x86 (Alan Burlison via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/68ce93c3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/68ce93c3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/68ce93c3

Branch: refs/heads/trunk
Commit: 68ce93c32e1cf4344929b26597780ec387efa107
Parents: 5c24fe7
Author: Allen Wittenauer 
Authored: Tue Oct 27 12:29:41 2015 -0700
Committer: Allen Wittenauer 
Committed: Tue Oct 27 12:29:41 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt |  3 +++
 .../src/main/native/src/util/Checksum.cc | 15 ++-
 2 files changed, 9 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/68ce93c3/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 49244d3..51e2cf7 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -217,6 +217,9 @@ Trunk (Unreleased)
 MAPREDUCE-6416. Not all platforms have d_type in struct dirent
 (Alan Burlison via aw)
 
+MAPREDUCE-6435. MapReduce client assumes the world is x86
+(Alan Burlison via aw)
+
   BREAKDOWN OF MAPREDUCE-2841 (NATIVE TASK) SUBTASKS
 
 MAPREDUCE-5985. native-task: Fix build on macosx. Contributed by

http://git-wip-us.apache.org/repos/asf/hadoop/blob/68ce93c3/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc
index be800c5..f427350 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc
@@ -579,16 +579,13 @@ const uint32_t CRC32C_T8_7[256] = {0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769,
 0xCF56CE31, 0x14124958, 0x5D2E347F, 0xE54C35A1, 0xAC704886, 0x7734CFEF, 
0x3E08B2C8, 0xC451B7CC,
 0x8D6DCAEB, 0x56294D82, 0x1F1530A5};
 
-#ifdef __aarch64__
-// Awaiting HW implementation
-#define SOFTWARE_CRC
-#endif
 
-#ifndef SOFTWARE_CRC
-#define USE_HARDWARE_CRC32C 1
+/* Use CRC32 intrinsics on x86 */
+#if defined(__x86_64__) || defined(_M_X64) || defined(__i386) || 
defined(_M_IX86)
+#define USE_X86_CRC32
 #endif
 
-#ifdef USE_HARDWARE_CRC32C
+#ifdef USE_X86_CRC32
 
 static int cached_cpu_supports_crc32; // initialized by constructor below
 static uint32_t crc32c_hardware(uint32_t crc, const uint8_t* data, size_t 
length);
@@ -644,7 +641,7 @@ inline uint32_t _mm_crc32_u8(uint32_t crc, uint8_t value) {
 }
 
 /**
- * Hardware-accelerated CRC32C calculation using the 64-bit instructions.
+ * Hardware-accelerated x86 CRC32C calculation using the 64-bit instructions.
  */
 static uint32_t crc32c_hardware(uint32_t crc, const uint8_t* p_buf, size_t 
length) {
   // start directly at p_buf, even if it's an unaligned address. According
@@ -739,7 +736,7 @@ uint32_t crc32c_sb8_software(uint32_t crc, const uint8_t 
*buf, size_t length) {
 #endif
 
 uint32_t crc32c_sb8(uint32_t crc, const uint8_t *buf, size_t length) {
-#ifdef USE_HARDWARE_CRC32C
+#ifdef USE_X86_CRC32
   if (likely(cached_cpu_supports_crc32)) {
 return crc32c_hardware(crc, buf, length);
   } else {



hadoop git commit: MAPREDUCE-6416. Not all platforms have d_type in struct dirent (Alan Burlison via aw)

2015-10-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk ab99d953e -> 5c24fe7f9


MAPREDUCE-6416. Not all platforms have d_type in struct dirent (Alan Burlison 
via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5c24fe7f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5c24fe7f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5c24fe7f

Branch: refs/heads/trunk
Commit: 5c24fe7f91970dedae35906ff7990ea5410f070e
Parents: ab99d95
Author: Allen Wittenauer 
Authored: Tue Oct 27 12:22:08 2015 -0700
Committer: Allen Wittenauer 
Committed: Tue Oct 27 12:22:08 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt| 3 +++
 .../src/main/native/src/lib/FileSystem.cc   | 9 -
 2 files changed, 11 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c24fe7f/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 4b5413f..49244d3 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -214,6 +214,9 @@ Trunk (Unreleased)
 MAPREDUCE-6412. Make hadoop-mapreduce-client Native code -Wall-clean
 (Alan Burlison via aw)
 
+MAPREDUCE-6416. Not all platforms have d_type in struct dirent
+(Alan Burlison via aw)
+
   BREAKDOWN OF MAPREDUCE-2841 (NATIVE TASK) SUBTASKS
 
 MAPREDUCE-5985. native-task: Fix build on macosx. Contributed by

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c24fe7f/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/FileSystem.cc
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/FileSystem.cc
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/FileSystem.cc
index e0a698c..2fb78a8 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/FileSystem.cc
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/FileSystem.cc
@@ -167,10 +167,17 @@ class RawFileSystem : public FileSystem {
 FileEntry temp;
 while ((dirp = readdir(dp)) != NULL) {
   temp.name = dirp->d_name;
-  temp.isDirectory = dirp->d_type & DT_DIR;
   if (temp.name == "." || temp.name == "..") {
 continue;
   }
+/* Use Linux d_type if available, otherwise stat(2) the path */
+#ifdef DT_DIR
+  temp.isDirectory = dirp->d_type & DT_DIR;
+#else
+  const string p = path + "/" + temp.name;
+  struct stat sb;
  temp.isDirectory = stat(p.c_str(), &sb) == 0 && S_ISDIR(sb.st_mode);
+#endif
   status.push_back(temp);
 }
 closedir(dp);



hadoop git commit: MAPREDUCE-6412. Make hadoop-mapreduce-client Native code -Wall-clean (Alan Burlison via aw)

2015-10-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0e344902a -> ab99d953e


MAPREDUCE-6412. Make hadoop-mapreduce-client Native code -Wall-clean (Alan 
Burlison via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ab99d953
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ab99d953
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ab99d953

Branch: refs/heads/trunk
Commit: ab99d953e464f240691b6eb45c28f2f4db27d8f0
Parents: 0e34490
Author: Allen Wittenauer 
Authored: Tue Oct 27 12:15:11 2015 -0700
Committer: Allen Wittenauer 
Committed: Tue Oct 27 12:15:11 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +++
 .../src/main/native/src/lib/Buffers.h   | 21 ++--
 .../src/main/native/src/lib/MemoryBlock.cc  |  4 ++--
 .../src/main/native/src/lib/MemoryBlock.h   |  2 +-
 .../src/main/native/src/util/Random.cc  |  4 ++--
 .../src/main/native/src/util/WritableUtils.cc   |  3 ++-
 .../src/main/native/test/TestPrimitives.cc  |  2 +-
 7 files changed, 21 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab99d953/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index f09a353..4b5413f 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -211,6 +211,9 @@ Trunk (Unreleased)
 MAPREDUCE-6391. util/Timer.cc completely misunderstands _POSIX_CPUTIME
 (Alan Burlison via aw)
 
+MAPREDUCE-6412. Make hadoop-mapreduce-client Native code -Wall-clean
+(Alan Burlison via aw)
+
   BREAKDOWN OF MAPREDUCE-2841 (NATIVE TASK) SUBTASKS
 
 MAPREDUCE-5985. native-task: Fix build on macosx. Contributed by

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab99d953/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Buffers.h
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Buffers.h
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Buffers.h
index 13cd545..4929426 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Buffers.h
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Buffers.h
@@ -349,31 +349,31 @@ public:
 return this->_capacity;
   }
 
-  int remain() {
+  uint32_t remain() {
 return _limit - _position;
   }
 
-  int limit() {
+  uint32_t limit() {
 return _limit;
   }
 
-  int advance(int positionOffset) {
+  uint32_t advance(int positionOffset) {
 _position += positionOffset;
 return _position;
   }
 
-  int position() {
+  uint32_t position() {
 return this->_position;
   }
 
-  void position(int newPos) {
+  void position(uint32_t newPos) {
 this->_position = newPos;
   }
 
-  void rewind(int newPos, int newLimit) {
+  void rewind(uint32_t newPos, uint32_t newLimit) {
 this->_position = newPos;
-if (newLimit < 0 || newLimit > this->_capacity) {
-  THROW_EXCEPTION(IOException, "length smaller than zero or larger than 
input buffer capacity");
+if (newLimit > this->_capacity) {
+  THROW_EXCEPTION(IOException, "length larger than input buffer capacity");
 }
 this->_limit = newLimit;
   }
@@ -474,11 +474,10 @@ public:
* return the length of actually filled data.
*/
   uint32_t fill(const char * source, uint32_t maxSize) {
-int remain = _size - _pos;
-if (remain <= 0) {
+if (_pos > _size) {
   return 0;
 }
-
+uint32_t remain = _size - _pos;
 uint32_t length = (maxSize < remain) ? maxSize : remain;
 simple_memcpy(_buff + _pos, source, length);
 _pos += length;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab99d953/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/MemoryBlock.cc
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/MemoryBlock.cc
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/MemoryBlock.cc
index b3734a1..cd6872a 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/MemoryBlock.cc
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask

hadoop git commit: fix changes.txt for MAPREDUCE-6391.

2015-10-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk bcc4c746a -> 0e344902a


fix changes.txt for MAPREDUCE-6391.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e344902
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e344902
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e344902

Branch: refs/heads/trunk
Commit: 0e344902aaca417813c7191395cd5c73dfcd4220
Parents: bcc4c74
Author: Allen Wittenauer 
Authored: Tue Oct 27 12:07:18 2015 -0700
Committer: Allen Wittenauer 
Committed: Tue Oct 27 12:07:18 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e344902/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index a725481..f09a353 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -208,6 +208,9 @@ Trunk (Unreleased)
 
 MAPREDUCE-6257. Document encrypted spills (Bibin A Chundatt via aw)
 
+MAPREDUCE-6391. util/Timer.cc completely misunderstands _POSIX_CPUTIME
+(Alan Burlison via aw)
+
   BREAKDOWN OF MAPREDUCE-2841 (NATIVE TASK) SUBTASKS
 
 MAPREDUCE-5985. native-task: Fix build on macosx. Contributed by
@@ -589,9 +592,6 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6484. Yarn Client uses local address instead of RM address as
 token renewer in a secure cluster when RM HA is enabled. (Zhihai Xu)
 
-MAPREDUCE-6391. util/Timer.cc completely misunderstands _POSIX_CPUTIME
-(Alan Burlison via aw)
-
MAPREDUCE-6480. archive-logs tool may miss applications (rkanter)
 
MAPREDUCE-6494. Permission issue when running archive-logs tool as



hadoop git commit: MAPREDUCE-6391. util/Timer.cc completely misunderstands _POSIX_CPUTIME (Alan Burlison via aw)

2015-10-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1396867b5 -> bcc4c746a


MAPREDUCE-6391. util/Timer.cc completely misunderstands _POSIX_CPUTIME (Alan 
Burlison via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bcc4c746
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bcc4c746
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bcc4c746

Branch: refs/heads/trunk
Commit: bcc4c746a88cc698b2937b85ec845b9d8cc736d2
Parents: 1396867
Author: Allen Wittenauer 
Authored: Tue Oct 27 12:04:55 2015 -0700
Committer: Allen Wittenauer 
Committed: Tue Oct 27 12:04:55 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../src/main/native/src/util/Timer.cc | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bcc4c746/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index c6d72e8..a725481 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -589,6 +589,9 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6484. Yarn Client uses local address instead of RM address as
 token renewer in a secure cluster when RM HA is enabled. (Zhihai Xu)
 
+MAPREDUCE-6391. util/Timer.cc completely misunderstands _POSIX_CPUTIME
+(Alan Burlison via aw)
+
MAPREDUCE-6480. archive-logs tool may miss applications (rkanter)
 
MAPREDUCE-6494. Permission issue when running archive-logs tool as

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bcc4c746/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Timer.cc
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Timer.cc
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Timer.cc
index 9f8a9ad..e146777 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Timer.cc
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Timer.cc
@@ -40,7 +40,7 @@ static uint64_t clock_get() {
 
 static uint64_t clock_get() {
   timespec ts;
-  clock_gettime(_POSIX_CPUTIME, &ts);
+  clock_gettime(CLOCK_REALTIME, &ts);
  return 1000000000 * ts.tv_sec + ts.tv_nsec;
 }
 



hadoop git commit: HADOOP-12494. fetchdt stores the token based on token kind instead of token service (HeeSoo Kim via aw)

2015-10-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk faeb6a3f8 -> 1396867b5


HADOOP-12494. fetchdt stores the token based on token kind instead of token 
service (HeeSoo Kim via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1396867b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1396867b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1396867b

Branch: refs/heads/trunk
Commit: 1396867b52533ecf894158a464c6cd3abc7041b9
Parents: faeb6a3
Author: Allen Wittenauer 
Authored: Tue Oct 27 12:01:50 2015 -0700
Committer: Allen Wittenauer 
Committed: Tue Oct 27 12:02:43 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1396867b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 9ba6275..e1addb2 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -516,6 +516,9 @@ Trunk (Unreleased)
 HADOOP-12515. Mockito dependency is missing in hadoop-kafka module.
 (Kai Zheng via aajisaka)
 
+HADOOP-12494. fetchdt stores the token based on token kind instead
+of token service (HeeSoo Kim via aw)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1396867b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
index 803402d..39821aa 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
@@ -178,7 +178,7 @@ public class DelegationTokenFetcher {
 Token token = fs.getDelegationToken(renewer);
 if (null != token) {
   Credentials cred = new Credentials();
-  cred.addToken(token.getKind(), token);
+  cred.addToken(token.getService(), token);
   cred.writeTokenStorageFile(tokenFile, conf);
 
   if (LOG.isDebugEnabled()) {
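
The one-line change matters because Credentials keeps tokens in a map: keyed
by kind, two tokens of the same kind fetched from different services collapse
into one entry. A self-contained demonstration:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class TokenKeyDemo {
  public static void main(String[] args) {
    Credentials cred = new Credentials();
    Token<TokenIdentifier> t1 = new Token<TokenIdentifier>();
    t1.setService(new Text("nn1:8020"));
    Token<TokenIdentifier> t2 = new Token<TokenIdentifier>();
    t2.setService(new Text("nn2:8020"));

    cred.addToken(t1.getService(), t1);  // keyed by service: both survive
    cred.addToken(t2.getService(), t2);
    System.out.println(cred.numberOfTokens()); // 2; keyed by kind it'd be 1
  }
}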



hadoop git commit: HDFS-9307. fuseConnect should be private to fuse_connect.c (Mingliang Liu via Colin P. McCabe)

2015-10-27 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 bb87b4222 -> 7eb0daefc


HDFS-9307. fuseConnect should be private to fuse_connect.c (Mingliang Liu via 
Colin P. McCabe)

(cherry picked from commit faeb6a3f89f3580a5b1a40c6a1f6205269a5aa7a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7eb0daef
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7eb0daef
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7eb0daef

Branch: refs/heads/branch-2
Commit: 7eb0daefc89dc1700cadab800b25f0f69cc6f3d4
Parents: bb87b42
Author: Colin Patrick Mccabe 
Authored: Tue Oct 27 11:41:05 2015 -0700
Committer: Colin Patrick Mccabe 
Committed: Tue Oct 27 11:41:27 2015 -0700

--
 .../src/main/native/fuse-dfs/fuse_connect.c   | 16 +++-
 .../src/main/native/fuse-dfs/fuse_connect.h   | 18 ++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   |  3 +++
 3 files changed, 20 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eb0daef/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
index 79106bc..e696072 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
@@ -522,7 +522,21 @@ error:
   return ret;
 }
 
-int fuseConnect(const char *usrname, struct fuse_context *ctx,
+/**
+ * Get a libhdfs connection.
+ *
+ * If there is an existing connection, it will be reused.  If not, a new one
+ * will be created.
+ *
+ * You must call hdfsConnRelease on the connection you get back!
+ *
+ * @param usrnameThe username to use
+ * @param ctxThe FUSE context to use (contains UID, PID of requestor)
+ * @param conn   (out param) The HDFS connection
+ *
+ * @return   0 on success; error code otherwise
+ */
+static int fuseConnect(const char *usrname, struct fuse_context *ctx,
 struct hdfsConn **out)
 {
   int ret;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eb0daef/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
index 35645c6..73b4f97 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
@@ -39,25 +39,11 @@ int fuseConnectInit(const char *nnUri, int port);
  * Get a libhdfs connection.
  *
  * If there is an existing connection, it will be reused.  If not, a new one
- * will be created.
+ * will be created. The username will be determined from the FUSE thread
+ * context.
  *
  * You must call hdfsConnRelease on the connection you get back!
  *
- * @param usrnameThe username to use
- * @param ctxThe FUSE context to use (contains UID, PID of requestor)
- * @param conn   (out param) The HDFS connection
- *
- * @return   0 on success; error code otherwise
- */
-int fuseConnect(const char *usrname, struct fuse_context *ctx,
-struct hdfsConn **out);
-
-/**
- * Get a libhdfs connection.
- *
- * The same as fuseConnect, except the username will be determined from the 
FUSE
- * thread context.
- *
  * @param conn   (out param) The HDFS connection
  *
  * @return   0 on success; error code otherwise

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eb0daef/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c7a867a..0118dbe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -754,6 +754,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9299. Give ReplicationMonitor a readable thread name (Staffan Friberg
 via Colin P. McCabe)
 
+HDFS-9307. fuseConnect should be private to fuse_connect.c (Mingliang Liu
+via Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than



hadoop git commit: HDFS-9307. fuseConnect should be private to fuse_connect.c (Mingliang Liu via Colin P. McCabe)

2015-10-27 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/trunk fe93577fa -> faeb6a3f8


HDFS-9307. fuseConnect should be private to fuse_connect.c (Mingliang Liu via 
Colin P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/faeb6a3f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/faeb6a3f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/faeb6a3f

Branch: refs/heads/trunk
Commit: faeb6a3f89f3580a5b1a40c6a1f6205269a5aa7a
Parents: fe93577
Author: Colin Patrick Mccabe 
Authored: Tue Oct 27 11:41:05 2015 -0700
Committer: Colin Patrick Mccabe 
Committed: Tue Oct 27 11:41:05 2015 -0700

--
 .../src/main/native/fuse-dfs/fuse_connect.c   | 16 +++-
 .../src/main/native/fuse-dfs/fuse_connect.h   | 18 ++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   |  3 +++
 3 files changed, 20 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/faeb6a3f/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
index 79106bc..e696072 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.c
@@ -522,7 +522,21 @@ error:
   return ret;
 }
 
-int fuseConnect(const char *usrname, struct fuse_context *ctx,
+/**
+ * Get a libhdfs connection.
+ *
+ * If there is an existing connection, it will be reused.  If not, a new one
+ * will be created.
+ *
+ * You must call hdfsConnRelease on the connection you get back!
+ *
+ * @param usrnameThe username to use
+ * @param ctxThe FUSE context to use (contains UID, PID of requestor)
+ * @param conn   (out param) The HDFS connection
+ *
+ * @return   0 on success; error code otherwise
+ */
+static int fuseConnect(const char *usrname, struct fuse_context *ctx,
 struct hdfsConn **out)
 {
   int ret;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/faeb6a3f/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
index 35645c6..73b4f97 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_connect.h
@@ -39,25 +39,11 @@ int fuseConnectInit(const char *nnUri, int port);
  * Get a libhdfs connection.
  *
  * If there is an existing connection, it will be reused.  If not, a new one
- * will be created.
+ * will be created. The username will be determined from the FUSE thread
+ * context.
  *
  * You must call hdfsConnRelease on the connection you get back!
  *
- * @param usrnameThe username to use
- * @param ctxThe FUSE context to use (contains UID, PID of requestor)
- * @param conn   (out param) The HDFS connection
- *
- * @return   0 on success; error code otherwise
- */
-int fuseConnect(const char *usrname, struct fuse_context *ctx,
-struct hdfsConn **out);
-
-/**
- * Get a libhdfs connection.
- *
- * The same as fuseConnect, except the username will be determined from the 
FUSE
- * thread context.
- *
  * @param conn   (out param) The HDFS connection
  *
  * @return   0 on success; error code otherwise

http://git-wip-us.apache.org/repos/asf/hadoop/blob/faeb6a3f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 277152e..fd28c02 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1590,6 +1590,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9299. Give ReplicationMonitor a readable thread name (Staffan Friberg
 via Colin P. McCabe)
 
+HDFS-9307. fuseConnect should be private to fuse_connect.c (Mingliang Liu
+via Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than
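
Editor's note: for readers less used to C linkage, marking fuseConnect static gives it internal linkage, so only code inside fuse_connect.c can call it; the header keeps exporting only the entry point that derives the username from the FUSE thread context (fuseConnectAsThreadUid). A rough Java analogue of the same visibility narrowing, with illustrative names that are not Hadoop source:

// Illustrative sketch only -- not Hadoop code. The former externally
// visible helper becomes private, so the compiler (not convention)
// forces outside callers through the one remaining public entry point,
// mirroring fuseConnect becoming static in fuse_connect.c.
class ConnectionFacade {
  static final class Conn {}

  // was effectively public before the change
  private Conn connectAs(String user) {
    return new Conn(); // placeholder for the real connect logic
  }

  // the surviving public API derives the user itself, like the
  // thread-context variant kept in fuse_connect.h
  public Conn connect() {
    return connectAs(System.getProperty("user.name"));
  }
}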



hadoop git commit: HDFS-9299. Give ReplicationMonitor a readable thread name (Staffan Friberg via Colin P. McCabe)

2015-10-27 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 73612294b -> bb87b4222


HDFS-9299. Give ReplicationMonitor a readable thread name (Staffan Friberg via 
Colin P. McCabe)

(cherry picked from commit fe93577faf49ceb2ee47a7762a61625313ea773b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bb87b422
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bb87b422
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bb87b422

Branch: refs/heads/branch-2
Commit: bb87b4222c75cf1f3b7ac5e0323da0f5f21233ff
Parents: 7361229
Author: Colin Patrick Mccabe 
Authored: Tue Oct 27 11:37:26 2015 -0700
Committer: Colin Patrick Mccabe 
Committed: Tue Oct 27 11:37:54 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java   | 1 +
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb87b422/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d19ba63..c7a867a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -751,6 +751,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs
 write scenario. (Mingliang Liu via mingma)
 
+HDFS-9299. Give ReplicationMonitor a readable thread name (Staffan Friberg
+via Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb87b422/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 74277ef..11c556d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -468,6 +468,7 @@ public class BlockManager implements BlockStatsMXBean {
   public void activate(Configuration conf) {
 pendingReplications.start();
 datanodeManager.activate(conf);
+this.replicationThread.setName("ReplicationMonitor");
 this.replicationThread.start();
 mxBeanName = MBeans.register("NameNode", "BlockStats", this);
   }
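
Editor's note: the whole fix is the single setName call above. A standalone sketch, plain Java rather than Hadoop code, of what it buys you -- a named thread shows up as "ReplicationMonitor" instead of an opaque "Thread-N" in jstack output and log lines:

public class NamedThreadDemo {
  public static void main(String[] args) {
    Thread unnamed = new Thread(() -> {});
    Thread named = new Thread(() -> {});
    named.setName("ReplicationMonitor"); // the same call the patch adds
    named.setDaemon(true);               // matches a long-lived monitor thread
    System.out.println(unnamed.getName()); // e.g. Thread-0 -- opaque in a stack dump
    System.out.println(named.getName());   // ReplicationMonitor
  }
}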



hadoop git commit: HDFS-9299. Give ReplicationMonitor a readable thread name (Staffan Friberg via Colin P. McCabe)

2015-10-27 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/trunk ed9806ea4 -> fe93577fa


HDFS-9299. Give ReplicationMonitor a readable thread name (Staffan Friberg via 
Colin P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fe93577f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fe93577f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fe93577f

Branch: refs/heads/trunk
Commit: fe93577faf49ceb2ee47a7762a61625313ea773b
Parents: ed9806e
Author: Colin Patrick Mccabe 
Authored: Tue Oct 27 11:37:26 2015 -0700
Committer: Colin Patrick Mccabe 
Committed: Tue Oct 27 11:37:26 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java   | 1 +
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe93577f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e66cdc7..277152e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1587,6 +1587,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs
 write scenario. (Mingliang Liu via mingma)
 
+HDFS-9299. Give ReplicationMonitor a readable thread name (Staffan Friberg
+via Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe93577f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 5f55ece..897df1e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -491,6 +491,7 @@ public class BlockManager implements BlockStatsMXBean {
   public void activate(Configuration conf) {
 pendingReplications.start();
 datanodeManager.activate(conf);
+this.replicationThread.setName("ReplicationMonitor");
 this.replicationThread.start();
 mxBeanName = MBeans.register("NameNode", "BlockStats", this);
   }



hadoop git commit: HADOOP-12178. NPE during handling of SASL setup if problem with SASL resolver class. Contributed by Steve Loughran

2015-10-27 Thread zxu
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2c335a843 -> 73612294b


HADOOP-12178. NPE during handling of SASL setup if problem with SASL resolver 
class. Contributed by Steve Loughran

(cherry picked from commit ed9806ea40b945df0637c21b68964d1d2bd204f3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/73612294
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/73612294
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/73612294

Branch: refs/heads/branch-2
Commit: 73612294bdb7440a7b22dabe6eeb2857602e766b
Parents: 2c335a8
Author: Zhihai Xu 
Authored: Tue Oct 27 09:51:26 2015 -0700
Committer: Zhihai Xu 
Committed: Tue Oct 27 10:09:45 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 .../src/main/java/org/apache/hadoop/ipc/Client.java | 9 +++--
 2 files changed, 10 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/73612294/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8398ad0..b99fd7b 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -680,6 +680,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12457. [JDK8] Fix a failure of compiling common by javadoc.
 (Akira AJISAKA via ozawa)
 
+HADOOP-12178. NPE during handling of SASL setup if problem with SASL
+resolver class. (Steve Loughran via zxu)
+
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73612294/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 9c70382..ad43b62 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -749,7 +749,12 @@ public class Client {
   return setupSaslConnection(in2, out2);
 }
   });
-} catch (Exception ex) {
+} catch (IOException ex) {
+  if (saslRpcClient == null) {
+// whatever happened -it can't be handled, so rethrow
+throw ex;
+  }
+  // otherwise, assume a connection problem
   authMethod = saslRpcClient.getAuthMethod();
   if (rand == null) {
 rand = new Random();
@@ -811,7 +816,7 @@ public class Client {
 if (t instanceof IOException) {
   markClosed((IOException)t);
 } else {
-  markClosed(new IOException("Couldn't set up IO streams", t));
+  markClosed(new IOException("Couldn't set up IO streams: " + t, t));
 }
 close();
   }
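
Editor's note: the shape of the fix, reduced to a sketch with assumed names rather than the real Client internals. The catch block narrows from Exception to IOException and rethrows when the SASL client was never constructed, since the old code dereferenced the null saslRpcClient and turned a meaningful setup failure into an NPE; the second hunk also appends the cause's text to the wrapper message so the original error survives in logs.

import java.io.IOException;

// Minimal sketch of the guard pattern, under assumed names.
public class SaslSetupSketch {
  private Object saslRpcClient; // stays null if negotiation fails early

  void setupConnection() throws IOException {
    try {
      negotiate();
    } catch (IOException ex) {
      if (saslRpcClient == null) {
        // nothing to fall back on: surface the original failure
        throw ex;
      }
      // otherwise assume a connection problem and continue with fallback
    }
  }

  private void negotiate() throws IOException {
    throw new IOException("SASL resolver class could not be loaded"); // illustrative
  }

  public static void main(String[] args) {
    try {
      new SaslSetupSketch().setupConnection();
    } catch (IOException e) {
      System.out.println("surfaced original failure: " + e.getMessage());
    }
  }
}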



hadoop git commit: HADOOP-12178. NPE during handling of SASL setup if problem with SASL resolver class. Contributed by Steve Loughran

2015-10-27 Thread zxu
Repository: hadoop
Updated Branches:
  refs/heads/trunk aa09880ab -> ed9806ea4


HADOOP-12178. NPE during handling of SASL setup if problem with SASL resolver 
class. Contributed by Steve Loughran


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ed9806ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ed9806ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ed9806ea

Branch: refs/heads/trunk
Commit: ed9806ea40b945df0637c21b68964d1d2bd204f3
Parents: aa09880
Author: Zhihai Xu 
Authored: Tue Oct 27 09:51:26 2015 -0700
Committer: Zhihai Xu 
Committed: Tue Oct 27 09:51:26 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 .../src/main/java/org/apache/hadoop/ipc/Client.java | 9 +++--
 2 files changed, 10 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed9806ea/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 01512bd..9ba6275 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1271,6 +1271,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12457. [JDK8] Fix a failure of compiling common by javadoc.
 (Akira AJISAKA via ozawa)
 
+HADOOP-12178. NPE during handling of SASL setup if problem with SASL
+resolver class. (Steve Loughran via zxu)
+
   OPTIMIZATIONS
 
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed9806ea/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index f067d59..5917e09 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -749,7 +749,12 @@ public class Client {
   return setupSaslConnection(in2, out2);
 }
   });
-} catch (Exception ex) {
+} catch (IOException ex) {
+  if (saslRpcClient == null) {
+// whatever happened -it can't be handled, so rethrow
+throw ex;
+  }
+  // otherwise, assume a connection problem
   authMethod = saslRpcClient.getAuthMethod();
   if (rand == null) {
 rand = new Random();
@@ -811,7 +816,7 @@ public class Client {
 if (t instanceof IOException) {
   markClosed((IOException)t);
 } else {
-  markClosed(new IOException("Couldn't set up IO streams", t));
+  markClosed(new IOException("Couldn't set up IO streams: " + t, t));
 }
 close();
   }



hadoop git commit: HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario. (Mingliang Liu via mingma)

2015-10-27 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c5bf1cb7a -> 2c335a843


HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs write 
scenario. (Mingliang Liu via mingma)

(cherry picked from commit aa09880ab85f3c35c12373976e7b03f3140b65c8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2c335a84
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2c335a84
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2c335a84

Branch: refs/heads/branch-2
Commit: 2c335a8434d94528d1923fffae35356447a5a5dc
Parents: c5bf1cb
Author: Ming Ma 
Authored: Tue Oct 27 09:28:40 2015 -0700
Committer: Ming Ma 
Committed: Tue Oct 27 09:29:37 2015 -0700

--
 .../org/apache/hadoop/hdfs/DataStreamer.java|  4 +-
 .../hdfs/client/HdfsClientConfigKeys.java   |  5 +
 .../hadoop/hdfs/client/impl/DfsClientConf.java  | 12 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../src/main/resources/hdfs-default.xml | 12 +++
 .../hadoop/hdfs/TestDFSClientSocketSize.java| 96 
 6 files changed, 131 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c335a84/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 5bb7837..9b140be 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -139,7 +139,9 @@ class DataStreamer extends Daemon {
 NetUtils.connect(sock, isa, client.getRandomLocalInterfaceAddr(),
 conf.getSocketTimeout());
 sock.setSoTimeout(timeout);
-sock.setSendBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
+if (conf.getSocketSendBufferSize() > 0) {
+  sock.setSendBufferSize(conf.getSocketSendBufferSize());
+}
 LOG.debug("Send buf size {}", sock.getSendBufferSize());
 return sock;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c335a84/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 2e72769..992cf3a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.client;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 
 import java.util.concurrent.TimeUnit;
 
@@ -62,6 +63,10 @@ public interface HdfsClientConfigKeys {
   String  DFS_CLIENT_WRITE_PACKET_SIZE_KEY = "dfs.client-write-packet-size";
   int DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT = 64*1024;
   String  DFS_CLIENT_SOCKET_TIMEOUT_KEY = "dfs.client.socket-timeout";
+  String  DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_KEY =
+  "dfs.client.socket.send.buffer.size";
+  int DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_DEFAULT =
+  HdfsConstants.DEFAULT_DATA_SOCKET_SIZE;
   String  DFS_CLIENT_SOCKET_CACHE_CAPACITY_KEY =
   "dfs.client.socketcache.capacity";
   int DFS_CLIENT_SOCKET_CACHE_CAPACITY_DEFAULT = 16;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c335a84/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
index 15387bb..7f3ae04 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
@@ -55,6 +55,8 @@ import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCK
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCKET_CACHE_CAPACITY_KEY;
 import static 
org.apache.hadoop.hdfs.client.HdfsClientC

hadoop git commit: HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario. (Mingliang Liu via mingma)

2015-10-27 Thread mingma
Repository: hadoop
Updated Branches:
  refs/heads/trunk c28e16b40 -> aa09880ab


HDFS-9259. Make SO_SNDBUF size configurable at DFSClient side for hdfs write 
scenario. (Mingliang Liu via mingma)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa09880a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa09880a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa09880a

Branch: refs/heads/trunk
Commit: aa09880ab85f3c35c12373976e7b03f3140b65c8
Parents: c28e16b
Author: Ming Ma 
Authored: Tue Oct 27 09:28:40 2015 -0700
Committer: Ming Ma 
Committed: Tue Oct 27 09:28:40 2015 -0700

--
 .../org/apache/hadoop/hdfs/DataStreamer.java|  4 +-
 .../hdfs/client/HdfsClientConfigKeys.java   |  5 +
 .../hadoop/hdfs/client/impl/DfsClientConf.java  | 12 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../src/main/resources/hdfs-default.xml | 12 +++
 .../hadoop/hdfs/TestDFSClientSocketSize.java| 96 
 6 files changed, 131 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa09880a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index b0c5be6..03c2c52 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -131,7 +131,9 @@ class DataStreamer extends Daemon {
 NetUtils.connect(sock, isa, client.getRandomLocalInterfaceAddr(),
 conf.getSocketTimeout());
 sock.setSoTimeout(timeout);
-sock.setSendBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
+if (conf.getSocketSendBufferSize() > 0) {
+  sock.setSendBufferSize(conf.getSocketSendBufferSize());
+}
 LOG.debug("Send buf size {}", sock.getSendBufferSize());
 return sock;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa09880a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 17c3654..fcfd49c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.client;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 
 import java.util.concurrent.TimeUnit;
 
@@ -58,6 +59,10 @@ public interface HdfsClientConfigKeys {
   String  DFS_CLIENT_WRITE_PACKET_SIZE_KEY = "dfs.client-write-packet-size";
   int DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT = 64*1024;
   String  DFS_CLIENT_SOCKET_TIMEOUT_KEY = "dfs.client.socket-timeout";
+  String  DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_KEY =
+  "dfs.client.socket.send.buffer.size";
+  int DFS_CLIENT_SOCKET_SEND_BUFFER_SIZE_DEFAULT =
+  HdfsConstants.DEFAULT_DATA_SOCKET_SIZE;
   String  DFS_CLIENT_SOCKET_CACHE_CAPACITY_KEY =
   "dfs.client.socketcache.capacity";
   int DFS_CLIENT_SOCKET_CACHE_CAPACITY_DEFAULT = 16;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa09880a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
index 43fba7b..194f3ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
@@ -56,6 +56,8 @@ import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCK
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCKET_CACHE_CAPACITY_KEY;
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_SOCKET_CACHE_EXPIRY_MSEC_DEFAULT;
 import static 
org.a
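
Editor's note: the key behavioral point in the DataStreamer hunk is that with the new dfs.client.socket.send.buffer.size key, a configured value of zero or below now skips the explicit setSendBufferSize call entirely, letting the OS auto-tune SO_SNDBUF instead of pinning it at the old hard-coded default. A minimal standalone sketch of that semantic (the wiring to the config key is assumed):

import java.io.IOException;
import java.net.Socket;

public class SendBufferSketch {
  // Mirrors the patched logic: only override SO_SNDBUF for positive values.
  static void applySendBufferSize(Socket sock, int configured) throws IOException {
    if (configured > 0) {
      sock.setSendBufferSize(configured);
    }
    // configured <= 0: leave the kernel default / auto-tuning in place
  }

  public static void main(String[] args) throws IOException {
    try (Socket sock = new Socket()) {
      applySendBufferSize(sock, 128 * 1024); // value would come from the config key
      System.out.println("SO_SNDBUF = " + sock.getSendBufferSize());
    }
  }
}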

hadoop git commit: HDFS-7725. Incorrect 'nodes in service' metrics caused all writes to fail. Contributed by Ming Ma. (cherry picked from commit 8104d522690fe9556177893770a388291cea0749)

2015-10-27 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 653ef52ef -> 1d23e1ec0


HDFS-7725. Incorrect 'nodes in service' metrics caused all writes to fail. 
Contributed by Ming Ma.
(cherry picked from commit 8104d522690fe9556177893770a388291cea0749)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1d23e1ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1d23e1ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1d23e1ec

Branch: refs/heads/branch-2.7
Commit: 1d23e1ec073489bfc8a04a08350a2c46efbd466f
Parents: 653ef52
Author: Kihwal Lee 
Authored: Tue Oct 27 11:09:05 2015 -0500
Committer: Kihwal Lee 
Committed: Tue Oct 27 11:09:05 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../blockmanagement/DecommissionManager.java| 28 +--
 .../blockmanagement/HeartbeatManager.java   | 29 ++--
 .../namenode/TestNamenodeCapacityReport.java|  5 
 4 files changed, 41 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d23e1ec/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 50bc0c4..7c09896 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -28,6 +28,9 @@ Release 2.7.2 - UNRELEASED
 HDFS-6945. BlockManager should remove a block from excessReplicateMap and
 decrement ExcessBlocks metric when the block is removed. (aajisaka)
 
+HDFS-7725. Incorrect "nodes in service" metrics caused all writes to fail.
+(Ming Ma via wang)
+
 HDFS-8806. Inconsistent metrics: number of missing blocks with replication
 factor 1 not properly cleared. (Zhe Zhang via aajisaka)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d23e1ec/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
index 9355329..7f3d778 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
@@ -197,23 +197,21 @@ public class DecommissionManager {
*/
   @VisibleForTesting
   public void startDecommission(DatanodeDescriptor node) {
-if (!node.isDecommissionInProgress()) {
-  if (!node.isAlive) {
-LOG.info("Dead node {} is decommissioned immediately.", node);
-node.setDecommissioned();
-  } else if (!node.isDecommissioned()) {
+if (!node.isDecommissionInProgress() && !node.isDecommissioned()) {
+  // Update DN stats maintained by HeartbeatManager
+  hbManager.startDecommission(node);
+  // hbManager.startDecommission will set dead node to decommissioned.
+  if (node.isDecommissionInProgress()) {
 for (DatanodeStorageInfo storage : node.getStorageInfos()) {
-  LOG.info("Starting decommission of {} {} with {} blocks", 
+  LOG.info("Starting decommission of {} {} with {} blocks",
   node, storage, storage.numBlocks());
 }
-// Update DN stats maintained by HeartbeatManager
-hbManager.startDecommission(node);
 node.decommissioningStatus.setStartTime(monotonicNow());
 pendingNodes.add(node);
   }
 } else {
-  LOG.trace("startDecommission: Node {} is already decommission in "
-  + "progress, nothing to do.", node);
+  LOG.trace("startDecommission: Node {} in {}, nothing to do.",
+  node, node.getAdminState());
 }
   }
 
@@ -221,12 +219,12 @@ public class DecommissionManager {
* Stop decommissioning the specified datanode. 
* @param node
*/
-  void stopDecommission(DatanodeDescriptor node) {
+  @VisibleForTesting
+  public void stopDecommission(DatanodeDescriptor node) {
 if (node.isDecommissionInProgress() || node.isDecommissioned()) {
-  LOG.info("Stopping decommissioning of node {}", node);
   // Update DN stats maintained by HeartbeatManager
   hbManager.stopDecommission(node);
-  // Over-replicated blocks will be detected and processed when 
+  // Over-replicated blocks will be detected and processed when
   // the dead node comes back and send in its full block report.
   if (node.isAlive) {
 blo
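
Editor's note: the reordering above is subtle, so here is the control flow in a reduced sketch with toy types, not the real DecommissionManager. Stat bookkeeping moves into hbManager.startDecommission, which may transition a dead node directly to decommissioned; only nodes that actually end up decommission-in-progress get queued, which is what keeps the "nodes in service" counters consistent.

import java.util.ArrayDeque;
import java.util.Queue;

public class DecommissionSketch {
  enum AdminState { NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED }

  static class Node {
    AdminState state = AdminState.NORMAL;
    boolean alive = true;
  }

  final Queue<Node> pendingNodes = new ArrayDeque<>();

  // stand-in for hbManager.startDecommission: one place owns both the
  // counter updates and the resulting state, so they cannot diverge
  void hbStartDecommission(Node n) {
    n.state = n.alive ? AdminState.DECOMMISSION_INPROGRESS
                      : AdminState.DECOMMISSIONED;
  }

  void startDecommission(Node n) {
    if (n.state == AdminState.NORMAL) {
      hbStartDecommission(n);
      if (n.state == AdminState.DECOMMISSION_INPROGRESS) {
        pendingNodes.add(n); // only live nodes need their blocks processed
      }
    }
  }

  public static void main(String[] args) {
    DecommissionSketch d = new DecommissionSketch();
    Node dead = new Node();
    dead.alive = false;
    d.startDecommission(dead);
    System.out.println(dead.state + ", queued=" + d.pendingNodes.size()); // DECOMMISSIONED, queued=0
  }
}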

[2/2] hadoop git commit: YARN-4302. SLS not able start due to NPE in SchedulerApplicationAttempt. Contributed by Bibin A Chundatt.

2015-10-27 Thread vvasudev
YARN-4302. SLS not able start due to NPE in SchedulerApplicationAttempt. 
Contributed by Bibin A Chundatt.

(cherry picked from commit c28e16b40caf1e22f72cf2214ebc2fe2eaca4d03)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5bf1cb7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5bf1cb7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5bf1cb7

Branch: refs/heads/branch-2
Commit: c5bf1cb7af33d93cff783b00d945eb3be66c3c5e
Parents: 3138b43
Author: Varun Vasudev 
Authored: Tue Oct 27 20:56:00 2015 +0530
Committer: Varun Vasudev 
Committed: Tue Oct 27 20:56:45 2015 +0530

--
 .../hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java   | 2 +-
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5bf1cb7/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
index d94a31d..e9d829e 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
@@ -933,7 +933,7 @@ public class ResourceSchedulerWrapper
   @LimitedPrivate("yarn")
   @Unstable
   public Resource getClusterResource() {
-return null;
+return super.getClusterResource();
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5bf1cb7/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 2d2770a..fe804a2 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -973,6 +973,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3573. MiniMRYarnCluster constructor that starts the timeline server
 using a boolean should be marked deprecated. (Brahma Reddy Battula via 
ozawa)
 
+YARN-4302. SLS not able start due to NPE in SchedulerApplicationAttempt
+(Bibin A Chundatt via vvasudev)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES
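
Editor's note: the one-line fix is a classic override hazard -- the wrapper stubbed getClusterResource() to return null, so any caller that chained off the result hit an NPE. Delegating to super restores the superclass contract. In miniature, with toy types rather than YARN code:

public class OverrideNpeSketch {
  static class Scheduler {
    public String getClusterResource() { return "memory:256g, vcores:64"; }
  }

  static class SchedulerWrapper extends Scheduler {
    @Override
    public String getClusterResource() {
      // was: return null;  -- callers doing getClusterResource().length() NPE'd
      return super.getClusterResource();
    }
  }

  public static void main(String[] args) {
    System.out.println(new SchedulerWrapper().getClusterResource().length());
  }
}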



[1/2] hadoop git commit: YARN-4302. SLS not able start due to NPE in SchedulerApplicationAttempt. Contributed by Bibin A Chundatt.

2015-10-27 Thread vvasudev
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 3138b43f1 -> c5bf1cb7a
  refs/heads/trunk bcb2386e3 -> c28e16b40


YARN-4302. SLS not able start due to NPE in SchedulerApplicationAttempt. 
Contributed by Bibin A Chundatt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c28e16b4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c28e16b4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c28e16b4

Branch: refs/heads/trunk
Commit: c28e16b40caf1e22f72cf2214ebc2fe2eaca4d03
Parents: bcb2386
Author: Varun Vasudev 
Authored: Tue Oct 27 20:56:00 2015 +0530
Committer: Varun Vasudev 
Committed: Tue Oct 27 20:56:00 2015 +0530

--
 .../hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java   | 2 +-
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c28e16b4/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
index 310b3b5..fce220b 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
@@ -932,7 +932,7 @@ final public class ResourceSchedulerWrapper
   @LimitedPrivate("yarn")
   @Unstable
   public Resource getClusterResource() {
-return null;
+return super.getClusterResource();
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c28e16b4/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 29259c4..f5a3a83 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1025,6 +1025,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3573. MiniMRYarnCluster constructor that starts the timeline server
 using a boolean should be marked deprecated. (Brahma Reddy Battula via 
ozawa)
 
+YARN-4302. SLS not able start due to NPE in SchedulerApplicationAttempt
+(Bibin A Chundatt via vvasudev)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES



hadoop git commit: HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. Contributed by Kai Zheng.

2015-10-27 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 96677bef0 -> bcb2386e3


HADOOP-12515. Mockito dependency is missing in hadoop-kafka module. Contributed 
by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bcb2386e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bcb2386e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bcb2386e

Branch: refs/heads/trunk
Commit: bcb2386e39433a81f3bf4470b0a425292f47aa73
Parents: 96677be
Author: Akira Ajisaka 
Authored: Tue Oct 27 20:17:16 2015 +0900
Committer: Akira Ajisaka 
Committed: Tue Oct 27 20:17:16 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 hadoop-tools/hadoop-kafka/pom.xml   | 5 +
 2 files changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bcb2386e/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a0c2fa1..01512bd 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -513,6 +513,9 @@ Trunk (Unreleased)
 HADOOP-12364. Deleting pid file after stop is causing the daemons to
 keep restarting (Siqi Li via aw)
 
+HADOOP-12515. Mockito dependency is missing in hadoop-kafka module.
+(Kai Zheng via aajisaka)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bcb2386e/hadoop-tools/hadoop-kafka/pom.xml
--
diff --git a/hadoop-tools/hadoop-kafka/pom.xml 
b/hadoop-tools/hadoop-kafka/pom.xml
index 75405e1..b478845 100644
--- a/hadoop-tools/hadoop-kafka/pom.xml
+++ b/hadoop-tools/hadoop-kafka/pom.xml
@@ -125,5 +125,10 @@
       <artifactId>junit</artifactId>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.mockito</groupId>
+      <artifactId>mockito-all</artifactId>
+      <scope>test</scope>
+    </dependency>
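
Editor's note: the test scope matters because mockito-all is only needed to compile and run the module's tests, so it stays off the runtime classpath. A minimal example of the kind of test code that fails to compile without this dependency (illustrative, not from hadoop-kafka):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;

public class MockitoSmokeTest {
  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    List<String> mocked = mock(List.class); // needs mockito on the test classpath
    when(mocked.get(0)).thenReturn("kafka");
    System.out.println(mocked.get(0)); // kafka
  }
}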