hadoop git commit: HDFS-11780. Ozone: KSM: Add putKey. Contributed by Chen Liang.

2017-05-25 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7240 67da8be74 -> e641bee7b


HDFS-11780. Ozone: KSM: Add putKey. Contributed by Chen Liang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e641bee7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e641bee7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e641bee7

Branch: refs/heads/HDFS-7240
Commit: e641bee7b7770fc30b9f6bbc688c6025b601e5bd
Parents: 67da8be
Author: Xiaoyu Yao 
Authored: Thu May 25 22:06:17 2017 -0700
Committer: Xiaoyu Yao 
Committed: Thu May 25 22:06:17 2017 -0700

--
 .../apache/hadoop/ksm/helpers/KsmKeyArgs.java   |  88 +++
 .../apache/hadoop/ksm/helpers/KsmKeyInfo.java   | 156 +++
 .../ksm/protocol/KeySpaceManagerProtocol.java   |   7 +
 ...ceManagerProtocolClientSideTranslatorPB.java |  38 +
 .../org/apache/hadoop/scm/ScmConfigKeys.java|   2 +-
 .../main/proto/KeySpaceManagerProtocol.proto|  33 
 .../org/apache/hadoop/ozone/ksm/KSMMetrics.java |  19 +++
 .../org/apache/hadoop/ozone/ksm/KeyManager.java |  45 ++
 .../apache/hadoop/ozone/ksm/KeyManagerImpl.java | 109 +
 .../hadoop/ozone/ksm/KeySpaceManager.java   |  50 ++
 .../hadoop/ozone/ksm/MetadataManager.java   |   9 ++
 .../hadoop/ozone/ksm/MetadataManagerImpl.java   |   8 +
 .../ozone/ksm/exceptions/KSMException.java  |   1 +
 ...ceManagerProtocolServerSideTranslatorPB.java |  31 
 .../ozone/scm/block/BlockManagerImpl.java   |   4 +-
 .../web/storage/DistributedStorageHandler.java  |  38 -
 .../hadoop/ozone/ksm/TestKeySpaceManager.java   |  37 -
 17 files changed, 665 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e641bee7/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ksm/helpers/KsmKeyArgs.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ksm/helpers/KsmKeyArgs.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ksm/helpers/KsmKeyArgs.java
new file mode 100644
index 000..a034ed3
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ksm/helpers/KsmKeyArgs.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ksm.helpers;
+
+/**
+ * Args for a key. Clients use this to specify a key's attributes on key
+ * creation (putKey()).
+ */
+public final class KsmKeyArgs {
+  private final String volumeName;
+  private final String bucketName;
+  private final String keyName;
+
+  private final long dataSize;
+
+  private KsmKeyArgs(String volumeName, String bucketName, String keyName,
+      long dataSize) {
+    this.volumeName = volumeName;
+    this.bucketName = bucketName;
+    this.keyName = keyName;
+    this.dataSize = dataSize;
+  }
+
+  public String getVolumeName() {
+    return volumeName;
+  }
+
+  public String getBucketName() {
+    return bucketName;
+  }
+
+  public String getKeyName() {
+    return keyName;
+  }
+
+  public long getDataSize() {
+    return dataSize;
+  }
+
+  /**
+   * Builder class of KsmKeyArgs.
+   */
+  public static class Builder {
+    private String volumeName;
+    private String bucketName;
+    private String keyName;
+    private long dataSize;
+
+    public Builder setVolumeName(String volume) {
+      this.volumeName = volume;
+      return this;
+    }
+
+    public Builder setBucketName(String bucket) {
+      this.bucketName = bucket;
+      return this;
+    }
+
+    public Builder setKeyName(String key) {
+      this.keyName = key;
+      return this;
+    }
+
+    public Builder setDataSize(long size) {
+      this.dataSize = size;
+      return this;
+    }
+
+    public KsmKeyArgs build() {
+      return new KsmKeyArgs(volumeName, bucketName, keyName, dataSize);
+    }
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e641bee7/hadoop-hdfs-project/hadoop-hdfs-client/src/main/
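
For orientation, a minimal sketch of how client code would drive the builder
above when preparing a putKey() call; the volume, bucket, key names and size
are illustrative, only KsmKeyArgs and its Builder come from the patch:

    // Assemble the arguments for a hypothetical putKey() invocation.
    KsmKeyArgs keyArgs = new KsmKeyArgs.Builder()
        .setVolumeName("volume-one")   // hypothetical volume
        .setBucketName("bucket-one")   // hypothetical bucket
        .setKeyName("key-one")         // hypothetical key
        .setDataSize(4096)             // declared size of the data to write
        .build();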

hadoop git commit: YARN-6555. Store application flow context in NM state store for work-preserving restart. (Rohith Sharma K S via Haibo Chen)

2017-05-25 Thread haibochen
Repository: hadoop
Updated Branches:
  refs/heads/YARN-5355-branch-2 f1f7d6534 -> 303d7e0a2


YARN-6555. Store application flow context in NM state store for work-preserving 
restart. (Rohith Sharma K S via Haibo Chen)

(cherry picked from commit 47474fffac085e0e5ea46336bf80ccd0677017a3)
(cherry picked from commit 8817cb5c8424359b880c6d700e53092f0269c1bb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/303d7e0a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/303d7e0a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/303d7e0a

Branch: refs/heads/YARN-5355-branch-2
Commit: 303d7e0a284544b13d5ea04ef699823d31b7933e
Parents: f1f7d65
Author: Haibo Chen 
Authored: Thu May 25 21:15:27 2017 -0700
Committer: Haibo Chen 
Committed: Thu May 25 21:38:58 2017 -0700

--
 .../containermanager/ContainerManagerImpl.java  | 71 +---
 .../application/ApplicationImpl.java| 27 ++--
 .../yarn_server_nodemanager_recovery.proto  |  7 ++
 .../TestContainerManagerRecovery.java   | 40 +--
 4 files changed, 111 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/303d7e0a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
index 1d822fe..a9d5f47 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
@@ -85,6 +85,7 @@ import org.apache.hadoop.yarn.ipc.RPCUtil;
 import org.apache.hadoop.yarn.ipc.YarnRPC;
 import org.apache.hadoop.yarn.proto.YarnProtos.ApplicationACLMapProto;
 import 
org.apache.hadoop.yarn.proto.YarnServerNodemanagerRecoveryProtos.ContainerManagerApplicationProto;
+import 
org.apache.hadoop.yarn.proto.YarnServerNodemanagerRecoveryProtos.FlowContextProto;
 import org.apache.hadoop.yarn.security.ContainerTokenIdentifier;
 import org.apache.hadoop.yarn.security.NMTokenIdentifier;
 import org.apache.hadoop.yarn.server.api.ContainerType;
@@ -384,10 +385,20 @@ public class ContainerManagerImpl extends CompositeService implements
           new LogAggregationContextPBImpl(p.getLogAggregationContext());
     }
 
+    FlowContext fc = null;
+    if (p.getFlowContext() != null) {
+      FlowContextProto fcp = p.getFlowContext();
+      fc = new FlowContext(fcp.getFlowName(), fcp.getFlowVersion(),
+          fcp.getFlowRunId());
+      if (LOG.isDebugEnabled()) {
+        LOG.debug(
+            "Recovering Flow context: " + fc + " for an application " + appId);
+      }
+    }
+
     LOG.info("Recovering application " + appId);
-    //TODO: Recover flow and flow run ID
-    ApplicationImpl app = new ApplicationImpl(dispatcher, p.getUser(), appId,
-        creds, context, p.getAppLogAggregationInitedTime());
+    ApplicationImpl app = new ApplicationImpl(dispatcher, p.getUser(), fc,
+        appId, creds, context, p.getAppLogAggregationInitedTime());
     context.getApplications().put(appId, app);
     app.handle(new ApplicationInitEvent(appId, acls, logAggregationContext));
   }
@@ -941,7 +952,7 @@ public class ContainerManagerImpl extends CompositeService implements
   private ContainerManagerApplicationProto buildAppProto(ApplicationId appId,
       String user, Credentials credentials,
       Map<ApplicationAccessType, String> appAcls,
-      LogAggregationContext logAggregationContext) {
+      LogAggregationContext logAggregationContext, FlowContext flowContext) {
 
     ContainerManagerApplicationProto.Builder builder =
         ContainerManagerApplicationProto.newBuilder();
@@ -976,6 +987,16 @@ public class ContainerManagerImpl extends CompositeService implements
       }
     }
 
+    builder.clearFlowContext();
+    if (flowContext != null && flowContext.getFlowName() != null
+        && flowContext.getFlowVersion() != null) {
+      FlowContextProto fcp =
+          FlowContextProto.newBuilder().setFlowName(flowContext.getFlowName())
+              .setFlowVersion(flowContext.getFlowVersion())
+              .setFlowRunId(flowContext.getFlowRunId()).build();
+      builder.setFlowContext(fcp);
+    }
+
     return builder.build();
   }

hadoop git commit: YARN-6555. Store application flow context in NM state store for work-preserving restart. (Rohith Sharma K S via Haibo Chen)

2017-05-25 Thread haibochen
Repository: hadoop
Updated Branches:
  refs/heads/YARN-5355 3e052dbbe -> 8817cb5c8


YARN-6555. Store application flow context in NM state store for work-preserving 
restart. (Rohith Sharma K S via Haibo Chen)

(cherry picked from commit 47474fffac085e0e5ea46336bf80ccd0677017a3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8817cb5c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8817cb5c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8817cb5c

Branch: refs/heads/YARN-5355
Commit: 8817cb5c8424359b880c6d700e53092f0269c1bb
Parents: 3e052db
Author: Haibo Chen 
Authored: Thu May 25 21:15:27 2017 -0700
Committer: Haibo Chen 
Committed: Thu May 25 21:35:58 2017 -0700

--
 .../containermanager/ContainerManagerImpl.java  | 71 +---
 .../application/ApplicationImpl.java| 27 ++--
 .../yarn_server_nodemanager_recovery.proto  |  7 ++
 .../TestContainerManagerRecovery.java   | 40 +--
 4 files changed, 111 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8817cb5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
index 125b046..37dd598 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
@@ -85,6 +85,7 @@ import org.apache.hadoop.yarn.ipc.RPCUtil;
 import org.apache.hadoop.yarn.ipc.YarnRPC;
 import org.apache.hadoop.yarn.proto.YarnProtos.ApplicationACLMapProto;
 import 
org.apache.hadoop.yarn.proto.YarnServerNodemanagerRecoveryProtos.ContainerManagerApplicationProto;
+import 
org.apache.hadoop.yarn.proto.YarnServerNodemanagerRecoveryProtos.FlowContextProto;
 import org.apache.hadoop.yarn.security.ContainerTokenIdentifier;
 import org.apache.hadoop.yarn.security.NMTokenIdentifier;
 import org.apache.hadoop.yarn.server.api.ContainerType;
@@ -384,10 +385,20 @@ public class ContainerManagerImpl extends CompositeService implements
           new LogAggregationContextPBImpl(p.getLogAggregationContext());
     }
 
+    FlowContext fc = null;
+    if (p.getFlowContext() != null) {
+      FlowContextProto fcp = p.getFlowContext();
+      fc = new FlowContext(fcp.getFlowName(), fcp.getFlowVersion(),
+          fcp.getFlowRunId());
+      if (LOG.isDebugEnabled()) {
+        LOG.debug(
+            "Recovering Flow context: " + fc + " for an application " + appId);
+      }
+    }
+
     LOG.info("Recovering application " + appId);
-    //TODO: Recover flow and flow run ID
-    ApplicationImpl app = new ApplicationImpl(dispatcher, p.getUser(), appId,
-        creds, context, p.getAppLogAggregationInitedTime());
+    ApplicationImpl app = new ApplicationImpl(dispatcher, p.getUser(), fc,
+        appId, creds, context, p.getAppLogAggregationInitedTime());
     context.getApplications().put(appId, app);
     app.handle(new ApplicationInitEvent(appId, acls, logAggregationContext));
   }
@@ -949,7 +960,7 @@ public class ContainerManagerImpl extends CompositeService implements
   private ContainerManagerApplicationProto buildAppProto(ApplicationId appId,
       String user, Credentials credentials,
      Map<ApplicationAccessType, String> appAcls,
-      LogAggregationContext logAggregationContext) {
+      LogAggregationContext logAggregationContext, FlowContext flowContext) {
 
     ContainerManagerApplicationProto.Builder builder =
         ContainerManagerApplicationProto.newBuilder();
@@ -984,6 +995,16 @@ public class ContainerManagerImpl extends CompositeService implements
       }
     }
 
+    builder.clearFlowContext();
+    if (flowContext != null && flowContext.getFlowName() != null
+        && flowContext.getFlowVersion() != null) {
+      FlowContextProto fcp =
+          FlowContextProto.newBuilder().setFlowName(flowContext.getFlowName())
+              .setFlowVersion(flowContext.getFlowVersion())
+              .setFlowRunId(flowContext.getFlowRunId()).build();
+      builder.setFlowContext(fcp);
+    }
+
     return builder.build();
   }
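
The two halves of this patch are a round trip: buildAppProto() serializes the
NM's FlowContext into the state store, and the recovery path rebuilds it after
restart. A condensed sketch of that symmetry (the helper method names here are
hypothetical; the patch inlines this logic at the two sites shown above):

    // Persist: NM FlowContext -> FlowContextProto in the NM state store.
    FlowContextProto toProto(FlowContext fc) {
      return FlowContextProto.newBuilder()
          .setFlowName(fc.getFlowName())
          .setFlowVersion(fc.getFlowVersion())
          .setFlowRunId(fc.getFlowRunId())
          .build();
    }

    // Recover: FlowContextProto -> NM FlowContext after a work-preserving restart.
    FlowContext fromProto(FlowContextProto fcp) {
      return new FlowContext(fcp.getFlowName(), fcp.getFlowVersion(),
          fcp.getFlowRunId());
    }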

hadoop git commit: YARN-6555. Store application flow context in NM state store for work-preserving restart. (Rohith Sharma K S via Haibo Chen)

2017-05-25 Thread haibochen
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2b5ad4876 -> 47474fffa


YARN-6555. Store application flow context in NM state store for work-preserving 
restart. (Rohith Sharma K S via Haibo Chen)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/47474fff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/47474fff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/47474fff

Branch: refs/heads/trunk
Commit: 47474fffac085e0e5ea46336bf80ccd0677017a3
Parents: 2b5ad48
Author: Haibo Chen 
Authored: Thu May 25 21:15:27 2017 -0700
Committer: Haibo Chen 
Committed: Thu May 25 21:15:27 2017 -0700

--
 .../containermanager/ContainerManagerImpl.java  | 71 +---
 .../application/ApplicationImpl.java| 27 ++--
 .../yarn_server_nodemanager_recovery.proto  |  7 ++
 .../TestContainerManagerRecovery.java   | 40 +--
 4 files changed, 111 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/47474fff/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
index f65f1ac..50268b9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
@@ -85,6 +85,7 @@ import org.apache.hadoop.yarn.ipc.RPCUtil;
 import org.apache.hadoop.yarn.ipc.YarnRPC;
 import org.apache.hadoop.yarn.proto.YarnProtos.ApplicationACLMapProto;
 import 
org.apache.hadoop.yarn.proto.YarnServerNodemanagerRecoveryProtos.ContainerManagerApplicationProto;
+import 
org.apache.hadoop.yarn.proto.YarnServerNodemanagerRecoveryProtos.FlowContextProto;
 import org.apache.hadoop.yarn.security.ContainerTokenIdentifier;
 import org.apache.hadoop.yarn.security.NMTokenIdentifier;
 import org.apache.hadoop.yarn.server.api.ContainerType;
@@ -381,10 +382,20 @@ public class ContainerManagerImpl extends CompositeService implements
           new LogAggregationContextPBImpl(p.getLogAggregationContext());
     }
 
+    FlowContext fc = null;
+    if (p.getFlowContext() != null) {
+      FlowContextProto fcp = p.getFlowContext();
+      fc = new FlowContext(fcp.getFlowName(), fcp.getFlowVersion(),
+          fcp.getFlowRunId());
+      if (LOG.isDebugEnabled()) {
+        LOG.debug(
+            "Recovering Flow context: " + fc + " for an application " + appId);
+      }
+    }
+
     LOG.info("Recovering application " + appId);
-    //TODO: Recover flow and flow run ID
-    ApplicationImpl app = new ApplicationImpl(dispatcher, p.getUser(), appId,
-        creds, context, p.getAppLogAggregationInitedTime());
+    ApplicationImpl app = new ApplicationImpl(dispatcher, p.getUser(), fc,
+        appId, creds, context, p.getAppLogAggregationInitedTime());
     context.getApplications().put(appId, app);
     app.handle(new ApplicationInitEvent(appId, acls, logAggregationContext));
   }
@@ -936,7 +947,7 @@ public class ContainerManagerImpl extends CompositeService implements
   private ContainerManagerApplicationProto buildAppProto(ApplicationId appId,
       String user, Credentials credentials,
      Map<ApplicationAccessType, String> appAcls,
-      LogAggregationContext logAggregationContext) {
+      LogAggregationContext logAggregationContext, FlowContext flowContext) {
 
     ContainerManagerApplicationProto.Builder builder =
         ContainerManagerApplicationProto.newBuilder();
@@ -971,6 +982,16 @@ public class ContainerManagerImpl extends CompositeService implements
       }
     }
 
+    builder.clearFlowContext();
+    if (flowContext != null && flowContext.getFlowName() != null
+        && flowContext.getFlowVersion() != null) {
+      FlowContextProto fcp =
+          FlowContextProto.newBuilder().setFlowName(flowContext.getFlowName())
+              .setFlowVersion(flowContext.getFlowVersion())
+              .setFlowRunId(flowContext.getFlowRunId()).build();
+      builder.setFlowContext(fcp);
+    }
+
     return builder.build();
   }
 
@@ -1016,25 +1037,29 @@ public class ContainerManagerImpl extends Composite

hadoop git commit: HDFS-11421. Make WebHDFS' ACLs RegEx configurable. Contributed by Harsh J.

2017-05-25 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2cb63433a -> 54971c419


HDFS-11421. Make WebHDFS' ACLs RegEx configurable. Contributed by Harsh J.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/54971c41
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/54971c41
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/54971c41

Branch: refs/heads/branch-2
Commit: 54971c4195cdd204f93dc6f1b395072541d7393d
Parents: 2cb6343
Author: Xiao Chen 
Authored: Thu May 25 21:01:39 2017 -0700
Committer: Xiao Chen 
Committed: Thu May 25 21:01:39 2017 -0700

--
 .../hdfs/client/HdfsClientConfigKeys.java   |  2 ++
 .../hadoop/hdfs/web/WebHdfsFileSystem.java  |  6 +++-
 .../hdfs/web/resources/AclPermissionParam.java  | 21 ++--
 .../datanode/web/webhdfs/WebHdfsHandler.java| 10 --
 .../server/namenode/NameNodeHttpServer.java |  5 +++
 .../src/main/resources/hdfs-default.xml |  8 +
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 31 +-
 .../hadoop/hdfs/web/resources/TestParam.java| 34 
 8 files changed, 109 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/54971c41/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index f1a4699..1c03f6b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -35,6 +35,8 @@ public interface HdfsClientConfigKeys {
   String  DFS_WEBHDFS_USER_PATTERN_KEY =
   "dfs.webhdfs.user.provider.user.pattern";
   String  DFS_WEBHDFS_USER_PATTERN_DEFAULT = "^[A-Za-z_][A-Za-z0-9._-]*[$]?$";
+  String  DFS_WEBHDFS_ACL_PERMISSION_PATTERN_KEY =
+      "dfs.webhdfs.acl.provider.permission.pattern";
   String DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT =
       "^(default:)?(user|group|mask|other):[[A-Za-z_][A-Za-z0-9._-]]*:([rwx-]{3})?(,(default:)?(user|group|mask|other):[[A-Za-z_][A-Za-z0-9._-]]*:([rwx-]{3})?)*$";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54971c41/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 1c32657..97033ce 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -182,10 +182,14 @@ public class WebHdfsFileSystem extends FileSystem
       ) throws IOException {
     super.initialize(uri, conf);
     setConf(conf);
-    /** set user pattern based on configuration file */
+
+    // set user and acl patterns based on configuration file
     UserParam.setUserPattern(conf.get(
         HdfsClientConfigKeys.DFS_WEBHDFS_USER_PATTERN_KEY,
         HdfsClientConfigKeys.DFS_WEBHDFS_USER_PATTERN_DEFAULT));
+    AclPermissionParam.setAclPermissionPattern(conf.get(
+        HdfsClientConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_KEY,
+        HdfsClientConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT));
 
     boolean isOAuth = conf.getBoolean(
         HdfsClientConfigKeys.DFS_WEBHDFS_OAUTH_ENABLED_KEY,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54971c41/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
index 130c8fd..0771506 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
@@ -24,6 +24,7 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.regex.Pattern;
 
+import com.google.common.annot

hadoop git commit: HDFS-11817. A faulty node can cause a lease leak and NPE on accessing data. Contributed by Kihwal Lee. (Updated TestBlockUnderConstruction for nextGenerationStamp method) (cherry pi

2017-05-25 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 8739c4d2f -> bbef16b84


HDFS-11817. A faulty node can cause a lease leak and NPE on accessing data. 
Contributed by Kihwal Lee.
(Updated TestBlockUnderConstruction for nextGenerationStamp method)
(cherry picked from commit 2cb63433abb21cd2b74bd266b70a682caf9e2d98)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bbef16b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bbef16b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bbef16b8

Branch: refs/heads/branch-2.8
Commit: bbef16b84eb3ef994216530b5e4579101c45675f
Parents: 8739c4d
Author: Kihwal Lee 
Authored: Thu May 25 17:34:56 2017 -0500
Committer: Kihwal Lee 
Committed: Thu May 25 17:34:56 2017 -0500

--
 .../BlockUnderConstructionFeature.java  | 30 ++---
 .../server/blockmanagement/DatanodeManager.java |  3 +-
 .../hdfs/server/namenode/FSDirTruncateOp.java   |  2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  2 +-
 .../hdfs/server/namenode/LeaseManager.java  | 15 +--
 .../TestBlockUnderConstructionFeature.java  |  8 ++--
 .../namenode/TestBlockUnderConstruction.java| 45 
 .../TestCommitBlockSynchronization.java |  2 +-
 .../namenode/ha/TestRetryCacheWithHA.java   |  9 +++-
 9 files changed, 98 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbef16b8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
index ddcdd0f..c43ace7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
@@ -69,11 +69,24 @@ public class BlockUnderConstructionFeature {
 
   /** Set expected locations */
   public void setExpectedLocations(Block block, DatanodeStorageInfo[] targets) {
-    int numLocations = targets == null ? 0 : targets.length;
+    if (targets == null) {
+      return;
+    }
+    int numLocations = 0;
+    for (DatanodeStorageInfo target : targets) {
+      if (target != null) {
+        numLocations++;
+      }
+    }
+
     this.replicas = new ReplicaUnderConstruction[numLocations];
-    for(int i = 0; i < numLocations; i++) {
-      replicas[i] = new ReplicaUnderConstruction(block, targets[i],
-          ReplicaState.RBW);
+    int offset = 0;
+    for(int i = 0; i < targets.length; i++) {
+      // Only store non-null DatanodeStorageInfo.
+      if (targets[i] != null) {
+        replicas[offset++] = new ReplicaUnderConstruction(block,
+            targets[i], ReplicaState.RBW);
+      }
     }
   }
 
@@ -142,10 +155,17 @@ public class BlockUnderConstructionFeature {
    * Initialize lease recovery for this block.
    * Find the first alive data-node starting from the previous primary and
    * make it primary.
+   * @param blockInfo Block to be recovered
+   * @param recoveryId Recovery ID (new gen stamp)
+   * @param startRecovery Issue recovery command to datanode if true.
    */
-  public void initializeBlockRecovery(BlockInfo blockInfo, long recoveryId) {
+  public void initializeBlockRecovery(BlockInfo blockInfo, long recoveryId,
+      boolean startRecovery) {
     setBlockUCState(BlockUCState.UNDER_RECOVERY);
     blockRecoveryId = recoveryId;
+    if (!startRecovery) {
+      return;
+    }
     if (replicas.length == 0) {
       NameNode.blockStateChangeLog.warn("BLOCK*" +
           " BlockUnderConstructionFeature.initializeBlockRecovery:" +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbef16b8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index eeda5b1..4d79e8e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -519,10 +519,11 @@ public class DatanodeManager 

hadoop git commit: HDFS-11817. A faulty node can cause a lease leak and NPE on accessing data. Contributed by Kihwal Lee.

2017-05-25 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 fc6cb4b2d -> 2cb63433a


HDFS-11817. A faulty node can cause a lease leak and NPE on accessing data. 
Contributed by Kihwal Lee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2cb63433
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2cb63433
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2cb63433

Branch: refs/heads/branch-2
Commit: 2cb63433abb21cd2b74bd266b70a682caf9e2d98
Parents: fc6cb4b
Author: Kihwal Lee 
Authored: Thu May 25 17:21:56 2017 -0500
Committer: Kihwal Lee 
Committed: Thu May 25 17:21:56 2017 -0500

--
 .../BlockUnderConstructionFeature.java  | 30 ++---
 .../server/blockmanagement/DatanodeManager.java |  3 +-
 .../hdfs/server/namenode/FSDirTruncateOp.java   |  2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  2 +-
 .../hdfs/server/namenode/LeaseManager.java  | 15 +--
 .../TestBlockUnderConstructionFeature.java  |  8 ++--
 .../namenode/TestBlockUnderConstruction.java| 45 
 .../TestCommitBlockSynchronization.java |  2 +-
 .../namenode/ha/TestRetryCacheWithHA.java   |  9 +++-
 9 files changed, 98 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cb63433/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
index ddcdd0f..c43ace7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
@@ -69,11 +69,24 @@ public class BlockUnderConstructionFeature {
 
   /** Set expected locations */
   public void setExpectedLocations(Block block, DatanodeStorageInfo[] targets) {
-    int numLocations = targets == null ? 0 : targets.length;
+    if (targets == null) {
+      return;
+    }
+    int numLocations = 0;
+    for (DatanodeStorageInfo target : targets) {
+      if (target != null) {
+        numLocations++;
+      }
+    }
+
     this.replicas = new ReplicaUnderConstruction[numLocations];
-    for(int i = 0; i < numLocations; i++) {
-      replicas[i] = new ReplicaUnderConstruction(block, targets[i],
-          ReplicaState.RBW);
+    int offset = 0;
+    for(int i = 0; i < targets.length; i++) {
+      // Only store non-null DatanodeStorageInfo.
+      if (targets[i] != null) {
+        replicas[offset++] = new ReplicaUnderConstruction(block,
+            targets[i], ReplicaState.RBW);
+      }
     }
   }
 
@@ -142,10 +155,17 @@ public class BlockUnderConstructionFeature {
    * Initialize lease recovery for this block.
    * Find the first alive data-node starting from the previous primary and
    * make it primary.
+   * @param blockInfo Block to be recovered
+   * @param recoveryId Recovery ID (new gen stamp)
+   * @param startRecovery Issue recovery command to datanode if true.
    */
-  public void initializeBlockRecovery(BlockInfo blockInfo, long recoveryId) {
+  public void initializeBlockRecovery(BlockInfo blockInfo, long recoveryId,
+      boolean startRecovery) {
     setBlockUCState(BlockUCState.UNDER_RECOVERY);
     blockRecoveryId = recoveryId;
+    if (!startRecovery) {
+      return;
+    }
     if (replicas.length == 0) {
       NameNode.blockStateChangeLog.warn("BLOCK*" +
           " BlockUnderConstructionFeature.initializeBlockRecovery:" +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cb63433/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index dd96181..5eb6760 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -561,10 +561,11 @@ public class DatanodeManager {
   DatanodeID[] datanodeID, String[] storageIDs,
   String format, Object... args) throws UnregisteredNodeException {
 if (data

hadoop git commit: HDFS-11817. A faulty node can cause a lease leak and NPE on accessing data. Contributed by Kihwal Lee.

2017-05-25 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk 87590090c -> 2b5ad4876


HDFS-11817. A faulty node can cause a lease leak and NPE on accessing data. 
Contributed by Kihwal Lee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2b5ad487
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2b5ad487
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2b5ad487

Branch: refs/heads/trunk
Commit: 2b5ad48762587abbcd8bdb50d0ae98f8080d926c
Parents: 8759009
Author: Kihwal Lee 
Authored: Thu May 25 17:17:38 2017 -0500
Committer: Kihwal Lee 
Committed: Thu May 25 17:17:38 2017 -0500

--
 .../BlockUnderConstructionFeature.java  |  9 +++-
 .../server/blockmanagement/DatanodeManager.java |  3 +-
 .../hdfs/server/namenode/FSDirTruncateOp.java   |  2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  2 +-
 .../hdfs/server/namenode/LeaseManager.java  | 15 +--
 .../TestBlockUnderConstructionFeature.java  |  8 ++--
 .../namenode/TestBlockUnderConstruction.java| 45 
 .../TestCommitBlockSynchronization.java |  2 +-
 8 files changed, 73 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b5ad487/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
index 7453184..61390d9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java
@@ -223,10 +223,17 @@ public class BlockUnderConstructionFeature {
    * Initialize lease recovery for this block.
    * Find the first alive data-node starting from the previous primary and
    * make it primary.
+   * @param blockInfo Block to be recovered
+   * @param recoveryId Recovery ID (new gen stamp)
+   * @param startRecovery Issue recovery command to datanode if true.
    */
-  public void initializeBlockRecovery(BlockInfo blockInfo, long recoveryId) {
+  public void initializeBlockRecovery(BlockInfo blockInfo, long recoveryId,
+      boolean startRecovery) {
     setBlockUCState(BlockUCState.UNDER_RECOVERY);
     blockRecoveryId = recoveryId;
+    if (!startRecovery) {
+      return;
+    }
     if (replicas.length == 0) {
       NameNode.blockStateChangeLog.warn("BLOCK*" +
           " BlockUnderConstructionFeature.initializeBlockRecovery:" +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b5ad487/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 7dcc9fd..c303594 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -642,10 +642,11 @@ public class DatanodeManager {
       String format, Object... args) throws UnregisteredNodeException {
     storageIDs = storageIDs == null ? new String[0] : storageIDs;
     if (datanodeID.length != storageIDs.length) {
+      // Error for pre-2.0.0-alpha clients.
       final String err = (storageIDs.length == 0?
           "Missing storageIDs: It is likely that the HDFS client,"
           + " who made this call, is running in an older version of Hadoop"
-          + " which does not support storageIDs."
+          + "(pre-2.0.0-alpha)  which does not support storageIDs."
           : "Length mismatched: storageIDs.length=" + storageIDs.length + " != "
           ) + " datanodeID.length=" + datanodeID.length;
       throw new HadoopIllegalArgumentException(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b5ad487/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main

hadoop git commit: YARN-6582. FSAppAttempt demand can be updated atomically in updateDemand(). (Karthik Kambatla via Yufei Gu)

2017-05-25 Thread yufei
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 4c44ff69d -> fc6cb4b2d


YARN-6582. FSAppAttempt demand can be updated atomically in updateDemand(). 
(Karthik Kambatla via Yufei Gu)

(cherry picked from commit 87590090c887829e874a7132be9cf8de061437d6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fc6cb4b2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fc6cb4b2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fc6cb4b2

Branch: refs/heads/branch-2
Commit: fc6cb4b2dd90400a83e44e6177d83666b16af14f
Parents: 4c44ff6
Author: Yufei Gu 
Authored: Thu May 25 14:22:13 2017 -0700
Committer: Yufei Gu 
Committed: Thu May 25 14:25:49 2017 -0700

--
 .../scheduler/fair/FSAppAttempt.java| 23 +---
 1 file changed, 10 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc6cb4b2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
index c8c0c32..80914cd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
@@ -1284,24 +1284,21 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
 
   @Override
   public void updateDemand() {
-    demand = Resources.createResource(0);
     // Demand is current consumption plus outstanding requests
-    Resources.addTo(demand, getCurrentConsumption());
+    Resource tmpDemand = Resources.clone(getCurrentConsumption());
 
     // Add up outstanding resource requests
-    try {
-      writeLock.lock();
-      for (SchedulerRequestKey k : getSchedulerKeys()) {
-        PendingAsk pendingAsk = getPendingAsk(k, ResourceRequest.ANY);
-        if (pendingAsk.getCount() > 0) {
-          Resources.multiplyAndAddTo(demand,
-              pendingAsk.getPerAllocationResource(),
-              pendingAsk.getCount());
-        }
+    for (SchedulerRequestKey k : getSchedulerKeys()) {
+      PendingAsk pendingAsk = getPendingAsk(k, ResourceRequest.ANY);
+      if (pendingAsk.getCount() > 0) {
+        Resources.multiplyAndAddTo(tmpDemand,
+            pendingAsk.getPerAllocationResource(),
+            pendingAsk.getCount());
       }
-    } finally {
-      writeLock.unlock();
     }
+
+    // Update demand
+    demand = tmpDemand;
   }
 
   @Override
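
The rewrite replaces lock-protected mutation of the shared demand field with a
build-then-publish idiom: all accumulation happens on a private tmpDemand, and
the field is updated by a single reference assignment, which is atomic in
Java. A distilled sketch of the idiom (assuming, as the patch implies, that
readers tolerate a slightly stale demand but must never see a half-built one):

    Resource tmpDemand = Resources.clone(getCurrentConsumption()); // private copy
    // ... accumulate all outstanding asks into tmpDemand only ...
    demand = tmpDemand; // publish once; no reader observes a partial update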





hadoop git commit: YARN-6582. FSAppAttempt demand can be updated atomically in updateDemand(). (Karthik Kambatla via Yufei Gu)

2017-05-25 Thread yufei
Repository: hadoop
Updated Branches:
  refs/heads/trunk 3fd6a2da4 -> 87590090c


YARN-6582. FSAppAttempt demand can be updated atomically in updateDemand(). 
(Karthik Kambatla via Yufei Gu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/87590090
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/87590090
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/87590090

Branch: refs/heads/trunk
Commit: 87590090c887829e874a7132be9cf8de061437d6
Parents: 3fd6a2d
Author: Yufei Gu 
Authored: Thu May 25 14:22:13 2017 -0700
Committer: Yufei Gu 
Committed: Thu May 25 14:22:13 2017 -0700

--
 .../scheduler/fair/FSAppAttempt.java| 23 +---
 1 file changed, 10 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/87590090/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
index 4f7e164..a5772ba 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
@@ -1286,24 +1286,21 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
 
   @Override
   public void updateDemand() {
-    demand = Resources.createResource(0);
     // Demand is current consumption plus outstanding requests
-    Resources.addTo(demand, getCurrentConsumption());
+    Resource tmpDemand = Resources.clone(getCurrentConsumption());
 
     // Add up outstanding resource requests
-    try {
-      writeLock.lock();
-      for (SchedulerRequestKey k : getSchedulerKeys()) {
-        PendingAsk pendingAsk = getPendingAsk(k, ResourceRequest.ANY);
-        if (pendingAsk.getCount() > 0) {
-          Resources.multiplyAndAddTo(demand,
-              pendingAsk.getPerAllocationResource(),
-              pendingAsk.getCount());
-        }
+    for (SchedulerRequestKey k : getSchedulerKeys()) {
+      PendingAsk pendingAsk = getPendingAsk(k, ResourceRequest.ANY);
+      if (pendingAsk.getCount() > 0) {
+        Resources.multiplyAndAddTo(tmpDemand,
+            pendingAsk.getPerAllocationResource(),
+            pendingAsk.getCount());
       }
-    } finally {
-      writeLock.unlock();
     }
+
+    // Update demand
+    demand = tmpDemand;
   }
 
   @Override





hadoop git commit: YARN-6643. TestRMFailover fails rarely due to port conflict. Contributed by Robert Kanter

2017-05-25 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 4f5846f1e -> 8739c4d2f


YARN-6643. TestRMFailover fails rarely due to port conflict. Contributed by 
Robert Kanter

(cherry picked from commit 3fd6a2da4e537423d1462238e10cc9e1f698d1c2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8739c4d2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8739c4d2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8739c4d2

Branch: refs/heads/branch-2.8
Commit: 8739c4d2f864059cf1944515452b448e6c0dc7d9
Parents: 4f5846f
Author: Jason Lowe 
Authored: Thu May 25 16:07:52 2017 -0500
Committer: Jason Lowe 
Committed: Thu May 25 16:09:29 2017 -0500

--
 .../hadoop/yarn/server/resourcemanager/HATestUtil.java  | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8739c4d2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
index 710ce87..ac245c3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
@@ -18,16 +18,19 @@
 package org.apache.hadoop.yarn.server.resourcemanager;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.apache.hadoop.yarn.conf.HAUtil;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 
+import java.io.IOException;
+
 public class HATestUtil {
 
   public static void setRpcAddressForRM(String rmId, int base,
-      Configuration conf) {
+      Configuration conf) throws IOException {
     for (String confKey : YarnConfiguration.getServiceAddressConfKeys(conf)) {
-      setConfForRM(rmId, confKey, "0.0.0.0:" + (base +
-          YarnConfiguration.getRMDefaultPortNumber(confKey, conf)), conf);
+      setConfForRM(rmId, confKey, "0.0.0.0:" + ServerSocketUtil.getPort(base +
+          YarnConfiguration.getRMDefaultPortNumber(confKey, conf), 10), conf);
     }
   }
 





hadoop git commit: YARN-6643. TestRMFailover fails rarely due to port conflict. Contributed by Robert Kanter

2017-05-25 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7bad74809 -> 4c44ff69d


YARN-6643. TestRMFailover fails rarely due to port conflict. Contributed by 
Robert Kanter

(cherry picked from commit 3fd6a2da4e537423d1462238e10cc9e1f698d1c2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4c44ff69
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4c44ff69
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4c44ff69

Branch: refs/heads/branch-2
Commit: 4c44ff69df4979d40dc78a0dbd00de967627643a
Parents: 7bad748
Author: Jason Lowe 
Authored: Thu May 25 16:07:52 2017 -0500
Committer: Jason Lowe 
Committed: Thu May 25 16:09:08 2017 -0500

--
 .../hadoop/yarn/server/resourcemanager/HATestUtil.java  | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4c44ff69/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
index 710ce87..ac245c3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
@@ -18,16 +18,19 @@
 package org.apache.hadoop.yarn.server.resourcemanager;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.apache.hadoop.yarn.conf.HAUtil;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 
+import java.io.IOException;
+
 public class HATestUtil {
 
   public static void setRpcAddressForRM(String rmId, int base,
-      Configuration conf) {
+      Configuration conf) throws IOException {
     for (String confKey : YarnConfiguration.getServiceAddressConfKeys(conf)) {
-      setConfForRM(rmId, confKey, "0.0.0.0:" + (base +
-          YarnConfiguration.getRMDefaultPortNumber(confKey, conf)), conf);
+      setConfForRM(rmId, confKey, "0.0.0.0:" + ServerSocketUtil.getPort(base +
+          YarnConfiguration.getRMDefaultPortNumber(confKey, conf), 10), conf);
     }
   }
 





hadoop git commit: YARN-6643. TestRMFailover fails rarely due to port conflict. Contributed by Robert Kanter

2017-05-25 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 116156313 -> 3fd6a2da4


YARN-6643. TestRMFailover fails rarely due to port conflict. Contributed by 
Robert Kanter


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3fd6a2da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3fd6a2da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3fd6a2da

Branch: refs/heads/trunk
Commit: 3fd6a2da4e537423d1462238e10cc9e1f698d1c2
Parents: 1161563
Author: Jason Lowe 
Authored: Thu May 25 16:07:52 2017 -0500
Committer: Jason Lowe 
Committed: Thu May 25 16:07:52 2017 -0500

--
 .../hadoop/yarn/server/resourcemanager/HATestUtil.java  | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fd6a2da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
index 710ce87..ac245c3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java
@@ -18,16 +18,19 @@
 package org.apache.hadoop.yarn.server.resourcemanager;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.apache.hadoop.yarn.conf.HAUtil;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 
+import java.io.IOException;
+
 public class HATestUtil {
 
   public static void setRpcAddressForRM(String rmId, int base,
-      Configuration conf) {
+      Configuration conf) throws IOException {
     for (String confKey : YarnConfiguration.getServiceAddressConfKeys(conf)) {
-      setConfForRM(rmId, confKey, "0.0.0.0:" + (base +
-          YarnConfiguration.getRMDefaultPortNumber(confKey, conf)), conf);
+      setConfForRM(rmId, confKey, "0.0.0.0:" + ServerSocketUtil.getPort(base +
+          YarnConfiguration.getRMDefaultPortNumber(confKey, conf), 10), conf);
     }
   }
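
ServerSocketUtil.getPort(port, retries) returns the requested port when it is
free and otherwise tries other ports up to the retry budget, so parallel test
runs stop colliding on hard-coded RM ports. A usage sketch (the rmId and base
values are illustrative):

    Configuration conf = new Configuration();
    // May resolve to ports near base + the RM defaults rather than exactly them.
    HATestUtil.setRpcAddressForRM("rm1", 10000, conf);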
 





hadoop git commit: HDFS-11879. Fix JN sync interval in case of exception. Contributed by Hanisha Koneru.

2017-05-25 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/trunk 29b7df960 -> 116156313


HDFS-11879. Fix JN sync interval in case of exception. Contributed by Hanisha 
Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/11615631
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/11615631
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/11615631

Branch: refs/heads/trunk
Commit: 11615631360ba49c1e9d256ed4f65119d99fd67d
Parents: 29b7df9
Author: Arpit Agarwal 
Authored: Thu May 25 14:01:53 2017 -0700
Committer: Arpit Agarwal 
Committed: Thu May 25 14:01:53 2017 -0700

--
 .../hdfs/qjournal/server/JournalNodeSyncer.java | 40 
 1 file changed, 25 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/11615631/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
index 99bd499..479f6a0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
@@ -172,7 +172,6 @@ public class JournalNodeSyncer {
           } else {
             syncJournals();
           }
-          Thread.sleep(journalSyncInterval);
         } catch (Throwable t) {
           if (!shouldSync) {
             if (t instanceof InterruptedException) {
@@ -194,6 +193,17 @@ public class JournalNodeSyncer {
             LOG.error(
                 "JournalNodeSyncer daemon received Runtime exception. ", t);
           }
+          try {
+            Thread.sleep(journalSyncInterval);
+          } catch (InterruptedException e) {
+            if (!shouldSync) {
+              LOG.info("Stopping JournalNode Sync.");
+            } else {
+              LOG.warn("JournalNodeSyncer interrupted", e);
+            }
+            Thread.currentThread().interrupt();
+            return;
+          }
         }
       });
     syncJournalDaemon.start();
@@ -320,30 +330,30 @@ public class JournalNodeSyncer {
 
     List<RemoteEditLog> missingEditLogs = Lists.newArrayList();
 
-    int thisJnIndex = 0, otherJnIndex = 0;
-    int thisJnNumLogs = thisJournalEditLogs.size();
-    int otherJnNumLogs = otherJournalEditLogs.size();
+    int localJnIndex = 0, remoteJnIndex = 0;
+    int localJnNumLogs = thisJournalEditLogs.size();
+    int remoteJnNumLogs = otherJournalEditLogs.size();
 
-    while (thisJnIndex < thisJnNumLogs && otherJnIndex < otherJnNumLogs) {
-      long localJNstartTxId = thisJournalEditLogs.get(thisJnIndex)
+    while (localJnIndex < localJnNumLogs && remoteJnIndex < remoteJnNumLogs) {
+      long localJNstartTxId = thisJournalEditLogs.get(localJnIndex)
           .getStartTxId();
-      long remoteJNstartTxId = otherJournalEditLogs.get(otherJnIndex)
+      long remoteJNstartTxId = otherJournalEditLogs.get(remoteJnIndex)
           .getStartTxId();
 
       if (localJNstartTxId == remoteJNstartTxId) {
-        thisJnIndex++;
-        otherJnIndex++;
+        localJnIndex++;
+        remoteJnIndex++;
       } else if (localJNstartTxId > remoteJNstartTxId) {
-        missingEditLogs.add(otherJournalEditLogs.get(otherJnIndex));
-        otherJnIndex++;
+        missingEditLogs.add(otherJournalEditLogs.get(remoteJnIndex));
+        remoteJnIndex++;
       } else {
-        thisJnIndex++;
+        localJnIndex++;
       }
     }
 
-    if (otherJnIndex < otherJnNumLogs) {
-      for (; otherJnIndex < otherJnNumLogs; otherJnIndex++) {
-        missingEditLogs.add(otherJournalEditLogs.get(otherJnIndex));
+    if (remoteJnIndex < remoteJnNumLogs) {
+      for (; remoteJnIndex < remoteJnNumLogs; remoteJnIndex++) {
+        missingEditLogs.add(otherJournalEditLogs.get(remoteJnIndex));
       }
     }
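
Net effect of the first two hunks: the sleep is no longer inside the try block
whose failure paths previously skipped it, so the syncer now backs off on every
iteration, success or failure, instead of hot-looping on a persistent error. A
simplified shape of the resulting daemon loop (a sketch with details elided,
not the exact committed code):

    while (shouldSync) {
      try {
        syncJournals();                      // may throw
      } catch (Throwable t) {
        LOG.error("sync failed", t);         // log and keep the daemon alive
      }
      try {
        Thread.sleep(journalSyncInterval);   // interval now runs on every path
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // preserve the interrupt status
        return;                              // exit cleanly on shutdown
      }
    }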
 





[2/3] hadoop git commit: YARN-6613. Update json validation for new native services providers. Contributed by Billie Rinaldi

2017-05-25 Thread jianhe
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c49ca75/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/java/org/apache/slider/common/tools/TestMiscSliderUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/java/org/apache/slider/common/tools/TestMiscSliderUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/java/org/apache/slider/common/tools/TestMiscSliderUtils.java
deleted file mode 100644
index bf6ee2c..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/java/org/apache/slider/common/tools/TestMiscSliderUtils.java
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.slider.common.tools;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.slider.utils.SliderTestBase;
-import org.junit.Test;
-
-import java.net.URI;
-
-/**
- * Test slider utils.
- */
-public class TestMiscSliderUtils extends SliderTestBase {
-
-
-  public static final String CLUSTER1 = "cluster1";
-
-  @Test
-  public void testPurgeTempDir() throws Throwable {
-
-Configuration configuration = new Configuration();
-FileSystem fs = FileSystem.get(new URI("file:///"), configuration);
-SliderFileSystem sliderFileSystem = new SliderFileSystem(fs, 
configuration);
-Path inst = sliderFileSystem.createAppInstanceTempPath(CLUSTER1, "001");
-
-assertTrue(fs.exists(inst));
-sliderFileSystem.purgeAppInstanceTempFiles(CLUSTER1);
-assertFalse(fs.exists(inst));
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c49ca75/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/java/org/apache/slider/core/conf/ExampleAppJson.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/java/org/apache/slider/core/conf/ExampleAppJson.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/java/org/apache/slider/core/conf/ExampleAppJson.java
new file mode 100644
index 000..1700771
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/java/org/apache/slider/core/conf/ExampleAppJson.java
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.slider.core.conf;
+
+import org.apache.slider.api.resource.Application;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.apache.slider.utils.SliderTestUtils.JSON_SER_DESER;
+
+/**
+ * Names of the example configs.
+ */
+public final class ExampleAppJson {
+
+  public static final String APP_JSON = "app.json";
+  public static final String OVERRIDE_JSON = "app-override.json";
+  public static final String DEFAULT_JSON = "default.json";
+  public static final String EXTERNAL_JSON_0 = "external0.json";
+  public static final String EXTERNAL_JSON_1 = "external1.json";
+

[1/3] hadoop git commit: YARN-6613. Update json validation for new native services providers. Contributed by Billie Rinaldi

2017-05-25 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/yarn-native-services 8c3b3db41 -> 9c49ca75f


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c49ca75/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/default.json
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/default.json
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/default.json
new file mode 100644
index 000..16f0efc
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/default.json
@@ -0,0 +1,16 @@
+{
+  "name": "default-app-1",
+  "lifetime": "3600",
+  "components" :
+  [
+{
+  "name": "SLEEP",
+  "number_of_containers": 1,
+  "launch_command": "sleep 3600",
+  "resource": {
+"cpus": 2,
+"memory": "256"
+  }
+}
+  ]
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c49ca75/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external0.json
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external0.json
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external0.json
new file mode 100644
index 000..1f9dfeb
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external0.json
@@ -0,0 +1,8 @@
+{
+  "name": "external-0",
+  "lifetime": "3600",
+  "artifact": {
+"type": "APPLICATION",
+"id": "app-1"
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c49ca75/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external1.json
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external1.json
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external1.json
new file mode 100644
index 000..03ebce5
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external1.json
@@ -0,0 +1,30 @@
+{
+  "name": "external-1",
+  "lifetime": "3600",
+  "components": [
+{
+  "name": "simple",
+  "artifact": {
+"type": "APPLICATION",
+"id": "app-1"
+  }
+},
+{
+  "name": "master",
+  "configuration": {
+"properties": {
+  "g3": "is-overridden"
+}
+  }
+},
+{
+  "name": "other",
+  "launch_command": "sleep 3600",
+  "number_of_containers": 2,
+  "resource": {
+"cpus": 1,
+"memory": "512"
+  }
+}
+  ]
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c49ca75/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external2.json
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external2.json
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external2.json
new file mode 100644
index 000..9e61fba
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/test/resources/org/apache/slider/core/conf/examples/external2.json
@@ -0,0 +1,22 @@
+{
+  "name": "external-2",
+  "lifetime": "3600",
+  "components": [
+{
+  "name": "ext",
+  "artifact": {
+"type": "APPLICATION",
+"id": "external-1"
+  }
+},
+{
+  "name": "another",
+  "launch_command": "sleep 3600",
+  "number_of_containers": 1,
+  "resource": 

[3/3] hadoop git commit: YARN-6613. Update json validation for new native services providers. Contributed by Billie Rinaldi

2017-05-25 Thread jianhe
YARN-6613. Update json validation for new native services providers. 
Contributed by Billie Rinaldi


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9c49ca75
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9c49ca75
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9c49ca75

Branch: refs/heads/yarn-native-services
Commit: 9c49ca75fc46b502ce97e5780bdbf7cb0ae16d95
Parents: 8c3b3db
Author: Jian He 
Authored: Thu May 25 12:47:19 2017 -0700
Committer: Jian He 
Committed: Thu May 25 12:47:19 2017 -0700

--
 .../hadoop-yarn-services-api/pom.xml|  57 +--
 ...RN-Simplified-V1-API-Layer-For-Services.yaml |  12 +-
 .../api/impl/TestApplicationApiService.java | 209 --
 .../apache/slider/api/resource/Application.java |   4 +
 .../apache/slider/api/resource/Component.java   |  41 +-
 .../slider/api/resource/Configuration.java  |  10 +-
 .../org/apache/slider/client/SliderClient.java  | 109 ++---
 .../slider/client/SliderYarnClientImpl.java |  61 +++
 .../org/apache/slider/common/SliderKeys.java| 128 +-
 .../slider/common/tools/CoreFileSystem.java |  64 ---
 .../apache/slider/common/tools/SliderUtils.java |  14 +-
 .../slider/core/persist/InstancePaths.java  |  58 ---
 .../providers/AbstractClientProvider.java   |  51 ++-
 .../slider/providers/SliderProviderFactory.java |  12 +-
 .../tarball/TarballProviderService.java |   2 +-
 .../server/appmaster/SliderAppMaster.java   |  18 +-
 .../slider/util/RestApiErrorMessages.java   |   4 +-
 .../org/apache/slider/util/ServiceApiUtil.java  | 273 +++--
 .../slider/client/TestKeytabCommandOptions.java |  11 +-
 .../common/tools/TestMiscSliderUtils.java   |  49 ---
 .../apache/slider/core/conf/ExampleAppJson.java |  64 +++
 .../slider/core/conf/ExampleConfResources.java  |  58 ---
 .../core/conf/TestConfTreeLoadExamples.java |  64 ---
 .../core/conf/TestConfigurationResolve.java | 146 ++-
 .../slider/core/conf/TestExampleAppJson.java|  79 
 .../providers/TestAbstractClientProvider.java   | 121 ++
 .../TestBuildApplicationComponent.java  |  96 +
 .../slider/providers/TestDefaultProvider.java   |  60 +++
 .../model/appstate/BaseMockAppStateAATest.java  |   2 +-
 .../appstate/TestMockAppStateAAPlacement.java   |   2 +-
 .../TestMockAppStateContainerFailure.java   |   2 +-
 .../TestMockAppStateFlexDynamicRoles.java   |   5 +-
 .../TestMockAppStateRebuildOnAMRestart.java |   2 +-
 .../appstate/TestMockAppStateUniqueNames.java   |   3 +-
 .../TestMockContainerResourceAllocations.java   |   2 +-
 .../model/mock/BaseMockAppStateTest.java|  14 +-
 .../appmaster/model/mock/MockFactory.java   |   3 +
 .../apache/slider/utils/TestServiceApiUtil.java | 393 +++
 .../slider/utils/YarnMiniClusterTestBase.java   |  99 +++--
 .../slider/utils/YarnZKMiniClusterTestBase.java |   5 +-
 .../conf/examples/app-override-resolved.json|  49 ---
 .../slider/core/conf/examples/app-override.json |  33 +-
 .../slider/core/conf/examples/app-resolved.json |  81 
 .../apache/slider/core/conf/examples/app.json   |  13 +-
 .../slider/core/conf/examples/default.json  |  16 +
 .../slider/core/conf/examples/external0.json|   8 +
 .../slider/core/conf/examples/external1.json|  30 ++
 .../slider/core/conf/examples/external2.json|  22 ++
 48 files changed, 1539 insertions(+), 1120 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c49ca75/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/pom.xml
index 4e88aef..bc714db 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/pom.xml
@@ -28,11 +28,6 @@
   jar
   Hadoop YARN REST APIs for services
 
-  
-false
-1.6.5
-  
-
   
 
 
@@ -81,30 +76,10 @@
   
 org.apache.maven.plugins
 maven-surefire-plugin
-${maven-surefire-plugin.version}
 
-  ${test.reuseForks}
-  ${test.forkMode}
-  1
-  ${test.forkedProcessTimeoutInSeconds}
-  
-  1
-  ${test.argLine}
-  ${test.failIfNoTests}
-  
${build.redirect.test.output.to.file}
   
-${test.env.path}
+${java.home}
   
-  
-true
-true
-  
-  
-**/Test*.java
-  
-

hadoop git commit: HDFS-11856. Ability to re-add Upgrading Nodes to pipeline for future pipeline updates. Contributed by Vinayakumar B.

2017-05-25 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4fb41b31d -> 29b7df960


HDFS-11856. Ability to re-add Upgrading Nodes to pipeline for future pipeline 
updates. Contributed by Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/29b7df96
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/29b7df96
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/29b7df96

Branch: refs/heads/trunk
Commit: 29b7df960fc3d0a7d1416225c3106c7d4222f0ca
Parents: 4fb41b3
Author: Kihwal Lee 
Authored: Thu May 25 13:04:09 2017 -0500
Committer: Kihwal Lee 
Committed: Thu May 25 13:05:23 2017 -0500

--
 .../hadoop/hdfs/DFSClientFaultInjector.java |  4 +
 .../org/apache/hadoop/hdfs/DataStreamer.java| 70 +++
 .../hdfs/server/datanode/BlockReceiver.java |  6 +-
 .../server/datanode/fsdataset/FsDatasetSpi.java |  2 +-
 .../impl/FsDatasetAsyncDiskService.java | 14 ++-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 85 --
 .../TestClientProtocolForPipelineRecovery.java  | 92 
 .../server/datanode/SimulatedFSDataset.java |  6 +-
 .../server/datanode/TestSimulatedFSDataset.java |  2 +-
 .../extdataset/ExternalDatasetImpl.java |  3 +-
 .../fsdataset/impl/TestWriteToReplica.java  | 20 +++--
 11 files changed, 241 insertions(+), 63 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/29b7df96/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
index 4eb4c52..748edcd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
@@ -57,4 +57,8 @@ public class DFSClientFaultInjector {
   public void fetchFromDatanodeException() {}
 
   public void readFromDatanodeDelay() {}
+
+  public boolean skipRollingRestartWait() {
+return false;
+  }
 }

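The new skipRollingRestartWait() hook follows the usual fault-injector shape: a no-op
method on a swappable singleton that tests override to force a specific code path,
with zero behavior change in production. A minimal sketch of the pattern; the names
here are illustrative, not the HDFS class itself:

  class FaultInjector {
    private static FaultInjector instance = new FaultInjector();

    static FaultInjector get() { return instance; }
    static void set(FaultInjector injector) { instance = injector; } // test hook

    // Default implementation takes the normal path.
    boolean skipRollingRestartWait() { return false; }
  }

  // In a test, swap in an override to force the branch under test:
  //   FaultInjector.set(new FaultInjector() {
  //     @Override boolean skipRollingRestartWait() { return true; }
  //   });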
http://git-wip-us.apache.org/repos/asf/hadoop/blob/29b7df96/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 49c17b9..f5ce0ff 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -327,6 +327,7 @@ class DataStreamer extends Daemon {
   static class ErrorState {
 ErrorType error = ErrorType.NONE;
 private int badNodeIndex = -1;
+private boolean waitForRestart = true;
 private int restartingNodeIndex = -1;
 private long restartingNodeDeadline = 0;
 private final long datanodeRestartTimeout;
@@ -342,6 +343,7 @@ class DataStreamer extends Daemon {
   badNodeIndex = -1;
   restartingNodeIndex = -1;
   restartingNodeDeadline = 0;
+  waitForRestart = true;
 }
 
 synchronized void reset() {
@@ -349,6 +351,7 @@ class DataStreamer extends Daemon {
   badNodeIndex = -1;
   restartingNodeIndex = -1;
   restartingNodeDeadline = 0;
+  waitForRestart = true;
 }
 
 synchronized boolean hasInternalError() {
@@ -389,14 +392,19 @@ class DataStreamer extends Daemon {
   return restartingNodeIndex;
 }
 
-synchronized void initRestartingNode(int i, String message) {
+synchronized void initRestartingNode(int i, String message,
+boolean shouldWait) {
   restartingNodeIndex = i;
-  restartingNodeDeadline =  Time.monotonicNow() + datanodeRestartTimeout;
-  // If the data streamer has already set the primary node
-  // bad, clear it. It is likely that the write failed due to
-  // the DN shutdown. Even if it was a real failure, the pipeline
-  // recovery will take care of it.
-  badNodeIndex = -1;
+  if (shouldWait) {
+restartingNodeDeadline = Time.monotonicNow() + datanodeRestartTimeout;
+// If the data streamer has already set the primary node
+// bad, clear it. It is likely that the write failed due to
+// the DN shutdown. Even if it was a real failure, the pipeline
+// recovery will take care of it.
+badNodeIndex = -1;
+  } else {

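The ErrorState change above arms the restart deadline only when the client should
actually wait for the datanode to come back. A sketch of that conditional-deadline
pattern against a monotonic clock; the class and field names are stand-ins for the
real DataStreamer internals, and Hadoop's Time.monotonicNow() is approximated here
with System.nanoTime():

  final class RestartWaitTracker {
    private int restartingNodeIndex = -1;
    private long restartingNodeDeadlineMs = 0;
    private final long datanodeRestartTimeoutMs;

    RestartWaitTracker(long timeoutMs) { this.datanodeRestartTimeoutMs = timeoutMs; }

    // Monotonic milliseconds: immune to wall-clock adjustments while waiting.
    private static long nowMs() { return System.nanoTime() / 1_000_000L; }

    synchronized void initRestartingNode(int index, boolean shouldWait) {
      restartingNodeIndex = index;
      if (shouldWait) {
        restartingNodeDeadlineMs = nowMs() + datanodeRestartTimeoutMs;
      }
      // When shouldWait is false, no deadline is armed and pipeline
      // recovery proceeds without waiting for the node to restart.
    }

    synchronized boolean withinDeadline() {
      return restartingNodeIndex >= 0 && nowMs() < restartingNodeDeadlineMs;
    }
  }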
hadoop git commit: HDFS-11878. Fix journal missing log httpServerUrl address in JournalNodeSyncer. Contributed by Hanisha Koneru.

2017-05-25 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2e41f8803 -> 4fb41b31d


HDFS-11878. Fix journal missing log httpServerUrl address in JournalNodeSyncer. 
Contributed by Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4fb41b31
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4fb41b31
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4fb41b31

Branch: refs/heads/trunk
Commit: 4fb41b31dbc109f11898ea6d8fc0bb3e6c20d89b
Parents: 2e41f88
Author: Arpit Agarwal 
Authored: Thu May 25 10:42:24 2017 -0700
Committer: Arpit Agarwal 
Committed: Thu May 25 10:42:24 2017 -0700

--
 .../hadoop/hdfs/qjournal/server/JournalNodeSyncer.java| 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4fb41b31/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
index 73defc2..99bd499 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
@@ -285,14 +285,8 @@ public class JournalNodeSyncer {
 boolean success = false;
 try {
   if (remoteJNproxy.httpServerUrl == null) {
-if (response.hasFromURL()) {
-  URI uri = URI.create(response.getFromURL());
-  remoteJNproxy.httpServerUrl = getHttpServerURI(uri.getScheme(),
-  uri.getHost(), uri.getPort());
-} else {
-  remoteJNproxy.httpServerUrl = getHttpServerURI("http",
-  remoteJNproxy.jnAddr.getHostName(), response.getHttpPort());
-}
+remoteJNproxy.httpServerUrl = getHttpServerURI("http",
+remoteJNproxy.jnAddr.getHostName(), response.getHttpPort());
   }
 
   String urlPath = GetJournalEditServlet.buildPath(jid, missingLog

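With the fromURL branch removed, the remote JournalNode's HTTP address is always
derived from its hostname plus the HTTP port reported in the RPC response. A sketch
of that derivation; this helper is a stand-in for the private getHttpServerURI call,
assuming it builds a plain scheme://host:port URI:

  import java.net.URI;
  import java.net.URISyntaxException;

  final class JournalHttpUrls {
    static String httpServerUrl(String scheme, String hostname, int port)
        throws URISyntaxException {
      // e.g. httpServerUrl("http", "jn2.example.com", 8480)
      //   -> "http://jn2.example.com:8480"
      return new URI(scheme, null, hostname, port, null, null, null).toString();
    }
  }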




hadoop git commit: HDFS-11445. FSCK shows overall health status as corrupt even one replica is corrupt. Contributed by Brahma Reddy Battula.

2017-05-25 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 f225b5514 -> 724a5f3db


HDFS-11445. FSCK shows overall health status as corrupt even one replica is 
corrupt. Contributed by Brahma Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/724a5f3d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/724a5f3d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/724a5f3d

Branch: refs/heads/branch-2.7
Commit: 724a5f3dbc26ce03d8cea90880d30dc7481da581
Parents: f225b55
Author: Brahma Reddy Battula 
Authored: Thu May 25 23:00:35 2017 +0800
Committer: Brahma Reddy Battula 
Committed: Thu May 25 23:00:35 2017 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../BlockInfoContiguousUnderConstruction.java   | 22 ++-
 .../server/blockmanagement/BlockManager.java| 32 +--
 .../hdfs/server/namenode/FSNamesystem.java  |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 41 
 5 files changed, 86 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/724a5f3d/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 79d241b..9ca3b9a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -322,6 +322,9 @@ Release 2.7.4 - UNRELEASED
 
 HADOOP-13026 Should not wrap IOExceptions into a AuthenticationException in
 KerberosAuthenticator. Xuan Gong via stevel
+
+HDFS-11445. FSCK shows overall health status as corrupt even one replica is corrupt. 
+(Brahma Reddy Battula)
 
 Release 2.7.3 - 2016-08-25
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/724a5f3d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
index 4f315c7..1342c84 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
@@ -87,7 +87,7 @@ public class BlockInfoContiguousUnderConstruction extends BlockInfoContiguous {
  * It is not guaranteed, but expected, that the data-node actually has
  * the replica.
  */
-private DatanodeStorageInfo getExpectedStorageLocation() {
+public DatanodeStorageInfo getExpectedStorageLocation() {
   return expectedLocation;
 }
 
@@ -245,38 +245,40 @@ public class BlockInfoContiguousUnderConstruction extends BlockInfoContiguous {
* Process the recorded replicas. When about to commit or finish the
* pipeline recovery sort out bad replicas.
* @param genStamp  The final generation stamp for the block.
+   * @return staleReplica's List.
*/
-  public void setGenerationStampAndVerifyReplicas(long genStamp) {
+  public List<ReplicaUnderConstruction> setGenerationStampAndVerifyReplicas(
+  long genStamp) {
 // Set the generation stamp for the block.
 setGenerationStamp(genStamp);
 if (replicas == null)
-  return;
+  return null;
 
-// Remove the replicas with wrong gen stamp.
-// The replica list is unchanged.
+List<ReplicaUnderConstruction> staleReplicas = new ArrayList<>();
+// Remove replicas with wrong gen stamp. The replica list is unchanged.
 for (ReplicaUnderConstruction r : replicas) {
   if (genStamp != r.getGenerationStamp()) {
-r.getExpectedStorageLocation().removeBlock(this);
-NameNode.blockStateChangeLog.info("BLOCK* Removing stale replica "
-+ "from location: {}", r.getExpectedStorageLocation());
+staleReplicas.add(r);
   }
 }
+return staleReplicas;
   }
 
   /**
* Commit block's length and generation stamp as reported by the client.
* Set block state to {@link BlockUCState#COMMITTED}.
* @param block - contains client reported block length and generation 
+   * @return staleReplica's List.
* @throws IOException if block ids are inconsistent.
*/
-  void commitBlock(Block block) throws IOException {
+  List<ReplicaUnderConstruction> commitBlock(Block block) throws IOException {
 if(getBlockId() != block.getBlockId())
   throw new IOException("Trying to commit inconsistent block: id = "
   + block.get

[1/3] hadoop git commit: HDFS-11445. FSCK shows overall health status as corrupt even one replica is corrupt. Contributed by Brahma Reddy Battula.

2017-05-25 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1efba62fd -> 7bad74809
  refs/heads/branch-2.8 c1f3cd765 -> 4f5846f1e
  refs/heads/branch-2.8.1 3bf038446 -> e2a817a9a


HDFS-11445. FSCK shows overall health status as corrupt even one replica is 
corrupt. Contributed by Brahma Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7bad7480
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7bad7480
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7bad7480

Branch: refs/heads/branch-2
Commit: 7bad748091b6a5f90e919c781a915e13000e1be6
Parents: 1efba62
Author: Brahma Reddy Battula 
Authored: Thu May 25 22:38:12 2017 +0800
Committer: Brahma Reddy Battula 
Committed: Thu May 25 22:38:12 2017 +0800

--
 .../hdfs/server/blockmanagement/BlockInfo.java  | 18 +--
 .../server/blockmanagement/BlockManager.java| 27 +++--
 .../hdfs/server/namenode/FSNamesystem.java  |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 32 
 4 files changed, 64 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7bad7480/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index 8dfb80b..7f945c0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -25,7 +25,6 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
-import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.util.LightWeightGSet;
 
 import static org.apache.hadoop.hdfs.server.namenode.INodeId.INVALID_INODE_ID;
@@ -391,28 +390,25 @@ public abstract class BlockInfo extends Block
* Process the recorded replicas. When about to commit or finish the
* pipeline recovery sort out bad replicas.
* @param genStamp  The final generation stamp for the block.
+   * @return staleReplica's List.
*/
-  public void setGenerationStampAndVerifyReplicas(long genStamp) {
+  public List<ReplicaUnderConstruction> setGenerationStampAndVerifyReplicas(
+  long genStamp) {
 Preconditions.checkState(uc != null && !isComplete());
 // Set the generation stamp for the block.
 setGenerationStamp(genStamp);
 
-// Remove the replicas with wrong gen stamp
-List<ReplicaUnderConstruction> staleReplicas = uc.getStaleReplicas(genStamp);
-for (ReplicaUnderConstruction r : staleReplicas) {
-  r.getExpectedStorageLocation().removeBlock(this);
-  NameNode.blockStateChangeLog.debug("BLOCK* Removing stale replica {}"
-  + " of {}", r, Block.toString(r));
-}
+return uc.getStaleReplicas(genStamp);
   }
 
   /**
* Commit block's length and generation stamp as reported by the client.
* Set block state to {@link BlockUCState#COMMITTED}.
* @param block - contains client reported block length and generation
+   * @return staleReplica's List.
* @throws IOException if block ids are inconsistent.
*/
-  void commitBlock(Block block) throws IOException {
+  List<ReplicaUnderConstruction> commitBlock(Block block) throws IOException {
 if (getBlockId() != block.getBlockId()) {
   throw new IOException("Trying to commit inconsistent block: id = "
   + block.getBlockId() + ", expected id = " + getBlockId());
@@ -421,6 +417,6 @@ public abstract class BlockInfo extends Block
 uc.commit();
 this.setNumBytes(block.getNumBytes());
 // Sort out invalid replicas.
-setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
+return setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7bad7480/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 2aaad73..5fe285e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-p

[3/3] hadoop git commit: HDFS-11445. FSCK shows overall health status as corrupt even one replica is corrupt. Contributed by Brahma Reddy Battula.

2017-05-25 Thread brahma
HDFS-11445. FSCK shows overall health status as corrupt even one replica is 
corrupt. Contributed by Brahma Reddy Battula.

(cherry picked from commit 7bad748091b6a5f90e919c781a915e13000e1be6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e2a817a9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e2a817a9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e2a817a9

Branch: refs/heads/branch-2.8.1
Commit: e2a817a9a03ab421b8766c299ebd3a292751bba5
Parents: 3bf0384
Author: Brahma Reddy Battula 
Authored: Thu May 25 22:38:12 2017 +0800
Committer: Brahma Reddy Battula 
Committed: Thu May 25 22:54:24 2017 +0800

--
 .../hdfs/server/blockmanagement/BlockInfo.java  | 18 +--
 .../server/blockmanagement/BlockManager.java| 27 ++--
 .../hdfs/server/namenode/FSNamesystem.java  |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 33 
 4 files changed, 65 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a817a9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index 9d6dc6a..d62abee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -25,7 +25,6 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
-import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.util.LightWeightGSet;
 
 import static org.apache.hadoop.hdfs.server.namenode.INodeId.INVALID_INODE_ID;
@@ -381,28 +380,25 @@ public abstract class BlockInfo extends Block
* Process the recorded replicas. When about to commit or finish the
* pipeline recovery sort out bad replicas.
* @param genStamp  The final generation stamp for the block.
+   * @return staleReplica's List.
*/
-  public void setGenerationStampAndVerifyReplicas(long genStamp) {
+  public List<ReplicaUnderConstruction> setGenerationStampAndVerifyReplicas(
+  long genStamp) {
 Preconditions.checkState(uc != null && !isComplete());
 // Set the generation stamp for the block.
 setGenerationStamp(genStamp);
 
-// Remove the replicas with wrong gen stamp
-List<ReplicaUnderConstruction> staleReplicas = uc.getStaleReplicas(genStamp);
-for (ReplicaUnderConstruction r : staleReplicas) {
-  r.getExpectedStorageLocation().removeBlock(this);
-  NameNode.blockStateChangeLog.debug("BLOCK* Removing stale replica {}"
-  + " of {}", r, Block.toString(r));
-}
+return uc.getStaleReplicas(genStamp);
   }
 
   /**
* Commit block's length and generation stamp as reported by the client.
* Set block state to {@link BlockUCState#COMMITTED}.
* @param block - contains client reported block length and generation
+   * @return staleReplica's List.
* @throws IOException if block ids are inconsistent.
*/
-  void commitBlock(Block block) throws IOException {
+  List<ReplicaUnderConstruction> commitBlock(Block block) throws IOException {
 if (getBlockId() != block.getBlockId()) {
   throw new IOException("Trying to commit inconsistent block: id = "
   + block.getBlockId() + ", expected id = " + getBlockId());
@@ -411,6 +407,6 @@ public abstract class BlockInfo extends Block
 uc.commit();
 this.setNumBytes(block.getNumBytes());
 // Sort out invalid replicas.
-setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
+return setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a817a9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index df9aa32..1170af1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -699,7 

[2/3] hadoop git commit: HDFS-11445. FSCK shows overall health status as corrupt even one replica is corrupt. Contributed by Brahma Reddy Battula.

2017-05-25 Thread brahma
HDFS-11445. FSCK shows overall health status as corrupt even one replica is 
corrupt. Contributed by Brahma Reddy Battula.

(cherry picked from commit 7bad748091b6a5f90e919c781a915e13000e1be6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4f5846f1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4f5846f1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4f5846f1

Branch: refs/heads/branch-2.8
Commit: 4f5846f1e395e22dd7a8ebe99b541a545d0883d4
Parents: c1f3cd7
Author: Brahma Reddy Battula 
Authored: Thu May 25 22:38:12 2017 +0800
Committer: Brahma Reddy Battula 
Committed: Thu May 25 22:40:11 2017 +0800

--
 .../hdfs/server/blockmanagement/BlockInfo.java  | 18 +--
 .../server/blockmanagement/BlockManager.java| 27 +++--
 .../hdfs/server/namenode/FSNamesystem.java  |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 32 
 4 files changed, 64 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f5846f1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index 9d6dc6a..d62abee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -25,7 +25,6 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
-import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.util.LightWeightGSet;
 
 import static org.apache.hadoop.hdfs.server.namenode.INodeId.INVALID_INODE_ID;
@@ -381,28 +380,25 @@ public abstract class BlockInfo extends Block
* Process the recorded replicas. When about to commit or finish the
* pipeline recovery sort out bad replicas.
* @param genStamp  The final generation stamp for the block.
+   * @return staleReplica's List.
*/
-  public void setGenerationStampAndVerifyReplicas(long genStamp) {
+  public List<ReplicaUnderConstruction> setGenerationStampAndVerifyReplicas(
+  long genStamp) {
 Preconditions.checkState(uc != null && !isComplete());
 // Set the generation stamp for the block.
 setGenerationStamp(genStamp);
 
-// Remove the replicas with wrong gen stamp
-List<ReplicaUnderConstruction> staleReplicas = uc.getStaleReplicas(genStamp);
-for (ReplicaUnderConstruction r : staleReplicas) {
-  r.getExpectedStorageLocation().removeBlock(this);
-  NameNode.blockStateChangeLog.debug("BLOCK* Removing stale replica {}"
-  + " of {}", r, Block.toString(r));
-}
+return uc.getStaleReplicas(genStamp);
   }
 
   /**
* Commit block's length and generation stamp as reported by the client.
* Set block state to {@link BlockUCState#COMMITTED}.
* @param block - contains client reported block length and generation
+   * @return staleReplica's List.
* @throws IOException if block ids are inconsistent.
*/
-  void commitBlock(Block block) throws IOException {
+  List<ReplicaUnderConstruction> commitBlock(Block block) throws IOException {
 if (getBlockId() != block.getBlockId()) {
   throw new IOException("Trying to commit inconsistent block: id = "
   + block.getBlockId() + ", expected id = " + getBlockId());
@@ -411,6 +407,6 @@ public abstract class BlockInfo extends Block
 uc.commit();
 this.setNumBytes(block.getNumBytes());
 // Sort out invalid replicas.
-setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
+return setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f5846f1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 7ade4ea..ed00cd1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -699,7 +

hadoop git commit: HDFS-11445. FSCK shows overall health status as corrupt even one replica is corrupt. Contributed by Brahma Reddy Battula.

2017-05-25 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8bf0e2d6b -> 2e41f8803


HDFS-11445. FSCK shows overall health status as corrupt even one replica is 
corrupt. Contributed by Brahma Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e41f880
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e41f880
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e41f880

Branch: refs/heads/trunk
Commit: 2e41f8803dd46d1bab16c1b206c71be72ea260a1
Parents: 8bf0e2d
Author: Brahma Reddy Battula 
Authored: Thu May 25 22:35:10 2017 +0800
Committer: Brahma Reddy Battula 
Committed: Thu May 25 22:35:10 2017 +0800

--
 .../hdfs/server/blockmanagement/BlockInfo.java  | 18 +--
 .../server/blockmanagement/BlockManager.java| 26 ++--
 .../hdfs/server/namenode/FSNamesystem.java  |  3 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 32 
 4 files changed, 63 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e41f880/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index df9cdc3..e9d235c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -27,7 +27,6 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.BlockType;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
-import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.util.LightWeightGSet;
 
 import static org.apache.hadoop.hdfs.server.namenode.INodeId.INVALID_INODE_ID;
@@ -286,28 +285,25 @@ public abstract class BlockInfo extends Block
* Process the recorded replicas. When about to commit or finish the
* pipeline recovery sort out bad replicas.
* @param genStamp  The final generation stamp for the block.
+   * @return staleReplica's List.
*/
-  public void setGenerationStampAndVerifyReplicas(long genStamp) {
+  public List<ReplicaUnderConstruction> setGenerationStampAndVerifyReplicas(
+  long genStamp) {
 Preconditions.checkState(uc != null && !isComplete());
 // Set the generation stamp for the block.
 setGenerationStamp(genStamp);
 
-// Remove the replicas with wrong gen stamp
-List<ReplicaUnderConstruction> staleReplicas = uc.getStaleReplicas(genStamp);
-for (ReplicaUnderConstruction r : staleReplicas) {
-  r.getExpectedStorageLocation().removeBlock(this);
-  NameNode.blockStateChangeLog.debug("BLOCK* Removing stale replica {}"
-  + " of {}", r, Block.toString(r));
-}
+return uc.getStaleReplicas(genStamp);
   }
 
   /**
* Commit block's length and generation stamp as reported by the client.
* Set block state to {@link BlockUCState#COMMITTED}.
* @param block - contains client reported block length and generation
+   * @return staleReplica's List.
* @throws IOException if block ids are inconsistent.
*/
-  void commitBlock(Block block) throws IOException {
+  List<ReplicaUnderConstruction> commitBlock(Block block) throws IOException {
 if (getBlockId() != block.getBlockId()) {
   throw new IOException("Trying to commit inconsistent block: id = "
   + block.getBlockId() + ", expected id = " + getBlockId());
@@ -316,6 +312,6 @@ public abstract class BlockInfo extends Block
 uc.commit();
 this.setNumBytes(block.getNumBytes());
 // Sort out invalid replicas.
-setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
+return setGenerationStampAndVerifyReplicas(block.getGenerationStamp());
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e41f880/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index a9592bf..f0c12cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@

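Across every branch above, the fix follows one pattern: BlockInfo stops removing
stale replicas itself and instead returns them, so BlockManager can perform the
removal and logging on its side, under its own locking. A condensed sketch of the
two halves; ReplicaSketch, BlockInfoSketch and BlockManagerSketch are simplified
stand-ins, not the real HDFS classes:

  import java.util.ArrayList;
  import java.util.List;

  class ReplicaSketch {
    final long genStamp;
    ReplicaSketch(long genStamp) { this.genStamp = genStamp; }
  }

  class BlockInfoSketch {
    private final List<ReplicaSketch> replicas = new ArrayList<>();
    private long generationStamp;

    // Returns the stale replicas instead of removing them in place.
    List<ReplicaSketch> setGenerationStampAndVerifyReplicas(long genStamp) {
      this.generationStamp = genStamp;
      List<ReplicaSketch> stale = new ArrayList<>();
      for (ReplicaSketch r : replicas) {
        if (r.genStamp != genStamp) {
          stale.add(r);
        }
      }
      return stale;
    }
  }

  class BlockManagerSketch {
    // The caller decides when and how to drop the stale replicas.
    void commit(BlockInfoSketch block, long genStamp) {
      for (ReplicaSketch r : block.setGenerationStampAndVerifyReplicas(genStamp)) {
        removeStaleReplica(r);
      }
    }
    private void removeStaleReplica(ReplicaSketch r) { /* remove from storage, log */ }
  }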
hadoop git commit: HADOOP-14430 the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0. Contributed by Hongyuan Li.

2017-05-25 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 07c14b35e -> 1efba62fd


HADOOP-14430 the accessTime of FileStatus returned by SFTPFileSystem's
getFileStatus method is always 0.
Contributed by Hongyuan Li.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1efba62f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1efba62f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1efba62f

Branch: refs/heads/branch-2
Commit: 1efba62fd73a24fd7da98c59d7acd4a9ab7a221a
Parents: 07c14b3
Author: Steve Loughran 
Authored: Thu May 25 15:17:43 2017 +0100
Committer: Steve Loughran 
Committed: Thu May 25 15:17:43 2017 +0100

--
 .../org/apache/hadoop/fs/sftp/SFTPFileSystem.java |  2 +-
 .../org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java | 14 ++
 2 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1efba62f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
index 8b6267a..e4e7bbf 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
@@ -279,7 +279,7 @@ public class SFTPFileSystem extends FileSystem {
 // block sizes on server. The assumption could be less than ideal.
 long blockSize = DEFAULT_BLOCK_SIZE;
 long modTime = attr.getMTime() * 1000; // convert to milliseconds
-long accessTime = 0;
+long accessTime = attr.getATime() * 1000L;
 FsPermission permission = getPermissions(sftpFile);
 // not be able to get the real user group name, just use the user and group
 // id

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1efba62f/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
index 36aacee..ad54dc0 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.fs.sftp;
 
 import java.io.IOException;
 import java.net.URI;
+import java.nio.file.Files;
+import java.nio.file.attribute.BasicFileAttributes;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
@@ -28,6 +30,7 @@ import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocalFileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Shell;
@@ -306,4 +309,15 @@ public class TestSFTPFileSystem {
 sftpFs.rename(file1, file2);
   }
 
+  @Test
+  public void testGetAccessTime() throws IOException {
+Path file = touch(localFs, name.getMethodName().toLowerCase());
+LocalFileSystem local = (LocalFileSystem)localFs;
+java.nio.file.Path path = (local).pathToFile(file).toPath();
+long accessTime1 = Files.readAttributes(path, BasicFileAttributes.class)
+.lastAccessTime().toMillis();
+long accessTime2 = sftpFs.getFileStatus(file).getAccessTime();
+assertEquals(accessTime1, accessTime2);
+  }
+
 }

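One detail worth noting in the fix above: the SFTP attributes report access time as
an int count of epoch seconds, so the conversion multiplies by 1000L to force the
arithmetic into long before the product can overflow int range. A tiny illustration
of why the L suffix matters (the value is illustrative):

  final class AtimeConversion {
    public static void main(String[] args) {
      int atimeSeconds = 1_495_720_000;   // ~2017-05-25 as epoch seconds, fits an int
      long wrong = atimeSeconds * 1000;   // int multiplication overflows first
      long right = atimeSeconds * 1000L;  // widened to long before the multiply
      System.out.println(wrong + " vs " + right);
    }
  }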




hadoop git commit: HADOOP-14430 the accessTime of FileStatus returned by SFTPFileSystem's getFileStatus method is always 0. Contributed by Hongyuan Li.

2017-05-25 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1ba9704ee -> 8bf0e2d6b


HADOOP-14430 the accessTime of FileStatus returned by SFTPFileSystem's
getFileStatus method is always 0.
Contributed by Hongyuan Li.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8bf0e2d6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8bf0e2d6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8bf0e2d6

Branch: refs/heads/trunk
Commit: 8bf0e2d6b38a2cbd3c3d45557ede7575c1f18312
Parents: 1ba9704
Author: Steve Loughran 
Authored: Thu May 25 15:19:58 2017 +0100
Committer: Steve Loughran 
Committed: Thu May 25 15:19:58 2017 +0100

--
 .../org/apache/hadoop/fs/sftp/SFTPFileSystem.java |  2 +-
 .../org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java | 14 ++
 2 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bf0e2d6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
index 30cf4d3..d91d391 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
@@ -278,7 +278,7 @@ public class SFTPFileSystem extends FileSystem {
 // block sizes on server. The assumption could be less than ideal.
 long blockSize = DEFAULT_BLOCK_SIZE;
 long modTime = attr.getMTime() * 1000; // convert to milliseconds
-long accessTime = 0;
+long accessTime = attr.getATime() * 1000L;
 FsPermission permission = getPermissions(sftpFile);
 // not be able to get the real user group name, just use the user and group
 // id

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bf0e2d6/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
index 8dc5324..9b514e1 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.fs.sftp;
 
 import java.io.IOException;
 import java.net.URI;
+import java.nio.file.Files;
+import java.nio.file.attribute.BasicFileAttributes;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
@@ -28,6 +30,7 @@ import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocalFileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.test.GenericTestUtils;
 
@@ -305,4 +308,15 @@ public class TestSFTPFileSystem {
 sftpFs.rename(file1, file2);
   }
 
+  @Test
+  public void testGetAccessTime() throws IOException {
+Path file = touch(localFs, name.getMethodName().toLowerCase());
+LocalFileSystem local = (LocalFileSystem)localFs;
+java.nio.file.Path path = (local).pathToFile(file).toPath();
+long accessTime1 = Files.readAttributes(path, BasicFileAttributes.class)
+.lastAccessTime().toMillis();
+long accessTime2 = sftpFs.getFileStatus(file).getAccessTime();
+assertEquals(accessTime1, accessTime2);
+  }
+
 }





hadoop git commit: HADOOP-14399. Configuration does not correctly XInclude absolute file URIs. Contributed by Jonathan Eagles

2017-05-25 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1a6c53230 -> 07c14b35e


HADOOP-14399. Configuration does not correctly XInclude absolute file URIs.
Contributed by Jonathan Eagles

(cherry picked from commit 1ba9704eec22c75f8aec653ee15eb6767b5a7f4b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07c14b35
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07c14b35
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07c14b35

Branch: refs/heads/branch-2
Commit: 07c14b35ecd168eba05fa2fab925d5b8898504f8
Parents: 1a6c532
Author: Steve Loughran 
Authored: Thu May 25 15:03:01 2017 +0100
Committer: Steve Loughran 
Committed: Thu May 25 15:03:01 2017 +0100

--
 .../org/apache/hadoop/conf/Configuration.java   | 37 
 .../apache/hadoop/conf/TestConfiguration.java   | 23 +---
 2 files changed, 42 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07c14b35/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 55bcbdf..9551e61 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2638,6 +2638,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
   StringBuilder token = new StringBuilder();
   String confName = null;
   String confValue = null;
+  String confInclude = null;
   boolean confFinal = false;
   boolean fallbackAllowed = false;
   boolean fallbackEntered = false;
@@ -2681,7 +2682,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 break;
   case "include":
 // Determine href for xi:include
-String confInclude = null;
+confInclude = null;
 attrCount = reader.getAttributeCount();
 for (int i = 0; i < attrCount; i++) {
   String attrName = reader.getAttributeLocalName(i);
@@ -2700,18 +2701,25 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
   Resource classpathResource = new Resource(include, name);
   loadResource(properties, classpathResource, quiet);
 } else {
-  File href = new File(confInclude);
-  if (!href.isAbsolute()) {
-// Included resources are relative to the current resource
-File baseFile = new File(name).getParentFile();
-href = new File(baseFile, href.getPath());
+  URL url;
+  try {
+url = new URL(confInclude);
+url.openConnection().connect();
+  } catch (IOException ioe) {
+File href = new File(confInclude);
+if (!href.isAbsolute()) {
+  // Included resources are relative to the current resource
+  File baseFile = new File(name).getParentFile();
+  href = new File(baseFile, href.getPath());
+}
+if (!href.exists()) {
+  // Resource errors are non-fatal iff there is 1 xi:fallback
+  fallbackAllowed = true;
+  break;
+}
+url = href.toURI().toURL();
   }
-  if (!href.exists()) {
-// Resource errors are non-fatal iff there is 1 xi:fallback
-fallbackAllowed = true;
-break;
-  }
-  Resource uriResource = new Resource(href.toURI().toURL(), name);
+  Resource uriResource = new Resource(url, name);
   loadResource(properties, uriResource, quiet);
 }
 break;
@@ -2752,8 +2760,9 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 break;
   case "include":
 if (fallbackAllowed && !fallbackEntered) {
-  throw new IOException("Fetch fail on include with no "
-  + "fallback while loading '" + name + "'");
+  throw new IOException("Fetch fail on include for '"
+  + confInclude + "' with no fallback while loading '"
+  + name + "'");
 }
 fallbackAllowed = false;
 fallbackEntered = false;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/07c14b35/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
-

hadoop git commit: HADOOP-14399. Configuration does not correctly XInclude absolute file URIs. Contributed by Jonathan Eagles

2017-05-25 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1a56a3db5 -> 1ba9704ee


HADOOP-14399. Configuration does not correctly XInclude absolute file URIs.
Contributed by Jonathan Eagles


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1ba9704e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1ba9704e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1ba9704e

Branch: refs/heads/trunk
Commit: 1ba9704eec22c75f8aec653ee15eb6767b5a7f4b
Parents: 1a56a3d
Author: Steve Loughran 
Authored: Thu May 25 14:59:33 2017 +0100
Committer: Steve Loughran 
Committed: Thu May 25 14:59:33 2017 +0100

--
 .../org/apache/hadoop/conf/Configuration.java   | 37 
 .../apache/hadoop/conf/TestConfiguration.java   | 23 +---
 2 files changed, 42 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ba9704e/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 2ac52cb..1a6679b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2714,6 +2714,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
   StringBuilder token = new StringBuilder();
   String confName = null;
   String confValue = null;
+  String confInclude = null;
   boolean confFinal = false;
   boolean fallbackAllowed = false;
   boolean fallbackEntered = false;
@@ -2757,7 +2758,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 break;
   case "include":
 // Determine href for xi:include
-String confInclude = null;
+confInclude = null;
 attrCount = reader.getAttributeCount();
 for (int i = 0; i < attrCount; i++) {
   String attrName = reader.getAttributeLocalName(i);
@@ -2776,18 +2777,25 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
   Resource classpathResource = new Resource(include, name);
   loadResource(properties, classpathResource, quiet);
 } else {
-  File href = new File(confInclude);
-  if (!href.isAbsolute()) {
-// Included resources are relative to the current resource
-File baseFile = new File(name).getParentFile();
-href = new File(baseFile, href.getPath());
+  URL url;
+  try {
+url = new URL(confInclude);
+url.openConnection().connect();
+  } catch (IOException ioe) {
+File href = new File(confInclude);
+if (!href.isAbsolute()) {
+  // Included resources are relative to the current resource
+  File baseFile = new File(name).getParentFile();
+  href = new File(baseFile, href.getPath());
+}
+if (!href.exists()) {
+  // Resource errors are non-fatal iff there is 1 xi:fallback
+  fallbackAllowed = true;
+  break;
+}
+url = href.toURI().toURL();
   }
-  if (!href.exists()) {
-// Resource errors are non-fatal iff there is 1 xi:fallback
-fallbackAllowed = true;
-break;
-  }
-  Resource uriResource = new Resource(href.toURI().toURL(), name);
+  Resource uriResource = new Resource(url, name);
   loadResource(properties, uriResource, quiet);
 }
 break;
@@ -2828,8 +2836,9 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
 break;
   case "include":
 if (fallbackAllowed && !fallbackEntered) {
-  throw new IOException("Fetch fail on include with no "
-  + "fallback while loading '" + name + "'");
+  throw new IOException("Fetch fail on include for '"
+  + confInclude + "' with no fallback while loading '"
+  + name + "'");
 }
 fallbackAllowed = false;
 fallbackEntered = false;

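End to end, the fix means a file: URI include inside a configuration resource now resolves even when it is absolute. A small usage sketch; the paths and the property name are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class XIncludeDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // /etc/hadoop/site.xml is assumed to contain
    //   <xi:include href="file:///etc/hadoop/extra.xml"/>
    // which the patched parser now fetches as a URL first.
    conf.addResource(new Path("/etc/hadoop/site.xml"));
    System.out.println(conf.get("some.included.key"));
  }
}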
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ba9704e/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/

hadoop git commit: HADOOP-13760: S3Guard: add new classes

2017-05-25 Thread mackrorysd
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-13345 8e257a406 -> 2d0684292


HADOOP-13760: S3Guard: add new classes


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d068429
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d068429
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d068429

Branch: refs/heads/HADOOP-13345
Commit: 2d0684292bb8cd77509e71830ee047e788057a05
Parents: 8e257a4
Author: Sean Mackrory 
Authored: Thu May 25 07:11:20 2017 -0600
Committer: Sean Mackrory 
Committed: Thu May 25 07:11:20 2017 -0600

--
 .../s3guard/MetadataStoreListFilesIterator.java | 168 +++
 .../org/apache/hadoop/fs/s3a/TestListing.java   |  94 +++
 2 files changed, 262 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d068429/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreListFilesIterator.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreListFilesIterator.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreListFilesIterator.java
new file mode 100644
index 000..272b1f4
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreListFilesIterator.java
@@ -0,0 +1,168 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.Queue;
+import java.util.Set;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * {@code MetadataStoreListFilesIterator} is a {@link RemoteIterator} that
+ * is similar to {@code DescendantsIterator} but does not return directories
+ * that have (or may have) children, and will also provide access to the set of
+ * tombstones to allow recently deleted S3 objects to be filtered out from a
+ * corresponding request.  In other words, it returns tombstones and the same
+ * set of objects that should exist in S3: empty directories, and files, and
+ * not other directories whose existence is inferred therefrom.
+ *
+ * For example, assume the consistent store contains metadata representing this
+ * file system structure:
+ *
+ * <pre>
+ * /dir1
+ * |-- dir2
+ * |   |-- file1
+ * |   `-- file2
+ * `-- dir3
+ * |-- dir4
+ * |   `-- file3
+ * |-- dir5
+ * |   `-- file4
+ * `-- dir6
+ * </pre>
+ *
+ * Consider this code sample:
+ * <pre>
+ * final PathMetadata dir1 = get(new Path("/dir1"));
+ * for (MetadataStoreListFilesIterator files =
+ * new MetadataStoreListFilesIterator(dir1); files.hasNext(); ) {
+ *   final FileStatus status = files.next().getFileStatus();
+ *   System.out.printf("%s %s%n", status.isDirectory() ? 'D' : 'F',
+ *   status.getPath());
+ * }
+ * </pre>
+ *
+ * The output is:
+ * <pre>
+ * F /dir1/dir2/file1
+ * F /dir1/dir2/file2
+ * F /dir1/dir3/dir4/file3
+ * F /dir1/dir3/dir5/file4
+ * D /dir1/dir3/dir6
+ * </pre>
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class MetadataStoreListFilesIterator implements
+RemoteIterator<FileStatus> {
+  public static final Logger LOG = LoggerFactory.getLogger(
+  MetadataStoreListFilesIterator.class);
+
+  private final boolean allowAuthoritative;
+  private final MetadataStore metadataStore;
+  private final Set<Path> tombstones = new HashSet<>();
+  private Iterator<FileStatus> leafNodesIterator = null;
+
+  public MetadataStoreListFilesIterator(MetadataStore ms, PathMetadata meta,
+  boolean allowAuthoritative) throws IOException {
+ 

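The message is cut off before the constructor body, but the javadoc pins down the contract: emit files, empty directories and tombstones, never directories that have (or may have) children. One plausible prefetch over the MetadataStore API that satisfies that contract, using only calls visible in this patch series; tombstone bookkeeping is omitted and the committed code may differ:

private Iterator<FileStatus> prefetchSketch(MetadataStore ms, PathMetadata root)
    throws IOException {
  // Breadth-first walk; files and childless directories become leaves.
  Queue<PathMetadata> queue = new LinkedList<>();
  queue.add(root);
  Collection<FileStatus> leaves = new ArrayList<>();
  while (!queue.isEmpty()) {
    FileStatus status = queue.poll().getFileStatus();
    if (!status.isDirectory()) {
      leaves.add(status);                    // plain file: always a leaf
    } else {
      DirListingMetadata children = ms.listChildren(status.getPath());
      if (children == null || children.getListing().isEmpty()) {
        leaves.add(status);                  // empty directory is itself a leaf
      } else {
        queue.addAll(children.getListing()); // descend; do not emit the dir
      }
    }
  }
  return leaves.iterator();
}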
[2/2] hadoop git commit: HADOOP-13760. S3Guard: add delete tracking

2017-05-25 Thread mackrorysd
HADOOP-13760. S3Guard: add delete tracking


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8e257a40
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8e257a40
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8e257a40

Branch: refs/heads/HADOOP-13345
Commit: 8e257a406201dcb3dec345884d1bfd613c0d9953
Parents: 80613da
Author: Sean Mackrory 
Authored: Thu May 25 06:12:15 2017 -0600
Committer: Sean Mackrory 
Committed: Thu May 25 06:16:20 2017 -0600

--
 .../fs/s3a/InconsistentAmazonS3Client.java  | 125 +++-
 .../java/org/apache/hadoop/fs/s3a/Listing.java  |  91 ++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 310 +--
 .../fs/s3a/s3guard/DescendantsIterator.java |  14 +-
 .../fs/s3a/s3guard/DirListingMetadata.java  |  32 ++
 .../fs/s3a/s3guard/DynamoDBMetadataStore.java   |  86 +++--
 .../fs/s3a/s3guard/LocalMetadataStore.java  |  61 ++--
 .../hadoop/fs/s3a/s3guard/MetadataStore.java|  19 +-
 .../fs/s3a/s3guard/NullMetadataStore.java   |   5 +
 .../hadoop/fs/s3a/s3guard/PathMetadata.java |  27 +-
 .../PathMetadataDynamoDBTranslation.java|   7 +-
 .../apache/hadoop/fs/s3a/s3guard/S3Guard.java   |  26 +-
 .../hadoop/fs/s3a/s3guard/S3GuardTool.java  |   5 +-
 .../hadoop/fs/s3a/ITestS3GuardEmptyDirs.java|  62 ++--
 .../fs/s3a/ITestS3GuardListConsistency.java | 213 -
 .../fs/s3a/s3guard/MetadataStoreTestBase.java   | 150 ++---
 .../s3a/s3guard/TestDynamoDBMetadataStore.java  |   4 +-
 .../fs/s3a/s3guard/TestLocalMetadataStore.java  |  38 ++-
 .../AbstractITestS3AMetadataStoreScale.java |   2 +-
 19 files changed, 1001 insertions(+), 276 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e257a40/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
index ebca268..98ea16a 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
@@ -23,6 +23,7 @@ import com.amazonaws.AmazonServiceException;
 import com.amazonaws.ClientConfiguration;
 import com.amazonaws.auth.AWSCredentialsProvider;
 import com.amazonaws.services.s3.AmazonS3Client;
+import com.amazonaws.services.s3.model.DeleteObjectRequest;
 import com.amazonaws.services.s3.model.ListObjectsRequest;
 import com.amazonaws.services.s3.model.ObjectListing;
 import com.amazonaws.services.s3.model.PutObjectRequest;
@@ -33,6 +34,7 @@ import org.slf4j.LoggerFactory;
 
 import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 
@@ -57,14 +59,50 @@ public class InconsistentAmazonS3Client extends AmazonS3Client {
   private static final Logger LOG =
   LoggerFactory.getLogger(InconsistentAmazonS3Client.class);
 
+  /**
+   * Composite of data we need to track about recently deleted objects:
+   * when it was deleted (same as with recently put objects) and the object
+   * summary (since we should keep returning it for some time after its
+   * deletion).
+   */
+  private static class Delete {
+private Long time;
+private S3ObjectSummary summary;
+
+Delete(Long time, S3ObjectSummary summary) {
+  this.time = time;
+  this.summary = summary;
+}
+
+public Long time() {
+  return time;
+}
+
+public S3ObjectSummary summary() {
+  return summary;
+}
+  }
+
+  /** Map of key to delay -> time it was deleted + object summary (object
+   * summary is null for prefixes). */
+  private Map<String, Delete> delayedDeletes = new HashMap<>();
+
   /** Map of key to delay -> time it was created. */
-  private Map<String, Long> delayedKeys = new HashMap<>();
+  private Map<String, Long> delayedPutKeys = new HashMap<>();
 
   public InconsistentAmazonS3Client(AWSCredentialsProvider credentials,
   ClientConfiguration clientConfiguration) {
 super(credentials, clientConfiguration);
   }
 
+  @Override
+  public void deleteObject(DeleteObjectRequest deleteObjectRequest)
+  throws AmazonClientException, AmazonServiceException {
+LOG.debug("key {}", deleteObjectRequest.getKey());
+registerDeleteObject(deleteObjectRequest);
+super.deleteObject(deleteObjectRequest);
+  }
+
   /* We should only need to override this version of putObject() */
   @Override
   public PutObjectResult putObject(PutObjectRequest putObjectRequest)
@@ -80,10 +118,49 @@ public class InconsistentAmazonS3Client extends 
AmazonS3

[1/2] hadoop git commit: HADOOP-13760. S3Guard: add delete tracking [Forced Update!]

2017-05-25 Thread mackrorysd
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-13345 c17759607 -> 8e257a406 (forced update)


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e257a40/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
index 99acf6e..dfa8a9e 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.fs.s3a.s3guard;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.s3a.S3ATestUtils;
 import org.apache.hadoop.fs.s3a.Tristate;
@@ -134,12 +135,52 @@ public abstract class MetadataStoreTestBase extends Assert {
   }
 
   /**
+   * Helper function for verifying DescendantsIterator and
+   * MetadataStoreListFilesIterator behavior.
+   * @param createNodes List of paths to create
+   * @param checkNodes List of paths that the iterator should return
+   * @throws IOException
+   */
+  private void doTestDescendantsIterator(
+  Class implementation, String[] createNodes,
+  String[] checkNodes) throws Exception {
+// we set up the example file system tree in metadata store
+for (String pathStr : createNodes) {
+  final FileStatus status = pathStr.contains("file")
+  ? basicFileStatus(strToPath(pathStr), 100, false)
+  : basicFileStatus(strToPath(pathStr), 0, true);
+  ms.put(new PathMetadata(status));
+}
+
+final PathMetadata rootMeta = new PathMetadata(makeDirStatus("/"));
+RemoteIterator<FileStatus> iterator;
+if (implementation == DescendantsIterator.class) {
+  iterator = new DescendantsIterator(ms, rootMeta);
+} else if (implementation == MetadataStoreListFilesIterator.class) {
+  iterator = new MetadataStoreListFilesIterator(ms, rootMeta, false);
+} else {
+  throw new UnsupportedOperationException("Unrecognized class");
+}
+
+final Set<String> actual = new HashSet<>();
+while (iterator.hasNext()) {
+  final Path p = iterator.next().getPath();
+  actual.add(Path.getPathWithoutSchemeAndAuthority(p).toString());
+}
+LOG.info("We got {} by iterating DescendantsIterator", actual);
+
+if (!allowMissing()) {
+  assertEquals(Sets.newHashSet(checkNodes), actual);
+}
+  }
+
+  /**
* Test that we can get the whole sub-tree by iterating DescendantsIterator.
*
* The tree is similar to or same as the example in code comment.
*/
   @Test
-  public void testDescendantsIterator() throws IOException {
+  public void testDescendantsIterator() throws Exception {
 final String[] tree = new String[] {
 "/dir1",
 "/dir1/dir2",
@@ -152,26 +193,38 @@ public abstract class MetadataStoreTestBase extends Assert {
 "/dir1/dir3/dir5/file4",
 "/dir1/dir3/dir6"
 };
-// we set up the example file system tree in metadata store
-for (String pathStr : tree) {
-  final FileStatus status = pathStr.contains("file")
-  ? basicFileStatus(strToPath(pathStr), 100, false)
-  : basicFileStatus(strToPath(pathStr), 0, true);
-  ms.put(new PathMetadata(status));
-}
-
-final Set<String> actual = new HashSet<>();
-final PathMetadata rootMeta = new PathMetadata(makeDirStatus("/"));
-for (DescendantsIterator desc = new DescendantsIterator(ms, rootMeta);
- desc.hasNext();) {
-  final Path p = desc.next().getPath();
-  actual.add(Path.getPathWithoutSchemeAndAuthority(p).toString());
-}
-LOG.info("We got {} by iterating DescendantsIterator", actual);
+doTestDescendantsIterator(DescendantsIterator.class,
+tree, tree);
+  }
 
-if (!allowMissing()) {
-  assertEquals(Sets.newHashSet(tree), actual);
-}
+  /**
+   * Test that we can get the correct subset of the tree with
+   * MetadataStoreListFilesIterator.
+   *
+   * The tree is similar to or same as the example in code comment.
+   */
+  @Test
+  public void testMetadataStoreListFilesIterator() throws Exception {
+final String[] wholeTree = new String[] {
+"/dir1",
+"/dir1/dir2",
+"/dir1/dir3",
+"/dir1/dir2/file1",
+"/dir1/dir2/file2",
+"/dir1/dir3/dir4",
+"/dir1/dir3/dir5",
+"/dir1/dir3/dir4/file3",
+"/dir1/dir3/dir5/file4",
+"/dir1/dir3/dir6"
+};
+final String[] leafNodes = new String[] {
+"/dir1/dir2/file1",
+"/dir1/dir2/file2",
+"/dir1/dir3/di

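The leafNodes array is truncated above, but the MetadataStoreListFilesIterator javadoc earlier in this series fixes its expected contents for this tree: the four files plus the one empty directory. Under that reading the test presumably finishes along these lines:

final String[] leafNodes = new String[] {
    "/dir1/dir2/file1",
    "/dir1/dir2/file2",
    "/dir1/dir3/dir4/file3",
    "/dir1/dir3/dir5/file4",
    "/dir1/dir3/dir6"   // empty directory: a leaf in its own right
};
doTestDescendantsIterator(MetadataStoreListFilesIterator.class,
    wholeTree, leafNodes);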
[2/6] hadoop git commit: Reproduced erroneous authoritative listing

2017-05-25 Thread mackrorysd
Reproduced erroneous authoritative listing


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/83b82348
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/83b82348
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/83b82348

Branch: refs/heads/HADOOP-13345
Commit: 83b82348ff8a4bd347b5c868ebdd87df3961a5a0
Parents: 2f3305d
Author: Sean Mackrory 
Authored: Wed May 24 09:19:01 2017 -0600
Committer: Sean Mackrory 
Committed: Wed May 24 17:19:39 2017 -0600

--
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java |  2 +-
 .../fs/contract/s3a/ITestS3AContractRename.java | 47 
 .../fs/s3a/s3guard/TestLocalMetadataStore.java  |  2 +-
 3 files changed, 49 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/83b82348/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index 7de94cc..ffb2147 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -960,7 +960,7 @@ public class S3AFileSystem extends FileSystem {
   }
 
   @VisibleForTesting
-  MetadataStore getMetadataStore() {
+  public MetadataStore getMetadataStore() {
 return metadataStore;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/83b82348/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
index 4339649..65da13a 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
@@ -23,6 +23,14 @@ import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.Tristate;
+import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
+import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
+import org.apache.hadoop.fs.s3a.s3guard.S3Guard;
+import org.junit.Test;
+
+import java.io.IOException;
 
 import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
@@ -72,4 +80,43 @@ public class ITestS3AContractRename extends AbstractContractRenameTest {
 boolean rename = fs.rename(srcDir, destDir);
 assertFalse("s3a doesn't support rename to non-empty directory", rename);
   }
+
+  @Override
+  @Test
+  public void testRenamePopulatesFileAncestors() throws IOException {
+final FileSystem fs = getFileSystem();
+final Path src = path("testRenamePopulatesFileAncestors/source");
+fs.mkdirs(src);
+final String nestedFile = "/dir1/dir2/dir3/file4";
+byte[] srcDataset = dataset(256, 'a', 'z');
+writeDataset(fs, path(src + nestedFile), srcDataset, srcDataset.length,
+1024, false);
+
+Path dst = path("testRenamePopulatesFileAncestorsNew");
+
+DirListingMetadata l = ((S3AFileSystem)fs).getMetadataStore()
+.listChildren(src);
+fs.rename(src, dst);
+  }
+
+  @Test
+  public void testMkdirPopulatesFileAncestors() throws IOException {
+final FileSystem fs = getFileSystem();
+final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
+final Path parent = path("testMkdirPopulatesFileAncestors/source");
+try {
+  //fs.mkdirs(parent);
+  final Path nestedFile = new Path(parent, "/dir1/dir2/dir3/file4");
+  byte[] srcDataset = dataset(256, 'a', 'z');
+  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
+  1024, false);
+
+  DirListingMetadata list = ms.listChildren(parent);
+  System.out.println("list isEmpty: " + list.isEmpty());
+  System.out.println("list isAuthoritative: " + list.isAuthoritative());
+  assertTrue(list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
+} finally {
+  fs.delete(parent, true);
+}
+  }
 }
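The closing assertTrue is the whole point of the reproduction: after writing a file several levels below parent (with the mkdirs call deliberately commented out), a correct metadata store must either report the listing as non-empty (Tristate.FALSE) or decline to call it authoritative. An authoritative empty listing would silently hide the nested file, which is exactly the erroneous behaviour this commit reproduces.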

http://git-wip-us.apache.org/repos/asf/hadoop/blob/83b82348/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestLocalMetadataStore.jav

[5/6] hadoop git commit: Revert "HADOOP-13760. S3Guard: add delete tracking."

2017-05-25 Thread mackrorysd
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1775960/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestListing.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestListing.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestListing.java
deleted file mode 100644
index 43eb2c0..000
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestListing.java
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.s3a;
-
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.LocatedFileStatus;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.RemoteIterator;
-import org.junit.Assert;
-import org.junit.Test;
-
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.Set;
-
-/**
- * Place for the S3A listing classes; keeps all the small classes under
- * control.
- */
-public class TestListing extends AbstractS3AMockTest {
-
-  private static class MockRemoteIterator implements
-  RemoteIterator<FileStatus> {
-private Iterator<FileStatus> iterator;
-
-MockRemoteIterator(Collection<FileStatus> source) {
-  iterator = source.iterator();
-}
-
-public boolean hasNext() {
-  return iterator.hasNext();
-}
-
-public FileStatus next() {
-  return iterator.next();
-}
-  }
-
-  private FileStatus blankFileStatus(Path path) {
-return new FileStatus(0, true, 0, 0, 0, path);
-  }
-
-  @Test
-  public void testTombstoneReconcilingIterator() throws Exception {
-Path parent = new Path("/parent");
-Path liveChild = new Path(parent, "/liveChild");
-Path deletedChild = new Path(parent, "/deletedChild");
-Path[] allFiles = {parent, liveChild, deletedChild};
-Path[] liveFiles = {parent, liveChild};
-
-Listing listing = new Listing(fs);
-Collection<FileStatus> statuses = new ArrayList<>();
-statuses.add(blankFileStatus(parent));
-statuses.add(blankFileStatus(liveChild));
-statuses.add(blankFileStatus(deletedChild));
-
-Set<Path> tombstones = new HashSet<>();
-tombstones.add(deletedChild);
-
-RemoteIterator<FileStatus> sourceIterator = new MockRemoteIterator(
-statuses);
-RemoteIterator<LocatedFileStatus> locatedIterator =
-listing.createLocatedFileStatusIterator(sourceIterator);
-RemoteIterator<LocatedFileStatus> reconcilingIterator =
-listing.createTombstoneReconcilingIterator(locatedIterator, tombstones);
-
-Set<Path> expectedPaths = new HashSet<>();
-expectedPaths.add(parent);
-expectedPaths.add(liveChild);
-
-Set<Path> actualPaths = new HashSet<>();
-while (reconcilingIterator.hasNext()) {
-  actualPaths.add(reconcilingIterator.next().getPath());
-}
-Assert.assertTrue(actualPaths.equals(expectedPaths));
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1775960/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
index dfa8a9e..99acf6e 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
@@ -21,7 +21,6 @@ package org.apache.hadoop.fs.s3a.s3guard;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.s3a.S3ATestUtils;
 import org.apache.hadoop.fs.s3a.Tristate;
@@ -135,52 +134,12 @@ public abstract class MetadataStoreTestBase extends Assert {
   }
 
   /**
-   * Helper function for verifying DescendantsIterator and
-   * MetadataStoreListFilesIterator behavior.
-   * @param createNodes List of paths to create
-   * @para

[3/6] hadoop git commit: HADOOP-13760. S3Guard: add delete tracking.

2017-05-25 Thread mackrorysd
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f3305db/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestListing.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestListing.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestListing.java
new file mode 100644
index 000..43eb2c0
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestListing.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Set;
+
+/**
+ * Place for the S3A listing classes; keeps all the small classes under
+ * control.
+ */
+public class TestListing extends AbstractS3AMockTest {
+
+  private static class MockRemoteIterator implements
+  RemoteIterator<FileStatus> {
+private Iterator<FileStatus> iterator;
+
+MockRemoteIterator(Collection<FileStatus> source) {
+  iterator = source.iterator();
+}
+
+public boolean hasNext() {
+  return iterator.hasNext();
+}
+
+public FileStatus next() {
+  return iterator.next();
+}
+  }
+
+  private FileStatus blankFileStatus(Path path) {
+return new FileStatus(0, true, 0, 0, 0, path);
+  }
+
+  @Test
+  public void testTombstoneReconcilingIterator() throws Exception {
+Path parent = new Path("/parent");
+Path liveChild = new Path(parent, "/liveChild");
+Path deletedChild = new Path(parent, "/deletedChild");
+Path[] allFiles = {parent, liveChild, deletedChild};
+Path[] liveFiles = {parent, liveChild};
+
+Listing listing = new Listing(fs);
+Collection<FileStatus> statuses = new ArrayList<>();
+statuses.add(blankFileStatus(parent));
+statuses.add(blankFileStatus(liveChild));
+statuses.add(blankFileStatus(deletedChild));
+
+Set<Path> tombstones = new HashSet<>();
+tombstones.add(deletedChild);
+
+RemoteIterator<FileStatus> sourceIterator = new MockRemoteIterator(
+statuses);
+RemoteIterator<LocatedFileStatus> locatedIterator =
+listing.createLocatedFileStatusIterator(sourceIterator);
+RemoteIterator<LocatedFileStatus> reconcilingIterator =
+listing.createTombstoneReconcilingIterator(locatedIterator, tombstones);
+
+Set<Path> expectedPaths = new HashSet<>();
+expectedPaths.add(parent);
+expectedPaths.add(liveChild);
+
+Set<Path> actualPaths = new HashSet<>();
+while (reconcilingIterator.hasNext()) {
+  actualPaths.add(reconcilingIterator.next().getPath());
+}
+Assert.assertTrue(actualPaths.equals(expectedPaths));
+  }
+}
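The test fixes the expected semantics; an iterator that satisfies it only has to drop any status whose path carries a tombstone. A standalone sketch under that reading (assuming the obvious java.util and org.apache.hadoop.fs imports), not the committed Listing implementation:

static List<LocatedFileStatus> reconcile(RemoteIterator<LocatedFileStatus> source,
    Set<Path> tombstones) throws IOException {
  List<LocatedFileStatus> live = new ArrayList<>();
  while (source.hasNext()) {
    LocatedFileStatus status = source.next();
    if (!tombstones.contains(status.getPath())) {
      live.add(status);             // no tombstone covers it: still visible
    }
  }
  return live;
}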

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f3305db/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
index 99acf6e..dfa8a9e 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.fs.s3a.s3guard;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.fs.s3a.S3ATestUtils;
 import org.apache.hadoop.fs.s3a.Tristate;
@@ -134,12 +135,52 @@ public abstract class MetadataStoreTestBase extends Assert {
   }
 
   /**
+   * Helper function for verifying DescendantsIterator and
+   * MetadataStoreListFilesIterator behavior.
+   * @param createNodes List of paths to create
+   * @param ch

[6/6] hadoop git commit: Revert "HADOOP-13760. S3Guard: add delete tracking."

2017-05-25 Thread mackrorysd
Revert "HADOOP-13760. S3Guard: add delete tracking."

This reverts commit 2f3305db768e4159b9216de88adbd73b686d9a02.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c1775960
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c1775960
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c1775960

Branch: refs/heads/HADOOP-13345
Commit: c17759607e7ce770aa4635a4896ac1233b003f33
Parents: 447d7b8
Author: Sean Mackrory 
Authored: Wed May 24 20:06:28 2017 -0600
Committer: Sean Mackrory 
Committed: Wed May 24 20:06:28 2017 -0600

--
 .../fs/s3a/InconsistentAmazonS3Client.java  | 122 +---
 .../java/org/apache/hadoop/fs/s3a/Listing.java  |  91 --
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 310 ++-
 .../fs/s3a/s3guard/DescendantsIterator.java |  14 +-
 .../fs/s3a/s3guard/DirListingMetadata.java  |  32 --
 .../fs/s3a/s3guard/DynamoDBMetadataStore.java   |  86 ++---
 .../fs/s3a/s3guard/LocalMetadataStore.java  |  61 ++--
 .../hadoop/fs/s3a/s3guard/MetadataStore.java|  19 +-
 .../s3guard/MetadataStoreListFilesIterator.java | 168 --
 .../fs/s3a/s3guard/NullMetadataStore.java   |   5 -
 .../hadoop/fs/s3a/s3guard/PathMetadata.java |  27 +-
 .../PathMetadataDynamoDBTranslation.java|   7 +-
 .../apache/hadoop/fs/s3a/s3guard/S3Guard.java   |  26 +-
 .../hadoop/fs/s3a/s3guard/S3GuardTool.java  |   5 +-
 .../hadoop/fs/s3a/ITestS3GuardEmptyDirs.java|  62 ++--
 .../fs/s3a/ITestS3GuardListConsistency.java | 213 +
 .../org/apache/hadoop/fs/s3a/TestListing.java   |  94 --
 .../fs/s3a/s3guard/MetadataStoreTestBase.java   | 150 +++--
 .../s3a/s3guard/TestDynamoDBMetadataStore.java  |   4 +-
 .../fs/s3a/s3guard/TestLocalMetadataStore.java  |  38 +--
 .../AbstractITestS3AMetadataStoreScale.java |   2 +-
 21 files changed, 276 insertions(+), 1260 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1775960/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
index aadcc37..ebca268 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
@@ -23,7 +23,6 @@ import com.amazonaws.AmazonServiceException;
 import com.amazonaws.ClientConfiguration;
 import com.amazonaws.auth.AWSCredentialsProvider;
 import com.amazonaws.services.s3.AmazonS3Client;
-import com.amazonaws.services.s3.model.DeleteObjectRequest;
 import com.amazonaws.services.s3.model.ListObjectsRequest;
 import com.amazonaws.services.s3.model.ObjectListing;
 import com.amazonaws.services.s3.model.PutObjectRequest;
@@ -34,7 +33,6 @@ import org.slf4j.LoggerFactory;
 
 import java.util.ArrayList;
 import java.util.HashMap;
-import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 
@@ -59,50 +57,14 @@ public class InconsistentAmazonS3Client extends AmazonS3Client {
   private static final Logger LOG =
   LoggerFactory.getLogger(InconsistentAmazonS3Client.class);
 
-  /**
-   * Composite of data we need to track about recently deleted objects:
-   * when it was deleted (same as with recently put objects) and the object
-   * summary (since we should keep returning it for some time after its
-   * deletion).
-   */
-  private static class Delete {
-private Long time;
-private S3ObjectSummary summary;
-
-Delete(Long time, S3ObjectSummary summary) {
-  this.time = time;
-  this.summary = summary;
-}
-
-public Long time() {
-  return time;
-}
-
-public S3ObjectSummary summary() {
-  return summary;
-}
-  }
-
-  /** Map of key to delay -> time it was deleted + object summary (object
-   * summary is null for prefixes). */
-  private Map<String, Delete> delayedDeletes = new HashMap<>();
-
   /** Map of key to delay -> time it was created. */
-  private Map<String, Long> delayedPutKeys = new HashMap<>();
+  private Map<String, Long> delayedKeys = new HashMap<>();
 
   public InconsistentAmazonS3Client(AWSCredentialsProvider credentials,
   ClientConfiguration clientConfiguration) {
 super(credentials, clientConfiguration);
   }
 
-  @Override
-  public void deleteObject(DeleteObjectRequest deleteObjectRequest)
-  throws AmazonClientException, AmazonServiceException {
-LOG.debug("key {}", deleteObjectRequest.getKey());
-registerDeleteObject(deleteObjectRequest);
-super.deleteObject(deleteObjectRequest);
-  }
-
   /* We should only need to o

[4/6] hadoop git commit: HADOOP-13760. S3Guard: add delete tracking.

2017-05-25 Thread mackrorysd
HADOOP-13760. S3Guard: add delete tracking.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2f3305db
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2f3305db
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2f3305db

Branch: refs/heads/HADOOP-13345
Commit: 2f3305db768e4159b9216de88adbd73b686d9a02
Parents: 80613da
Author: Sean Mackrory 
Authored: Fri Apr 7 17:21:58 2017 -0600
Committer: Sean Mackrory 
Committed: Wed May 24 17:19:39 2017 -0600

--
 .../fs/s3a/InconsistentAmazonS3Client.java  | 122 +++-
 .../java/org/apache/hadoop/fs/s3a/Listing.java  |  91 ++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 310 +--
 .../fs/s3a/s3guard/DescendantsIterator.java |  14 +-
 .../fs/s3a/s3guard/DirListingMetadata.java  |  32 ++
 .../fs/s3a/s3guard/DynamoDBMetadataStore.java   |  86 +++--
 .../fs/s3a/s3guard/LocalMetadataStore.java  |  61 ++--
 .../hadoop/fs/s3a/s3guard/MetadataStore.java|  19 +-
 .../s3guard/MetadataStoreListFilesIterator.java | 168 ++
 .../fs/s3a/s3guard/NullMetadataStore.java   |   5 +
 .../hadoop/fs/s3a/s3guard/PathMetadata.java |  27 +-
 .../PathMetadataDynamoDBTranslation.java|   7 +-
 .../apache/hadoop/fs/s3a/s3guard/S3Guard.java   |  26 +-
 .../hadoop/fs/s3a/s3guard/S3GuardTool.java  |   5 +-
 .../hadoop/fs/s3a/ITestS3GuardEmptyDirs.java|  62 ++--
 .../fs/s3a/ITestS3GuardListConsistency.java | 213 -
 .../org/apache/hadoop/fs/s3a/TestListing.java   |  94 ++
 .../fs/s3a/s3guard/MetadataStoreTestBase.java   | 150 ++---
 .../s3a/s3guard/TestDynamoDBMetadataStore.java  |   4 +-
 .../fs/s3a/s3guard/TestLocalMetadataStore.java  |  38 ++-
 .../AbstractITestS3AMetadataStoreScale.java |   2 +-
 21 files changed, 1260 insertions(+), 276 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f3305db/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
index ebca268..aadcc37 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
@@ -23,6 +23,7 @@ import com.amazonaws.AmazonServiceException;
 import com.amazonaws.ClientConfiguration;
 import com.amazonaws.auth.AWSCredentialsProvider;
 import com.amazonaws.services.s3.AmazonS3Client;
+import com.amazonaws.services.s3.model.DeleteObjectRequest;
 import com.amazonaws.services.s3.model.ListObjectsRequest;
 import com.amazonaws.services.s3.model.ObjectListing;
 import com.amazonaws.services.s3.model.PutObjectRequest;
@@ -33,6 +34,7 @@ import org.slf4j.LoggerFactory;
 
 import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 
@@ -57,14 +59,50 @@ public class InconsistentAmazonS3Client extends AmazonS3Client {
   private static final Logger LOG =
   LoggerFactory.getLogger(InconsistentAmazonS3Client.class);
 
+  /**
+   * Composite of data we need to track about recently deleted objects:
+   * when it was deleted (same as with recently put objects) and the object
+   * summary (since we should keep returning it for some time after its
+   * deletion).
+   */
+  private static class Delete {
+private Long time;
+private S3ObjectSummary summary;
+
+Delete(Long time, S3ObjectSummary summary) {
+  this.time = time;
+  this.summary = summary;
+}
+
+public Long time() {
+  return time;
+}
+
+public S3ObjectSummary summary() {
+  return summary;
+}
+  }
+
+  /** Map of key to delay -> time it was deleted + object summary (object
+   * summary is null for prefixes). */
+  private Map<String, Delete> delayedDeletes = new HashMap<>();
+
   /** Map of key to delay -> time it was created. */
-  private Map<String, Long> delayedKeys = new HashMap<>();
+  private Map<String, Long> delayedPutKeys = new HashMap<>();
 
   public InconsistentAmazonS3Client(AWSCredentialsProvider credentials,
   ClientConfiguration clientConfiguration) {
 super(credentials, clientConfiguration);
   }
 
+  @Override
+  public void deleteObject(DeleteObjectRequest deleteObjectRequest)
+  throws AmazonClientException, AmazonServiceException {
+LOG.debug("key {}", deleteObjectRequest.getKey());
+registerDeleteObject(deleteObjectRequest);
+super.deleteObject(deleteObjectRequest);
+  }
+
   /* We should only need to override this version of putObject() */
   @Override
   public PutObjectRe

[1/6] hadoop git commit: Revert "Reproduced erroneous authoritative listing"

2017-05-25 Thread mackrorysd
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-13345 80613da01 -> c17759607


Revert "Reproduced erroneous authoritative listing"

This reverts commit 8e6e39f42045247c21f9546103f6f243de4e75a8.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/447d7b8f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/447d7b8f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/447d7b8f

Branch: refs/heads/HADOOP-13345
Commit: 447d7b8fa6fa309ff42103721a4e2ef6f303e76a
Parents: 83b8234
Author: Sean Mackrory 
Authored: Wed May 24 14:49:26 2017 -0600
Committer: Sean Mackrory 
Committed: Wed May 24 17:19:39 2017 -0600

--
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java |  2 +-
 .../fs/contract/s3a/ITestS3AContractRename.java | 47 
 .../fs/s3a/s3guard/TestLocalMetadataStore.java  |  2 +-
 3 files changed, 2 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/447d7b8f/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index ffb2147..7de94cc 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -960,7 +960,7 @@ public class S3AFileSystem extends FileSystem {
   }
 
   @VisibleForTesting
-  public MetadataStore getMetadataStore() {
+  MetadataStore getMetadataStore() {
 return metadataStore;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/447d7b8f/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
index 65da13a..4339649 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
@@ -23,14 +23,6 @@ import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.s3a.S3AFileSystem;
-import org.apache.hadoop.fs.s3a.Tristate;
-import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
-import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
-import org.apache.hadoop.fs.s3a.s3guard.S3Guard;
-import org.junit.Test;
-
-import java.io.IOException;
 
 import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
@@ -80,43 +72,4 @@ public class ITestS3AContractRename extends AbstractContractRenameTest {
 boolean rename = fs.rename(srcDir, destDir);
 assertFalse("s3a doesn't support rename to non-empty directory", rename);
   }
-
-  @Override
-  @Test
-  public void testRenamePopulatesFileAncestors() throws IOException {
-final FileSystem fs = getFileSystem();
-final Path src = path("testRenamePopulatesFileAncestors/source");
-fs.mkdirs(src);
-final String nestedFile = "/dir1/dir2/dir3/file4";
-byte[] srcDataset = dataset(256, 'a', 'z');
-writeDataset(fs, path(src + nestedFile), srcDataset, srcDataset.length,
-1024, false);
-
-Path dst = path("testRenamePopulatesFileAncestorsNew");
-
-DirListingMetadata l = ((S3AFileSystem)fs).getMetadataStore()
-.listChildren(src);
-fs.rename(src, dst);
-  }
-
-  @Test
-  public void testMkdirPopulatesFileAncestors() throws IOException {
-final FileSystem fs = getFileSystem();
-final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
-final Path parent = path("testMkdirPopulatesFileAncestors/source");
-try {
-  //fs.mkdirs(parent);
-  final Path nestedFile = new Path(parent, "/dir1/dir2/dir3/file4");
-  byte[] srcDataset = dataset(256, 'a', 'z');
-  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
-  1024, false);
-
-  DirListingMetadata list = ms.listChildren(parent);
-  System.out.println("list isEmpty: " + list.isEmpty());
-  System.out.println("list isAuthoritative: " + list.isAuthoritative());
-  assertTrue(list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
-} finally {
-  fs.delete(parent, true);
-}
-  }
 

[3/4] hadoop git commit: Addendum patch to fix Docker sanitization.

2017-05-25 Thread vvasudev
Addendum patch to fix Docker sanitization.

(cherry picked from commit 2ff2a1f50e8c7c0f33676b010b256d6c8daf912d)
(cherry picked from commit 983c4437c2b1aaafddfc97b19371cdb992389a61)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c1f3cd76
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c1f3cd76
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c1f3cd76

Branch: refs/heads/branch-2.8
Commit: c1f3cd765f2501ea0f4f78bb5981b767e0e675bb
Parents: d50f240
Author: Varun Vasudev 
Authored: Wed May 24 16:03:28 2017 +0530
Committer: Varun Vasudev 
Committed: Thu May 25 14:54:17 2017 +0530

--
 .../impl/container-executor.c   |  6 +-
 .../test/test-container-executor.c  | 20 +---
 2 files changed, 6 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1f3cd76/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 7aa36cb..ddbf738 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -1268,13 +1268,9 @@ char* sanitize_docker_command(const char *line) {
   }
 
   if(optind < split_counter) {
-quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
-strcat(output, "'");
 while(optind < split_counter) {
-  strcat(output, linesplit[optind++]);
-  strcat(output, " ");
+  quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
 }
-strcat(output, "'");
   }
 
   return output;
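Read against the removed strcat calls, the effect is easy to state: the old code quoted only the first trailing argument and then glued the rest into one single-quoted blob, while the fixed loop runs every remaining argument through quote_and_append_arg individually. Assuming quote_and_append_arg wraps its argument in single quotes, a command tail of "ubuntu bash launch_container.sh" changes roughly like this:

before:  ... 'ubuntu' 'bash launch_container.sh '
after:   ... 'ubuntu' 'bash' 'launch_container.sh'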

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1f3cd76/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index fcc05a3..7a4dda2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -715,11 +715,6 @@ void test_run_container() {
 
 void test_sanitize_docker_command() {
 
-/*
-  char *input[] = {
-"run "
-  };
-*/
   char *input[] = {
 "run --name=cname --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 
--cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --cap-drop=ALL 
--cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP 
--cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID 
--cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE 
--cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro 
-v /yarn/local/cdir:/yarn/local/cdir -v 
/yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash 
/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
 "run --name=$CID --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 
--cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --cap-drop=ALL 
--cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP 
--cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID 
--cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE 
--cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro 
-v /yarn/local/cdir:/yarn/local/cdir -v 
/yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash 
/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
@@ -727,17 +722,12 @@ void test_sanitize_docker_command() {
 "run --name=cname --user=nobody -d --workdir=/yarn/loca

[1/4] hadoop git commit: Addendum patch to fix Docker sanitization.

2017-05-25 Thread vvasudev
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ca1c0cbc6 -> 1a6c53230
  refs/heads/branch-2.8 d50f24078 -> c1f3cd765
  refs/heads/branch-2.8.1 e9835c107 -> 3bf038446
  refs/heads/trunk bc28da65f -> 1a56a3db5


Addendum patch to fix Docker sanitization.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1a56a3db
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1a56a3db
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1a56a3db

Branch: refs/heads/trunk
Commit: 1a56a3db599659091284e3016d0309052966d018
Parents: bc28da6
Author: Varun Vasudev 
Authored: Wed May 24 16:03:28 2017 +0530
Committer: Varun Vasudev 
Committed: Thu May 25 14:53:57 2017 +0530

--
 .../impl/container-executor.c   |  6 +-
 .../test/test-container-executor.c  | 20 +---
 2 files changed, 6 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a56a3db/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 3a87646..5d138f3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -1292,13 +1292,9 @@ char* sanitize_docker_command(const char *line) {
   }
 
   if(optind < split_counter) {
-quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
-strcat(output, "'");
 while(optind < split_counter) {
-  strcat(output, linesplit[optind++]);
-  strcat(output, " ");
+  quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
 }
-strcat(output, "'");
   }
 
   return output;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a56a3db/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index ff76d4a..83d11ec 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -1087,11 +1087,6 @@ void test_trim_function() {
 
 void test_sanitize_docker_command() {
 
-/*
-  char *input[] = {
-"run "
-  };
-*/
   char *input[] = {
 "run --name=cname --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 
--cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --cap-drop=ALL 
--cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP 
--cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID 
--cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE 
--cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro 
-v /yarn/local/cdir:/yarn/local/cdir -v 
/yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash 
/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
 "run --name=$CID --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 
--cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --cap-drop=ALL 
--cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP 
--cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID 
--cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE 
--cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro 
-v /yarn/local/cdir:/yarn/local/cdir -v 
/yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash 
/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
@@ -1099,17 +1094,12 @@ void test_saniti

[4/4] hadoop git commit: Addendum patch to fix Docker sanitization.

2017-05-25 Thread vvasudev
Addendum patch to fix Docker sanitization.

(cherry picked from commit 2ff2a1f50e8c7c0f33676b010b256d6c8daf912d)
(cherry picked from commit 983c4437c2b1aaafddfc97b19371cdb992389a61)
(cherry picked from commit aef41cd8df54933314858a6ed0f334a67edfa5e8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3bf03844
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3bf03844
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3bf03844

Branch: refs/heads/branch-2.8.1
Commit: 3bf038446e19000e99bb6e213c1ac40d04fda2dc
Parents: e9835c1
Author: Varun Vasudev 
Authored: Wed May 24 16:03:28 2017 +0530
Committer: Varun Vasudev 
Committed: Thu May 25 14:54:25 2017 +0530

--
 .../impl/container-executor.c   |  6 +-
 .../test/test-container-executor.c  | 20 +---
 2 files changed, 6 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bf03844/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 7aa36cb..ddbf738 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -1268,13 +1268,9 @@ char* sanitize_docker_command(const char *line) {
   }
 
   if(optind < split_counter) {
-quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
-strcat(output, "'");
 while(optind < split_counter) {
-  strcat(output, linesplit[optind++]);
-  strcat(output, " ");
+  quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
 }
-strcat(output, "'");
   }
 
   return output;
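
To make the intent of this hunk concrete: the old code quoted only the first
trailing argument with quote_and_append_arg, then strcat'd the remaining
arguments into one shared pair of single quotes, appending to output without
growing the buffer; the fix runs every remaining argument through
quote_and_append_arg individually. Below is a minimal standalone sketch of
that per-argument quoting loop. The quote_and_append_arg here is a simplified
stand-in written for this example (the real helper in container-executor.c
manages its buffer and quote escaping differently), so treat the details as
assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for quote_and_append_arg: append the prefix plus the
 * argument wrapped in single quotes, growing the output buffer as needed.
 * Unlike the real helper, this sketch does not escape quote characters
 * embedded in the argument itself. */
static void quote_and_append_arg(char **output, size_t *output_size,
                                 const char *prefix, const char *arg) {
  size_t needed = strlen(*output) + strlen(prefix) + strlen(arg) + 4;
  if (needed > *output_size) {
    *output_size = needed * 2;
    *output = realloc(*output, *output_size);
  }
  strcat(*output, prefix);
  strcat(*output, "'");
  strcat(*output, arg);
  strcat(*output, "' ");
}

int main(void) {
  /* Trailing docker arguments left over after option parsing, as in the
   * test input above. */
  const char *linesplit[] = { "ubuntu", "bash",
      "/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh" };
  int optind = 0, split_counter = 3;
  size_t output_size = 16;
  char *output = calloc(1, output_size);

  /* The fixed loop: each trailing argument is quoted independently. */
  while (optind < split_counter) {
    quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
  }

  /* Prints: 'ubuntu' 'bash' '/yarn/local/.../launch_container.sh' */
  printf("%s\n", output);
  free(output);
  return 0;
}

Quoting per argument keeps the buffer growth inside the helper (the removed
strcat calls could write past whatever quote_and_append_arg had allocated for
the first argument alone) and avoids gluing separate argv entries into a
single shell word.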

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bf03844/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index fcc05a3..7a4dda2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -715,11 +715,6 @@ void test_run_container() {
 
 void test_sanitize_docker_command() {
 
-/*
-  char *input[] = {
-"run "
-  };
-*/
   char *input[] = {
 "run --name=cname --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 
--cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --cap-drop=ALL 
--cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP 
--cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID 
--cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE 
--cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro 
-v /yarn/local/cdir:/yarn/local/cdir -v 
/yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash 
/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
 "run --name=$CID --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 
--cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --cap-drop=ALL 
--cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP 
--cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID 
--cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE 
--cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro 
-v /yarn/local/cdir:/yarn/local/cdir -v 
/yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash 
/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
@@ -727,17 +722,12 @@ void test_sanitize_docker_command() {

[2/4] hadoop git commit: Addendum patch to fix Docker sanitization.

2017-05-25 Thread vvasudev
Addendum patch to fix Docker sanitization.

(cherry picked from commit 2ff2a1f50e8c7c0f33676b010b256d6c8daf912d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1a6c5323
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1a6c5323
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1a6c5323

Branch: refs/heads/branch-2
Commit: 1a6c532301dfadcd55525ad2190b1ed0647efb95
Parents: ca1c0cb
Author: Varun Vasudev 
Authored: Wed May 24 16:03:28 2017 +0530
Committer: Varun Vasudev 
Committed: Thu May 25 14:54:08 2017 +0530

--
 .../impl/container-executor.c   |  6 +-
 .../test/test-container-executor.c  | 20 +---
 2 files changed, 6 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a6c5323/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 618c602..410861f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -1288,13 +1288,9 @@ char* sanitize_docker_command(const char *line) {
   }
 
   if(optind < split_counter) {
-quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
-strcat(output, "'");
 while(optind < split_counter) {
-  strcat(output, linesplit[optind++]);
-  strcat(output, " ");
+  quote_and_append_arg(&output, &output_size, "", linesplit[optind++]);
 }
-strcat(output, "'");
   }
 
   return output;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a6c5323/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index fd99325..5edcfa2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -1015,11 +1015,6 @@ void test_recursive_unlink_children() {
 
 void test_sanitize_docker_command() {
 
-/*
-  char *input[] = {
-"run "
-  };
-*/
   char *input[] = {
 "run --name=cname --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 
--cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --cap-drop=ALL 
--cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP 
--cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID 
--cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE 
--cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro 
-v /yarn/local/cdir:/yarn/local/cdir -v 
/yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash 
/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
 "run --name=$CID --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 
--cgroup-parent=/sys/fs/cgroup/cpu/yarn/cid --net=host --cap-drop=ALL 
--cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP --cap-add=SETPCAP 
--cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE --cap-add=SETGID 
--cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID --cap-add=DAC_OVERRIDE 
--cap-add=KILL --cap-add=NET_BIND_SERVICE -v /sys/fs/cgroup:/sys/fs/cgroup:ro 
-v /yarn/local/cdir:/yarn/local/cdir -v 
/yarn/local/usercache/test/:/yarn/local/usercache/test/ ubuntu bash 
/yarn/local/usercache/test/appcache/aid/cid/launch_container.sh",
@@ -1027,17 +1022,12 @@ void test_sanitize_docker_command() {
 "run --name=cname --user=nobody -d --workdir=/yarn/local/cdir --privileged 
--rm --device=/sys/fs/cgroup/device:/sys/fs/cgroup/device --detach=true 

hadoop git commit: YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable codepath. Contributed by Sonia Garudi and Ayappan.

2017-05-25 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8.1 5ab55390e -> e9835c107


YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable codepath. 
Contributed by Sonia Garudi and Ayappan.

(cherry picked from commit bc28da65fb1c67904aa3cefd7273cb7423521014)
(cherry picked from commit ca1c0cbc62b577e03ed59efb3f9050cba59be8a0)
(cherry picked from commit d50f2407846f1d16720e40b81828d8a37eeb6fc3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e9835c10
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e9835c10
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e9835c10

Branch: refs/heads/branch-2.8.1
Commit: e9835c107074c061aa6cdb91726e5def6273a865
Parents: 5ab5539
Author: Akira Ajisaka 
Authored: Thu May 25 17:06:26 2017 +0900
Committer: Akira Ajisaka 
Committed: Thu May 25 17:08:45 2017 +0900

--
 .../src/main/native/container-executor/impl/get_executable.c   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9835c10/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
index 49ae093..ce46b77 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
@@ -142,7 +142,7 @@ char* get_executable(char *argv0) {
   return __get_exec_sysctl(mib);
 }
 
-#elif defined(__linux)
+#elif defined(__linux__)
 
 
 char* get_executable(char *argv0) {
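
Since this same one-line fix lands on each branch below, a short aside on
what it changes: __linux__ is the standard predefined macro that essentially
every Linux compiler provides, while the shorter __linux spelling is a legacy
alias that not every toolchain defines; per the JIRA title, it was missing on
the ppc64le toolchain, so the #elif never matched and get_executable fell
through past the Linux branch. A tiny standalone probe (nothing here is from
container-executor; it just reports which spellings your own compiler
defines):

#include <stdio.h>

int main(void) {
#if defined(__linux__)
  puts("__linux__ defined (standard spelling)");
#else
  puts("__linux__ not defined");
#endif
#if defined(__linux)
  puts("__linux defined (legacy alias)");
#else
  puts("__linux not defined");
#endif
  return 0;
}

On toolchains that provide both spellings the probe prints both "defined"
lines; on the ppc64le setup described in YARN-6141 only the first would,
which is why guarding on __linux__ is the portable choice.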





hadoop git commit: YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable codepath. Contributed by Sonia Garudi and Ayappan.

2017-05-25 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 caad191c0 -> d50f24078


YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable codepath. 
Contributed by Sonia Garudi and Ayappan.

(cherry picked from commit bc28da65fb1c67904aa3cefd7273cb7423521014)
(cherry picked from commit ca1c0cbc62b577e03ed59efb3f9050cba59be8a0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d50f2407
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d50f2407
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d50f2407

Branch: refs/heads/branch-2.8
Commit: d50f2407846f1d16720e40b81828d8a37eeb6fc3
Parents: caad191
Author: Akira Ajisaka 
Authored: Thu May 25 17:06:26 2017 +0900
Committer: Akira Ajisaka 
Committed: Thu May 25 17:08:18 2017 +0900

--
 .../src/main/native/container-executor/impl/get_executable.c   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d50f2407/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
index 49ae093..ce46b77 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
@@ -142,7 +142,7 @@ char* get_executable(char *argv0) {
   return __get_exec_sysctl(mib);
 }
 
-#elif defined(__linux)
+#elif defined(__linux__)
 
 
 char* get_executable(char *argv0) {





hadoop git commit: YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable codepath. Contributed by Sonia Garudi and Ayappan.

2017-05-25 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 dd552a97b -> ca1c0cbc6


YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable codepath. 
Contributed by Sonia Garudi and Ayappan.

(cherry picked from commit bc28da65fb1c67904aa3cefd7273cb7423521014)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ca1c0cbc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ca1c0cbc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ca1c0cbc

Branch: refs/heads/branch-2
Commit: ca1c0cbc62b577e03ed59efb3f9050cba59be8a0
Parents: dd552a9
Author: Akira Ajisaka 
Authored: Thu May 25 17:06:26 2017 +0900
Committer: Akira Ajisaka 
Committed: Thu May 25 17:07:50 2017 +0900

--
 .../src/main/native/container-executor/impl/get_executable.c   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca1c0cbc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
index 49ae093..ce46b77 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
@@ -142,7 +142,7 @@ char* get_executable(char *argv0) {
   return __get_exec_sysctl(mib);
 }
 
-#elif defined(__linux)
+#elif defined(__linux__)
 
 
 char* get_executable(char *argv0) {





hadoop git commit: YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable codepath. Contributed by Sonia Garudi and Ayappan.

2017-05-25 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 6a52b5e14 -> bc28da65f


YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable codepath. 
Contributed by Sonia Garudi and Ayappan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bc28da65
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bc28da65
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bc28da65

Branch: refs/heads/trunk
Commit: bc28da65fb1c67904aa3cefd7273cb7423521014
Parents: 6a52b5e
Author: Akira Ajisaka 
Authored: Thu May 25 17:06:26 2017 +0900
Committer: Akira Ajisaka 
Committed: Thu May 25 17:06:26 2017 +0900

--
 .../src/main/native/container-executor/impl/get_executable.c   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc28da65/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
index 49ae093..ce46b77 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
@@ -142,7 +142,7 @@ char* get_executable(char *argv0) {
   return __get_exec_sysctl(mib);
 }
 
-#elif defined(__linux)
+#elif defined(__linux__)
 
 
 char* get_executable(char *argv0) {

