hadoop git commit: HDFS-9885. Correct the distcp counters name while displaying counters. Contributed by Surendra Singh Lilhore

2016-09-26 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 aaff2033b -> ff4fb48da


HDFS-9885. Correct the distcp counters name while displaying counters. 
Contributed by Surendra Singh Lilhore

(cherry picked from commit e17a4970bea8213660bb6c550104783069153236)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ff4fb48d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ff4fb48d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ff4fb48d

Branch: refs/heads/branch-2.7
Commit: ff4fb48da1068e0b19a2b1e55ce20b6c800f7893
Parents: aaff203
Author: Brahma Reddy Battula 
Authored: Tue Sep 27 10:45:12 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Tue Sep 27 10:53:51 2016 +0530

--
 .../tools/mapred/CopyMapper_Counter.properties  | 24 
 1 file changed, 24 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ff4fb48d/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
 
b/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
new file mode 100644
index 000..53727ee
--- /dev/null
+++ 
b/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
@@ -0,0 +1,24 @@
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+# ResourceBundle properties file for DistCp counters
+
+CounterGroupName= DistCp Counters
+
+COPY.name=Files Copied
+SKIP.name=Files Skipped
+FAIL.name=Files Failed
+BYTESCOPIED.name= Bytes Copied
+BYTESEXPECTED.name=   Bytes Expected
+BYTESFAILED.name= Bytes Failed
+BYTESSKIPPED.name=Bytes Skipped
+BANDWIDTH_IN_BYTES.name=  Bandwidth in Bytes
\ No newline at end of file
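The file added above is a standard Java ResourceBundle properties file: MapReduce resolves a counter's display name under the `<ENUM>.name` key and the group title under `CounterGroupName`. A minimal sketch of that lookup, using an in-memory stand-in for the bundle (the class name and inline string are hypothetical, not Hadoop code):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class CounterNames {
    // In-memory stand-in for CopyMapper_Counter.properties; in Hadoop the
    // real bundle sits on the classpath next to the counter enum class.
    static final String BUNDLE =
        "CounterGroupName=DistCp Counters\n"
        + "COPY.name=Files Copied\n"
        + "BYTESCOPIED.name=Bytes Copied\n";

    static final Properties PROPS = new Properties();
    static {
        try {
            PROPS.load(new StringReader(BUNDLE));
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    /** Resolve a counter's display name, falling back to the enum name. */
    static String displayName(String counter) {
        return PROPS.getProperty(counter + ".name", counter);
    }

    public static void main(String[] args) {
        System.out.println(displayName("COPY"));  // Files Copied
        System.out.println(displayName("FOO"));   // FOO (no mapping, falls back)
    }
}
```

The fallback to the raw enum name is what made the pre-patch output ugly: with no bundle present, DistCp printed the bare enum constants instead of readable names.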


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[1/3] hadoop git commit: HDFS-9885. Correct the distcp counters name while displaying counters. Contributed by Surendra Singh Lilhore

2016-09-26 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 809a45a60 -> 5737d04c5
  refs/heads/branch-2.8 f0f3b6a66 -> 686cbf45d
  refs/heads/trunk edf0d0f8b -> e17a4970b


HDFS-9885. Correct the distcp counters name while displaying counters. 
Contributed by Surendra Singh Lilhore


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e17a4970
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e17a4970
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e17a4970

Branch: refs/heads/trunk
Commit: e17a4970bea8213660bb6c550104783069153236
Parents: edf0d0f
Author: Brahma Reddy Battula 
Authored: Tue Sep 27 10:45:12 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Tue Sep 27 10:45:12 2016 +0530

--
 .../tools/mapred/CopyMapper_Counter.properties  | 24 
 1 file changed, 24 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e17a4970/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
 
b/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
new file mode 100644
index 000..53727ee
--- /dev/null
+++ 
b/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
@@ -0,0 +1,24 @@
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+# ResourceBundle properties file for DistCp counters
+
+CounterGroupName= DistCp Counters
+
+COPY.name=Files Copied
+SKIP.name=Files Skipped
+FAIL.name=Files Failed
+BYTESCOPIED.name= Bytes Copied
+BYTESEXPECTED.name=   Bytes Expected
+BYTESFAILED.name= Bytes Failed
+BYTESSKIPPED.name=Bytes Skipped
+BANDWIDTH_IN_BYTES.name=  Bandwidth in Bytes
\ No newline at end of file



[3/3] hadoop git commit: HDFS-9885. Correct the distcp counters name while displaying counters. Contributed by Surendra Singh Lilhore

2016-09-26 Thread brahma
HDFS-9885. Correct the distcp counters name while displaying counters. 
Contributed by Surendra Singh Lilhore

(cherry picked from commit e17a4970bea8213660bb6c550104783069153236)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/686cbf45
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/686cbf45
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/686cbf45

Branch: refs/heads/branch-2.8
Commit: 686cbf45d46f68a220aa8e319fa6d1998ca6993a
Parents: f0f3b6a
Author: Brahma Reddy Battula 
Authored: Tue Sep 27 10:45:12 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Tue Sep 27 10:49:04 2016 +0530

--
 .../tools/mapred/CopyMapper_Counter.properties  | 24 
 1 file changed, 24 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/686cbf45/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
 
b/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
new file mode 100644
index 000..53727ee
--- /dev/null
+++ 
b/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
@@ -0,0 +1,24 @@
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+# ResourceBundle properties file for DistCp counters
+
+CounterGroupName= DistCp Counters
+
+COPY.name=Files Copied
+SKIP.name=Files Skipped
+FAIL.name=Files Failed
+BYTESCOPIED.name= Bytes Copied
+BYTESEXPECTED.name=   Bytes Expected
+BYTESFAILED.name= Bytes Failed
+BYTESSKIPPED.name=Bytes Skipped
+BANDWIDTH_IN_BYTES.name=  Bandwidth in Bytes
\ No newline at end of file



[2/3] hadoop git commit: HDFS-9885. Correct the distcp counters name while displaying counters. Contributed by Surendra Singh Lilhore

2016-09-26 Thread brahma
HDFS-9885. Correct the distcp counters name while displaying counters. 
Contributed by Surendra Singh Lilhore

(cherry picked from commit e17a4970bea8213660bb6c550104783069153236)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5737d04c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5737d04c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5737d04c

Branch: refs/heads/branch-2
Commit: 5737d04c5fa24b7d0146caba3a743f881a7d54c2
Parents: 809a45a
Author: Brahma Reddy Battula 
Authored: Tue Sep 27 10:45:12 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Tue Sep 27 10:47:07 2016 +0530

--
 .../tools/mapred/CopyMapper_Counter.properties  | 24 
 1 file changed, 24 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5737d04c/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
 
b/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
new file mode 100644
index 000..53727ee
--- /dev/null
+++ 
b/hadoop-tools/hadoop-distcp/src/main/resources/org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties
@@ -0,0 +1,24 @@
+#   Licensed under the Apache License, Version 2.0 (the "License");
+#   you may not use this file except in compliance with the License.
+#   You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+
+# ResourceBundle properties file for DistCp counters
+
+CounterGroupName= DistCp Counters
+
+COPY.name=Files Copied
+SKIP.name=Files Skipped
+FAIL.name=Files Failed
+BYTESCOPIED.name= Bytes Copied
+BYTESEXPECTED.name=   Bytes Expected
+BYTESFAILED.name= Bytes Failed
+BYTESSKIPPED.name=Bytes Skipped
+BANDWIDTH_IN_BYTES.name=  Bandwidth in Bytes
\ No newline at end of file



hadoop git commit: HDFS-9895. Remove unnecessary conf cache from DataNode. Contributed by Xiaobing Zhou.

2016-09-26 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 bde787db2 -> 809a45a60


HDFS-9895. Remove unnecessary conf cache from DataNode. Contributed by Xiaobing 
Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/809a45a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/809a45a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/809a45a6

Branch: refs/heads/branch-2
Commit: 809a45a60cf5738e0255eb1f55361a4aff230bfc
Parents: bde787d
Author: Arpit Agarwal 
Authored: Mon Sep 26 19:24:16 2016 -0700
Committer: Arpit Agarwal 
Committed: Mon Sep 26 19:24:16 2016 -0700

--
 .../hdfs/server/datanode/BlockScanner.java  |   4 +
 .../hadoop/hdfs/server/datanode/DNConf.java |  94 +++---
 .../hadoop/hdfs/server/datanode/DataNode.java   | 123 ++-
 .../server/datanode/TestBPOfferService.java |   4 +-
 .../TestDataXceiverLazyPersistHint.java |   5 +-
 .../fsdataset/impl/TestFsDatasetImpl.java   |   6 +-
 6 files changed, 125 insertions(+), 111 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/809a45a6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
index be6aa83..456dcc1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockScanner.java
@@ -165,6 +165,10 @@ public class BlockScanner {
 }
   }
 
+  public BlockScanner(DataNode datanode) {
+this(datanode, datanode.getConf());
+  }
+
   public BlockScanner(DataNode datanode, Configuration conf) {
 this.datanode = datanode;
 this.conf = new Conf(conf);
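The new one-arg `BlockScanner(DataNode)` constructor above simply delegates to the two-arg form using the owner's `getConf()`, so the component no longer needs its own cached `Configuration`. A self-contained sketch of that delegating-constructor refactor, with stand-in classes (none of these are the real Hadoop types):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfDelegation {
    /** Minimal stand-in for Hadoop's Configuration (hypothetical). */
    static class Config {
        private final Map<String, Integer> values = new HashMap<>();
        void setInt(String key, int v) { values.put(key, v); }
        int getInt(String key, int dflt) { return values.getOrDefault(key, dflt); }
    }

    /** Owner object exposing its configuration, like DataNode.getConf(). */
    static class Owner {
        private final Config conf;
        Owner(Config conf) { this.conf = conf; }
        Config getConf() { return conf; }
    }

    /** Component that no longer caches a Configuration field of its own. */
    static class Scanner {
        final int intervalMs;

        // Convenience constructor: read config straight from the owner,
        // mirroring the new BlockScanner(DataNode) constructor above.
        Scanner(Owner owner) {
            this(owner, owner.getConf());
        }

        Scanner(Owner owner, Config conf) {
            this.intervalMs = conf.getInt("scan.interval.ms", 1000);
        }
    }

    public static void main(String[] args) {
        Config c = new Config();
        c.setInt("scan.interval.ms", 250);
        System.out.println(new Scanner(new Owner(c)).intervalMs);  // 250
    }
}
```

Keeping one authoritative copy of the configuration (on the owner) avoids the stale-cache bugs the JIRA targets, since every read goes through `getConf()`.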

http://git-wip-us.apache.org/repos/asf/hadoop/blob/809a45a6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
index 3dd5177..09f336a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
@@ -56,6 +56,7 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.IGNORE_SECURE_PORTS_FOR_TESTI
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_BP_READY_TIMEOUT_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_BP_READY_TIMEOUT_DEFAULT;
 
+import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -70,7 +71,6 @@ import org.apache.hadoop.security.SaslPropertiesResolver;
  */
 @InterfaceAudience.Private
 public class DNConf {
-  final Configuration conf;
   final int socketTimeout;
   final int socketWriteTimeout;
   final int socketKeepaliveTimeout;
@@ -117,70 +117,75 @@ public class DNConf {
   private final int volFailuresTolerated;
   private final int volsConfigured;
   private final int maxDataLength;
+  private Configurable dn;
 
-  public DNConf(Configuration conf) {
-this.conf = conf;
-socketTimeout = conf.getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY,
+  public DNConf(final Configurable dn) {
+this.dn = dn;
+socketTimeout = getConf().getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY,
 HdfsConstants.READ_TIMEOUT);
-socketWriteTimeout = conf.getInt(DFS_DATANODE_SOCKET_WRITE_TIMEOUT_KEY,
+socketWriteTimeout = getConf().getInt(
+DFS_DATANODE_SOCKET_WRITE_TIMEOUT_KEY,
 HdfsConstants.WRITE_TIMEOUT);
-socketKeepaliveTimeout = conf.getInt(
+socketKeepaliveTimeout = getConf().getInt(
 DFSConfigKeys.DFS_DATANODE_SOCKET_REUSE_KEEPALIVE_KEY,
 DFSConfigKeys.DFS_DATANODE_SOCKET_REUSE_KEEPALIVE_DEFAULT);
-this.transferSocketSendBufferSize = conf.getInt(
+this.transferSocketSendBufferSize = getConf().getInt(
 DFSConfigKeys.DFS_DATANODE_TRANSFER_SOCKET_SEND_BUFFER_SIZE_KEY,
 DFSConfigKeys.DFS_DATANODE_TRANSFER_SOCKET_SEND_BUFFER_SIZE_DEFAULT);
-this.transferSocketRecvBufferSize = conf.getInt(
+this.transferSocketRecvBufferSize = getConf().getInt(
 

hadoop git commit: HDFS-9444. Add utility to find set of available ephemeral ports to ServerSocketUtil. Contributed by Masatake Iwasaki

2016-09-26 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 059058f96 -> edf0d0f8b


HDFS-9444. Add utility to find set of available ephemeral ports to 
ServerSocketUtil. Contributed by Masatake Iwasaki


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/edf0d0f8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/edf0d0f8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/edf0d0f8

Branch: refs/heads/trunk
Commit: edf0d0f8b2115d4edb5d4932b5ecb15430d94c40
Parents: 059058f
Author: Brahma Reddy Battula 
Authored: Tue Sep 27 07:04:37 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Tue Sep 27 07:04:37 2016 +0530

--
 .../org/apache/hadoop/net/ServerSocketUtil.java | 23 +++
 .../server/namenode/TestNameNodeMXBean.java | 34 +++-
 .../server/namenode/ha/TestEditLogTailer.java   | 42 +---
 3 files changed, 75 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/edf0d0f8/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
index 023c1ed..a294e74 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
@@ -102,4 +102,27 @@ public class ServerSocketUtil {
   }
 }
   }
+
+  /**
+   * Find the specified number of unique ports available.
+   * The ports are all closed afterwards,
+   * so other network services started may grab those same ports.
+   *
+   * @param numPorts number of required port numbers
+   * @return array of available port numbers
+   * @throws IOException
+   */
+  public static int[] getPorts(int numPorts) throws IOException {
+ServerSocket[] sockets = new ServerSocket[numPorts];
+int[] ports = new int[numPorts];
+for (int i = 0; i < numPorts; i++) {
+  ServerSocket sock = new ServerSocket(0);
+  sockets[i] = sock;
+  ports[i] = sock.getLocalPort();
+}
+for (ServerSocket sock : sockets) {
+  sock.close();
+}
+return ports;
+  }
 }
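`getPorts` above binds `numPorts` listening sockets to kernel-assigned ports, records the port numbers, and closes them all; a caller can therefore still lose the race to another process and must retry on `BindException`, which is exactly what the `TestNameNodeMXBean` change below does. A standalone sketch of the same technique (hypothetical class name, with a try/finally added so sockets are released even if one bind fails):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePorts {
    /**
     * Find {@code numPorts} distinct free ephemeral ports. All sockets are
     * closed before returning, so a later bind may still fail and callers
     * should be prepared to retry.
     */
    public static int[] getPorts(int numPorts) throws IOException {
        ServerSocket[] sockets = new ServerSocket[numPorts];
        int[] ports = new int[numPorts];
        try {
            for (int i = 0; i < numPorts; i++) {
                sockets[i] = new ServerSocket(0);  // 0 = kernel picks the port
                ports[i] = sockets[i].getLocalPort();
            }
        } finally {
            for (ServerSocket sock : sockets) {
                if (sock != null) {
                    sock.close();
                }
            }
        }
        return ports;
    }

    public static void main(String[] args) throws IOException {
        int[] ports = getPorts(2);
        System.out.println(ports[0] + " " + ports[1]);
    }
}
```

Because all the sockets are held open simultaneously before any is closed, the returned ports are guaranteed distinct from each other, even though none is guaranteed still free later.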

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edf0d0f8/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
index dc8bea7..ac97a36 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
@@ -47,6 +47,7 @@ import javax.management.MBeanServer;
 import javax.management.ObjectName;
 import java.io.File;
 import java.lang.management.ManagementFactory;
+import java.net.BindException;
 import java.net.URI;
 import java.util.ArrayList;
 import java.util.Collection;
@@ -59,6 +60,7 @@ import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 /**
  * Class for testing {@link NameNodeMXBean} implementation
@@ -431,17 +433,29 @@ public class TestNameNodeMXBean {
   public void testNNDirectorySize() throws Exception{
 Configuration conf = new Configuration();
 conf.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1);
-// Have to specify IPC ports so the NNs can talk to each other.
-MiniDFSNNTopology topology = new MiniDFSNNTopology()
-.addNameservice(new MiniDFSNNTopology.NSConf("ns1")
-.addNN(new MiniDFSNNTopology.NNConf("nn1")
-.setIpcPort(ServerSocketUtil.getPort(0, 100)))
-.addNN(new MiniDFSNNTopology.NNConf("nn2")
-.setIpcPort(ServerSocketUtil.getPort(0, 100))));
+MiniDFSCluster cluster = null;
+for (int i = 0; i < 5; i++) {
+  try{
+// Have to specify IPC ports so the NNs can talk to each other.
+int[] ports = ServerSocketUtil.getPorts(2);
+MiniDFSNNTopology topology = new MiniDFSNNTopology()
+.addNameservice(new MiniDFSNNTopology.NSConf("ns1")
+.addNN(new 

hadoop git commit: HDFS-10713. Throttle FsNameSystem lock warnings. Contributed by Hanisha Koneru.

2016-09-26 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8e06d865c -> 059058f96


HDFS-10713. Throttle FsNameSystem lock warnings. Contributed by Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/059058f9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/059058f9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/059058f9

Branch: refs/heads/trunk
Commit: 059058f9614613667d5385f76022294e07e140aa
Parents: 8e06d86
Author: Arpit Agarwal 
Authored: Mon Sep 26 17:09:47 2016 -0700
Committer: Arpit Agarwal 
Committed: Mon Sep 26 17:09:47 2016 -0700

--
 .../hdfs/server/namenode/FSNamesystem.java  | 101 ---
 .../hdfs/server/namenode/TestFSNamesystem.java  |  85 
 2 files changed, 150 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/059058f9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 0459840..7f8981f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -75,6 +75,8 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_WRITE_LOCK_REPOR
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_WRITE_LOCK_REPORTING_THRESHOLD_MS_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_READ_LOCK_REPORTING_THRESHOLD_MS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_READ_LOCK_REPORTING_THRESHOLD_MS_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_LOCK_SUPPRESS_WARNING_INTERVAL_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_LOCK_SUPPRESS_WARNING_INTERVAL_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RETRY_CACHE_EXPIRYTIME_MILLIS_DEFAULT;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RETRY_CACHE_EXPIRYTIME_MILLIS_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RETRY_CACHE_HEAP_PERCENT_DEFAULT;
@@ -127,6 +129,8 @@ import java.util.TreeMap;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.locks.Condition;
 import java.util.concurrent.locks.ReentrantLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
@@ -280,6 +284,7 @@ import org.apache.hadoop.util.Daemon;
 import org.apache.hadoop.util.DataChecksum;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.util.Timer;
 import org.apache.hadoop.util.VersionInfo;
 import org.apache.log4j.Appender;
 import org.apache.log4j.AsyncAppender;
@@ -713,6 +718,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 fsLock = new FSNamesystemLock(fair);
 cond = fsLock.writeLock().newCondition();
 cpLock = new ReentrantLock();
+setTimer(new Timer());
 
 this.fsImage = fsImage;
 try {
@@ -828,6 +834,10 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   DFS_NAMENODE_READ_LOCK_REPORTING_THRESHOLD_MS_KEY,
   DFS_NAMENODE_READ_LOCK_REPORTING_THRESHOLD_MS_DEFAULT);
 
+  this.lockSuppressWarningInterval = conf.getTimeDuration(
+  DFS_LOCK_SUPPRESS_WARNING_INTERVAL_KEY,
+  DFS_LOCK_SUPPRESS_WARNING_INTERVAL_DEFAULT, TimeUnit.MILLISECONDS);
+
   // For testing purposes, allow the DT secret manager to be started 
regardless
   // of whether security is enabled.
   alwaysUseDelegationTokensForTests = conf.getBoolean(
@@ -1506,12 +1516,20 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 return Util.stringCollectionAsURIs(dirNames);
   }
 
+  private final long lockSuppressWarningInterval;
   /** Threshold (ms) for long holding write lock report. */
-  private long writeLockReportingThreshold;
+  private final long writeLockReportingThreshold;
+  private int numWriteLockWarningsSuppressed = 0;
+  private long timeStampOfLastWriteLockReport = 0;
+  private long longestWriteLockHeldInterval = 0;
   /** Last time stamp for write lock. Keep the longest one for 
multi-entrance.*/
   private long writeLockHeldTimeStamp;
   /** Threshold (ms) for long holding read 
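The patch reports a long lock hold at most once per suppress-warning interval, counting how many reports were swallowed in between and folding that count into the next emitted warning. A minimal sketch of that throttling logic under hypothetical names (the real code lives inside FSNamesystem and reads the clock through a `Timer`):

```java
/** Emit at most one warning per interval; count the rest as suppressed. */
public class ThrottledWarner {
    private final long intervalMs;
    private long lastReportMs;
    private int suppressed = 0;

    public ThrottledWarner(long intervalMs) {
        this.intervalMs = intervalMs;
        // Back-date the last report so the very first warning is emitted.
        this.lastReportMs = -intervalMs;
    }

    /** Returns the text to log when a warning is due, or null if suppressed. */
    public String maybeWarn(long nowMs, String message) {
        if (nowMs - lastReportMs < intervalMs) {
            suppressed++;  // still inside the interval: swallow this one
            return null;
        }
        String report = message + " (" + suppressed + " suppressed)";
        suppressed = 0;
        lastReportMs = nowMs;
        return report;
    }

    public static void main(String[] args) {
        ThrottledWarner w = new ThrottledWarner(1000);
        System.out.println(w.maybeWarn(0, "write lock held too long"));
        System.out.println(w.maybeWarn(500, "write lock held too long"));  // null
        System.out.println(w.maybeWarn(1500, "write lock held too long"));
    }
}
```

This keeps a misbehaving workload from flooding the NameNode log while still surfacing how many slow lock holds actually occurred.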

hadoop git commit: HDFS-10609. Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications. Contributed by Wei-Chiu Chuang.

2016-09-26 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 09964a162 -> f0f3b6a66


HDFS-10609. Uncaught InvalidEncryptionKeyException during pipeline recovery may 
abort downstream applications. Contributed by Wei-Chiu Chuang.

(cherry picked from commit 3ae652f82110a52bf239f3c1849b48981558eb19)
(cherry picked from commit bde787db23dd38388dac045d421394006ba63bed)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f0f3b6a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f0f3b6a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f0f3b6a6

Branch: refs/heads/branch-2.8
Commit: f0f3b6a66a4437a126b420cbe33aba54c71500d6
Parents: 09964a1
Author: Wei-Chiu Chuang 
Authored: Mon Sep 26 14:44:48 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Mon Sep 26 14:46:02 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |   5 +
 .../org/apache/hadoop/hdfs/DataStreamer.java| 148 +++-
 .../block/BlockPoolTokenSecretManager.java  |   3 +-
 .../token/block/BlockTokenSecretManager.java|   6 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |   5 +
 .../hadoop/hdfs/TestEncryptedTransfer.java  | 742 ---
 6 files changed, 436 insertions(+), 473 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0f3b6a6/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 4abf234..6244ee4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1772,6 +1772,11 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 }
   }
 
+  @VisibleForTesting
+  public DataEncryptionKey getEncryptionKey() {
+return encryptionKey;
+  }
+
   /**
 * Get the checksum of the whole file or a range of the file. Note that the
* range always starts from the beginning of the file.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0f3b6a6/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index bdd20c4..2a7f8c0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -124,6 +124,89 @@ import javax.annotation.Nonnull;
 class DataStreamer extends Daemon {
   static final Logger LOG = LoggerFactory.getLogger(DataStreamer.class);
 
+  private class RefetchEncryptionKeyPolicy {
+private int fetchEncryptionKeyTimes = 0;
+private InvalidEncryptionKeyException lastException;
+private final DatanodeInfo src;
+
+RefetchEncryptionKeyPolicy(DatanodeInfo src) {
+  this.src = src;
+}
+boolean continueRetryingOrThrow() throws InvalidEncryptionKeyException {
+  if (fetchEncryptionKeyTimes >= 2) {
+// hit the same exception twice connecting to the node, so
+// throw the exception and exclude the node.
+throw lastException;
+  }
+  // Don't exclude this node just yet.
+  // Try again with a new encryption key.
+  LOG.info("Will fetch a new encryption key and retry, "
+  + "encryption key was invalid when connecting to "
+  + this.src + ": ", lastException);
+  // The encryption key used is invalid.
+  dfsClient.clearDataEncryptionKey();
+  return true;
+}
+
+/**
+ * Record a connection exception.
+ * @param e
+ * @throws InvalidEncryptionKeyException
+ */
+void recordFailure(final InvalidEncryptionKeyException e)
+throws InvalidEncryptionKeyException {
+  fetchEncryptionKeyTimes++;
+  lastException = e;
+}
+  }
+
+  private class StreamerStreams implements java.io.Closeable {
+private Socket sock = null;
+private DataOutputStream out = null;
+private DataInputStream in = null;
+
+StreamerStreams(final DatanodeInfo src,
+final long writeTimeout, final long readTimeout,
+final Token blockToken)
+throws IOException {
+  sock = createSocketForPipeline(src, 2, dfsClient);
+
+  OutputStream unbufOut = 
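`RefetchEncryptionKeyPolicy` above lets the streamer clear its cached data encryption key and retry once per datanode; hitting `InvalidEncryptionKeyException` against the same node a second time rethrows so the node gets excluded from the pipeline. The same retry-once-then-throw policy as a standalone sketch (generic names, not the Hadoop API):

```java
/** Allow one retry after a failure; on the second failure, rethrow. */
public class RetryOncePolicy {
    private int failures = 0;
    private Exception last;

    /** Record a connection failure before deciding whether to retry. */
    public void recordFailure(Exception e) {
        failures++;
        last = e;
    }

    /**
     * True if the caller should refresh its credentials and try again;
     * after a second failure, rethrows the last exception so the caller
     * can give up on this peer.
     */
    public boolean continueRetryingOrThrow() throws Exception {
        if (failures >= 2) {
            throw last;  // same failure twice against this peer: give up
        }
        return true;     // caller clears the stale key and retries once
    }
}
```

Bounding the retries at one per peer is what keeps a genuinely bad key from looping forever, while still surviving the common case of a key that expired between block writes.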

hadoop git commit: HDFS-10609. Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications. Contributed by Wei-Chiu Chuang.

2016-09-26 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 06187e4f9 -> bde787db2


HDFS-10609. Uncaught InvalidEncryptionKeyException during pipeline recovery may 
abort downstream applications. Contributed by Wei-Chiu Chuang.

(cherry picked from commit 3ae652f82110a52bf239f3c1849b48981558eb19)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bde787db
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bde787db
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bde787db

Branch: refs/heads/branch-2
Commit: bde787db23dd38388dac045d421394006ba63bed
Parents: 06187e4
Author: Wei-Chiu Chuang 
Authored: Mon Sep 26 14:44:48 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Mon Sep 26 14:44:48 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |   5 +
 .../org/apache/hadoop/hdfs/DataStreamer.java| 148 +++-
 .../block/BlockPoolTokenSecretManager.java  |   3 +-
 .../token/block/BlockTokenSecretManager.java|   6 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |   5 +
 .../hadoop/hdfs/TestEncryptedTransfer.java  | 742 ---
 6 files changed, 436 insertions(+), 473 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bde787db/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 74276e4..66c799e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1772,6 +1772,11 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 }
   }
 
+  @VisibleForTesting
+  public DataEncryptionKey getEncryptionKey() {
+return encryptionKey;
+  }
+
   /**
 * Get the checksum of the whole file or a range of the file. Note that the
* range always starts from the beginning of the file.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bde787db/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 67581e9..bf99422 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -124,6 +124,89 @@ import javax.annotation.Nonnull;
 class DataStreamer extends Daemon {
   static final Logger LOG = LoggerFactory.getLogger(DataStreamer.class);
 
+  private class RefetchEncryptionKeyPolicy {
+private int fetchEncryptionKeyTimes = 0;
+private InvalidEncryptionKeyException lastException;
+private final DatanodeInfo src;
+
+RefetchEncryptionKeyPolicy(DatanodeInfo src) {
+  this.src = src;
+}
+boolean continueRetryingOrThrow() throws InvalidEncryptionKeyException {
+  if (fetchEncryptionKeyTimes >= 2) {
+// hit the same exception twice connecting to the node, so
+// throw the exception and exclude the node.
+throw lastException;
+  }
+  // Don't exclude this node just yet.
+  // Try again with a new encryption key.
+  LOG.info("Will fetch a new encryption key and retry, "
+  + "encryption key was invalid when connecting to "
+  + this.src + ": ", lastException);
+  // The encryption key used is invalid.
+  dfsClient.clearDataEncryptionKey();
+  return true;
+}
+
+/**
+ * Record a connection exception.
+ * @param e
+ * @throws InvalidEncryptionKeyException
+ */
+void recordFailure(final InvalidEncryptionKeyException e)
+throws InvalidEncryptionKeyException {
+  fetchEncryptionKeyTimes++;
+  lastException = e;
+}
+  }
+
+  private class StreamerStreams implements java.io.Closeable {
+private Socket sock = null;
+private DataOutputStream out = null;
+private DataInputStream in = null;
+
+StreamerStreams(final DatanodeInfo src,
+final long writeTimeout, final long readTimeout,
+final Token<BlockTokenIdentifier> blockToken)
+throws IOException {
+  sock = createSocketForPipeline(src, 2, dfsClient);
+
+  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
+  InputStream unbufIn = NetUtils.getInputStream(sock, readTimeout);

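The RefetchEncryptionKeyPolicy in the diff above caps key refetches at two attempts per node before rethrowing and excluding the node. The retry pattern can be sketched standalone as follows (a minimal illustration with hypothetical connect() and key-state fields, not the Hadoop API):

```java
import java.io.IOException;

public class RetryWithKeyRefetch {
    // Stand-in for Hadoop's InvalidEncryptionKeyException.
    static class InvalidKeyException extends IOException {
        InvalidKeyException(String m) { super(m); }
    }

    int failures = 0;          // mirrors fetchEncryptionKeyTimes
    boolean keyCleared = false; // mirrors dfsClient.clearDataEncryptionKey()

    // Retry at most twice with a freshly fetched key; on a later
    // failure, rethrow so the caller can exclude the node.
    boolean continueRetryingOrThrow(InvalidKeyException last)
            throws InvalidKeyException {
        if (failures >= 2) {
            throw last;        // same error twice: give up on this node
        }
        keyCleared = true;     // force a new key fetch on the next attempt
        return true;
    }

    void connect() throws InvalidKeyException {
        // Simulate a datanode that rejects only the first (stale) key.
        if (!keyCleared) {
            failures++;
            throw new InvalidKeyException("stale key");
        }
    }

    public static void main(String[] args) throws Exception {
        RetryWithKeyRefetch policy = new RetryWithKeyRefetch();
        while (true) {
            try {
                policy.connect();
                System.out.println("connected after " + policy.failures + " failure(s)");
                break;
            } catch (InvalidKeyException e) {
                policy.continueRetryingOrThrow(e);
            }
        }
    }
}
```

The two-attempt cap matters: a genuinely expired key succeeds after one refetch, while a node that keeps rejecting keys is excluded instead of looping forever.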
hadoop git commit: Addendum patch for HDFS-10609. Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications.

2016-09-26 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/trunk 3ae652f82 -> 8e06d865c


Addendum patch for HDFS-10609. Uncaught InvalidEncryptionKeyException during 
pipeline recovery may abort downstream applications.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8e06d865
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8e06d865
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8e06d865

Branch: refs/heads/trunk
Commit: 8e06d865c4848f2ddd1a3ec4ee825e152d8e77c3
Parents: 3ae652f
Author: Wei-Chiu Chuang 
Authored: Mon Sep 26 14:41:04 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Mon Sep 26 14:41:04 2016 -0700

--
 .../src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e06d865/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
index e2b5ead..78020b2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
@@ -318,6 +318,7 @@ public class TestEncryptedTransfer {
 assertEquals(checksum, fs.getFileChecksum(TEST_PATH));
   }
 
+  @Test
   public void testLongLivedClientPipelineRecovery()
   throws IOException, InterruptedException, TimeoutException {
 if (resolverClazz != null) {


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org

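The one-line addendum above restores a missing JUnit 4 @Test annotation; without it, the runner silently skips the method because JUnit 4 discovers tests by reflecting over method annotations. A minimal sketch of that discovery mechanism (a stand-in marker annotation, not org.junit.Test itself):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AnnotationDiscovery {
    // Stand-in for org.junit.Test: a runtime-visible marker annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test {}

    static class SampleSuite {
        @Test public void annotated() {}
        public void notAnnotated() {}  // a JUnit 4 runner never executes this
    }

    // JUnit 4-style discovery: only methods carrying @Test are collected.
    static List<String> discover(Class<?> c) {
        List<String> found = new ArrayList<>();
        for (Method m : c.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                found.add(m.getName());
            }
        }
        return found;
    }

    public static void main(String[] args) {
        System.out.println(discover(SampleSuite.class)); // [annotated]
    }
}
```

This is why testLongLivedClientPipelineRecovery never ran before the addendum: it compiled fine, but the runner simply never saw it.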


hadoop git commit: HDFS-10609. Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications. Contributed by Wei-Chiu Chuang.

2016-09-26 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/trunk fa397e74f -> 3ae652f82


HDFS-10609. Uncaught InvalidEncryptionKeyException during pipeline recovery may 
abort downstream applications. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3ae652f8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3ae652f8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3ae652f8

Branch: refs/heads/trunk
Commit: 3ae652f82110a52bf239f3c1849b48981558eb19
Parents: fa397e7
Author: Wei-Chiu Chuang 
Authored: Mon Sep 26 13:10:17 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Mon Sep 26 13:11:32 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |   5 +
 .../org/apache/hadoop/hdfs/DataStreamer.java| 146 +++-
 .../block/BlockPoolTokenSecretManager.java  |   3 +-
 .../token/block/BlockTokenSecretManager.java|   6 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |   5 +
 .../hadoop/hdfs/TestEncryptedTransfer.java  | 741 ---
 6 files changed, 434 insertions(+), 472 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ae652f8/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index b0fcba2..4c2a967 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1710,6 +1710,11 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
 }
   }
 
+  @VisibleForTesting
+  public DataEncryptionKey getEncryptionKey() {
+return encryptionKey;
+  }
+
   /**
* Get the checksum of the whole file or a range of the file. Note that the
* range always starts from the beginning of the file. The file can be

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ae652f8/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 5166c8c..ef5d21a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -116,6 +116,89 @@ import javax.annotation.Nonnull;
 class DataStreamer extends Daemon {
   static final Logger LOG = LoggerFactory.getLogger(DataStreamer.class);
 
+  private class RefetchEncryptionKeyPolicy {
+private int fetchEncryptionKeyTimes = 0;
+private InvalidEncryptionKeyException lastException;
+private final DatanodeInfo src;
+
+RefetchEncryptionKeyPolicy(DatanodeInfo src) {
+  this.src = src;
+}
+boolean continueRetryingOrThrow() throws InvalidEncryptionKeyException {
+  if (fetchEncryptionKeyTimes >= 2) {
+// hit the same exception twice connecting to the node, so
+// throw the exception and exclude the node.
+throw lastException;
+  }
+  // Don't exclude this node just yet.
+  // Try again with a new encryption key.
+  LOG.info("Will fetch a new encryption key and retry, "
+  + "encryption key was invalid when connecting to "
+  + this.src + ": ", lastException);
+  // The encryption key used is invalid.
+  dfsClient.clearDataEncryptionKey();
+  return true;
+}
+
+/**
+ * Record a connection exception.
+ * @param e
+ * @throws InvalidEncryptionKeyException
+ */
+void recordFailure(final InvalidEncryptionKeyException e)
+throws InvalidEncryptionKeyException {
+  fetchEncryptionKeyTimes++;
+  lastException = e;
+}
+  }
+
+  private class StreamerStreams implements java.io.Closeable {
+private Socket sock = null;
+private DataOutputStream out = null;
+private DataInputStream in = null;
+
+StreamerStreams(final DatanodeInfo src,
+final long writeTimeout, final long readTimeout,
+final Token<BlockTokenIdentifier> blockToken)
+throws IOException {
+  sock = createSocketForPipeline(src, 2, dfsClient);
+
+  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
+  InputStream unbufIn = NetUtils.getInputStream(sock, readTimeout);
+  IOStreamPair 

hadoop git commit: HADOOP-13638. KMS should set UGI's Configuration object properly. Contributed by Wei-Chiu Chuang.

2016-09-26 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4815d024c -> fa397e74f


HADOOP-13638. KMS should set UGI's Configuration object properly. Contributed 
by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa397e74
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa397e74
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa397e74

Branch: refs/heads/trunk
Commit: fa397e74fe988bcbb05c816de73eb738794ace4b
Parents: 4815d02
Author: Xiao Chen 
Authored: Mon Sep 26 13:00:57 2016 -0700
Committer: Xiao Chen 
Committed: Mon Sep 26 13:00:57 2016 -0700

--
 .../hadoop/crypto/key/kms/server/KMSWebApp.java |  2 +
 .../hadoop/crypto/key/kms/server/TestKMS.java   | 76 ++--
 2 files changed, 42 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa397e74/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index e3d1a93..cd773dd 100644
--- a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.crypto.key.KeyProvider;
 import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
 import org.apache.hadoop.crypto.key.KeyProviderFactory;
 import org.apache.hadoop.http.HttpServer2;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.util.VersionInfo;
 import org.apache.log4j.PropertyConfigurator;
@@ -121,6 +122,7 @@ public class KMSWebApp implements ServletContextListener {
   }
   kmsConf = KMSConfiguration.getKMSConf();
   initLogging(confDir);
+  UserGroupInformation.setConfiguration(kmsConf);
   LOG.info("-------------------------------------------------------------");
   LOG.info("  Java runtime version : {}", System.getProperty(
   "java.runtime.version"));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa397e74/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
index 61b9a90..9cbd08a 100644
--- a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
+++ b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
@@ -143,11 +143,31 @@ public class TestKMS {
   }
 
  protected Configuration createBaseKMSConf(File keyStoreDir) throws Exception {
-Configuration conf = new Configuration(false);
-conf.set(KMSConfiguration.KEY_PROVIDER_URI,
+return createBaseKMSConf(keyStoreDir, null);
+  }
+
+  /**
+   * The Configuration object is shared by both KMS client and server in unit
+   * tests because UGI gets/sets it to a static variable.
+   * As a workaround, make sure the client configurations are copied to server
+   * so that client can read them.
+   * @param keyStoreDir where keystore is located.
+   * @param conf KMS client configuration
+   * @return KMS server configuration based on client.
+   * @throws Exception
+   */
+  protected Configuration createBaseKMSConf(File keyStoreDir,
+  Configuration conf) throws Exception {
+Configuration newConf;
+if (conf == null) {
+  newConf = new Configuration(false);
+} else {
+  newConf = new Configuration(conf);
+}
+newConf.set(KMSConfiguration.KEY_PROVIDER_URI,
        "jceks://file@" + new Path(keyStoreDir.getAbsolutePath(), "kms.keystore").toUri());
-conf.set("hadoop.kms.authentication.type", "simple");
-return conf;
+newConf.set("hadoop.kms.authentication.type", "simple");
+return newConf;
   }
 
   public static void writeConf(File confDir, Configuration conf)
@@ -280,9 +300,8 @@ public class TestKMS {
 if (kerberos) {
   conf.set("hadoop.security.authentication", "kerberos");
 }
-UserGroupInformation.setConfiguration(conf);
 File testDir = getTestDir();
-conf = createBaseKMSConf(testDir);
+conf = createBaseKMSConf(testDir, conf);
 
 final String keystore;
 final 

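The createBaseKMSConf change above exists because UserGroupInformation stores its Configuration in a static variable, so in-process unit tests must fold client settings into the server's Configuration rather than keep two independent objects. The copy-then-override pattern can be sketched like this (java.util.Properties standing in for Hadoop's Configuration; the key names are illustrative, not the real KMS keys):

```java
import java.util.Properties;

public class ConfigCopy {
    // Build a server config that inherits every client setting, then
    // layers the server-only keys on top -- mirroring the two-argument
    // createBaseKMSConf(File, Configuration) workaround in the diff.
    static Properties serverConf(Properties clientConf) {
        Properties server = new Properties();
        if (clientConf != null) {
            server.putAll(clientConf);  // copy; never mutate the client's object
        }
        server.setProperty("kms.key.provider.uri", "jceks://file@/tmp/kms.keystore");
        server.setProperty("kms.authentication.type", "simple");
        return server;
    }

    public static void main(String[] args) {
        Properties client = new Properties();
        client.setProperty("hadoop.security.authentication", "kerberos");
        Properties server = serverConf(client);
        // The client's kerberos setting survives; server keys are added.
        System.out.println(server.getProperty("hadoop.security.authentication")); // kerberos
        System.out.println(server.getProperty("kms.authentication.type"));        // simple
    }
}
```

Copying rather than sharing is the point: whichever object is last handed to the static UGI holder must already contain both sides' settings.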
hadoop git commit: HADOOP-13638. KMS should set UGI's Configuration object properly. Contributed by Wei-Chiu Chuang.

2016-09-26 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7484d0b1b -> 06187e4f9


HADOOP-13638. KMS should set UGI's Configuration object properly. Contributed 
by Wei-Chiu Chuang.

(cherry picked from commit fa397e74fe988bcbb05c816de73eb738794ace4b)

Conflicts:

hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/06187e4f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/06187e4f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/06187e4f

Branch: refs/heads/branch-2
Commit: 06187e4f98c70b12fbf61c21580ccded27c87185
Parents: 7484d0b
Author: Xiao Chen 
Authored: Mon Sep 26 13:00:57 2016 -0700
Committer: Xiao Chen 
Committed: Mon Sep 26 13:02:57 2016 -0700

--
 .../hadoop/crypto/key/kms/server/KMSWebApp.java |  2 +
 .../hadoop/crypto/key/kms/server/TestKMS.java   | 73 +++-
 2 files changed, 41 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/06187e4f/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index e972509..763f207 100644
--- a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.crypto.key.KeyProvider;
 import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
 import org.apache.hadoop.crypto.key.KeyProviderFactory;
 import org.apache.hadoop.http.HttpServer2;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.util.VersionInfo;
 import org.apache.log4j.PropertyConfigurator;
@@ -121,6 +122,7 @@ public class KMSWebApp implements ServletContextListener {
   }
   kmsConf = KMSConfiguration.getKMSConf();
   initLogging(confDir);
+  UserGroupInformation.setConfiguration(kmsConf);
   LOG.info("-------------------------------------------------------------");
   LOG.info("  Java runtime version : {}", System.getProperty(
   "java.runtime.version"));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/06187e4f/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
index b4174dd..58c1b81 100644
--- a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
+++ b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
@@ -145,11 +145,31 @@ public class TestKMS {
   }
 
  protected Configuration createBaseKMSConf(File keyStoreDir) throws Exception {
-Configuration conf = new Configuration(false);
-conf.set(KMSConfiguration.KEY_PROVIDER_URI,
+return createBaseKMSConf(keyStoreDir, null);
+  }
+
+  /**
+   * The Configuration object is shared by both KMS client and server in unit
+   * tests because UGI gets/sets it to a static variable.
+   * As a workaround, make sure the client configurations are copied to server
+   * so that client can read them.
+   * @param keyStoreDir where keystore is located.
+   * @param conf KMS client configuration
+   * @return KMS server configuration based on client.
+   * @throws Exception
+   */
+  protected Configuration createBaseKMSConf(File keyStoreDir,
+  Configuration conf) throws Exception {
+Configuration newConf;
+if (conf == null) {
+  newConf = new Configuration(false);
+} else {
+  newConf = new Configuration(conf);
+}
+newConf.set(KMSConfiguration.KEY_PROVIDER_URI,
        "jceks://file@" + new Path(keyStoreDir.getAbsolutePath(), "kms.keystore").toUri());
-conf.set("hadoop.kms.authentication.type", "simple");
-return conf;
+newConf.set("hadoop.kms.authentication.type", "simple");
+return newConf;
   }
 
   public static void writeConf(File confDir, Configuration conf)
@@ -278,9 +298,8 @@ public class TestKMS {
 if (kerberos) {
   conf.set("hadoop.security.authentication", "kerberos");
 }
-UserGroupInformation.setConfiguration(conf);

hadoop git commit: HADOOP-13638. KMS should set UGI's Configuration object properly. Contributed by Wei-Chiu Chuang.

2016-09-26 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 fa042ff9a -> 09964a162


HADOOP-13638. KMS should set UGI's Configuration object properly. Contributed 
by Wei-Chiu Chuang.

(cherry picked from commit fa397e74fe988bcbb05c816de73eb738794ace4b)

Conflicts:

hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java

(cherry picked from commit 06187e4f98c70b12fbf61c21580ccded27c87185)

Conflicts:

hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/09964a16
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/09964a16
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/09964a16

Branch: refs/heads/branch-2.8
Commit: 09964a16294adafbb02fc6785e686d51a9fa3ffc
Parents: fa042ff
Author: Xiao Chen 
Authored: Mon Sep 26 13:00:57 2016 -0700
Committer: Xiao Chen 
Committed: Mon Sep 26 13:04:29 2016 -0700

--
 .../hadoop/crypto/key/kms/server/KMSWebApp.java |  2 +
 .../hadoop/crypto/key/kms/server/TestKMS.java   | 70 +++-
 2 files changed, 40 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/09964a16/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index 1474463..7cb6c37 100644
--- a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.crypto.key.KeyProvider;
 import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
 import org.apache.hadoop.crypto.key.KeyProviderFactory;
 import org.apache.hadoop.http.HttpServer2;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.util.VersionInfo;
 import org.apache.log4j.PropertyConfigurator;
@@ -121,6 +122,7 @@ public class KMSWebApp implements ServletContextListener {
   }
   kmsConf = KMSConfiguration.getKMSConf();
   initLogging(confDir);
+  UserGroupInformation.setConfiguration(kmsConf);
   LOG.info("-------------------------------------------------------------");
   LOG.info("  Java runtime version : {}", System.getProperty(
   "java.runtime.version"));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/09964a16/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
index b275fd1..3344a6a 100644
--- a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
+++ b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
@@ -139,11 +139,31 @@ public class TestKMS {
   }
 
  protected Configuration createBaseKMSConf(File keyStoreDir) throws Exception {
-Configuration conf = new Configuration(false);
-conf.set(KMSConfiguration.KEY_PROVIDER_URI,
+return createBaseKMSConf(keyStoreDir, null);
+  }
+
+  /**
+   * The Configuration object is shared by both KMS client and server in unit
+   * tests because UGI gets/sets it to a static variable.
+   * As a workaround, make sure the client configurations are copied to server
+   * so that client can read them.
+   * @param keyStoreDir where keystore is located.
+   * @param conf KMS client configuration
+   * @return KMS server configuration based on client.
+   * @throws Exception
+   */
+  protected Configuration createBaseKMSConf(File keyStoreDir,
+  Configuration conf) throws Exception {
+Configuration newConf;
+if (conf == null) {
+  newConf = new Configuration(false);
+} else {
+  newConf = new Configuration(conf);
+}
+newConf.set(KMSConfiguration.KEY_PROVIDER_URI,
        "jceks://file@" + new Path(keyStoreDir.getAbsolutePath(), "kms.keystore").toUri());
-conf.set("hadoop.kms.authentication.type", "simple");
-return conf;
+newConf.set("hadoop.kms.authentication.type", "simple");
+return newConf;
   }
 
   public static 

hadoop git commit: HADOOP-12597. In kms-site.xml configuration hadoop.security.keystore.JavaKeyStoreProvider.password should be updated with new name. ( Contributed by Surendra Singh Lilhore via Brahm

2016-09-26 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 ece3ca0cb -> fa042ff9a


HADOOP-12597. In kms-site.xml configuration 
hadoop.security.keystore.JavaKeyStoreProvider.password should be updated with 
new name. ( Contributed by Surendra Singh Lilhore via Brahma Reddy Battula)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa042ff9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa042ff9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa042ff9

Branch: refs/heads/branch-2.8
Commit: fa042ff9af17c2c8dcbe707211971440d0890679
Parents: ece3ca0
Author: Brahma Reddy Battula 
Authored: Thu Jan 7 16:00:37 2016 +
Committer: Brahma Reddy Battula 
Committed: Mon Sep 26 23:19:36 2016 +0530

--
 hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa042ff9/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml b/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
index a810ca4..e8d2344 100644
--- a/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
+++ b/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
@@ -25,10 +25,10 @@
   
 
   
-hadoop.security.keystore.JavaKeyStoreProvider.password
-none
+hadoop.security.keystore.java-keystore-provider.password-file
+kms.keystore.password
 
-  If using the JavaKeyStoreProvider, the password for the keystore file.
+  If using the JavaKeyStoreProvider, the file name for the keystore password.
 
   
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-12597. In kms-site.xml configuration hadoop.security.keystore.JavaKeyStoreProvider.password should be updated with new name. ( Contributed by Surendra Singh Lilhore via Brahm

2016-09-26 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 66ee9f554 -> aaff2033b


HADOOP-12597. In kms-site.xml configuration 
hadoop.security.keystore.JavaKeyStoreProvider.password should be updated with 
new name. ( Contributed by Surendra Singh Lilhore via Brahma Reddy Battula)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aaff2033
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aaff2033
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aaff2033

Branch: refs/heads/branch-2.7
Commit: aaff2033b92863a92c7e206f5eb4319b35e713d0
Parents: 66ee9f5
Author: Brahma Reddy Battula 
Authored: Mon Sep 26 22:31:31 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Mon Sep 26 22:31:31 2016 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 4 
 hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml | 6 +++---
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aaff2033/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8260ea8..1de102d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -37,6 +37,10 @@ Release 2.7.4 - UNRELEASED
 HADOOP-13579. Fix source-level compatibility after HADOOP-11252.
 (Tsuyoshi Ozawa via aajisaka)
 
+HADOOP-12597. In kms-site.xml configuration
+"hadoop.security.keystore.JavaKeyStoreProvider.password" should be updated
+with new name. (Contributed by Surendra Singh Lilhore via Brahma Reddy Battula)
+
 Release 2.7.3 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aaff2033/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml b/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
index a810ca4..e8d2344 100644
--- a/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
+++ b/hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
@@ -25,10 +25,10 @@
   
 
   
-hadoop.security.keystore.JavaKeyStoreProvider.password
-none
+hadoop.security.keystore.java-keystore-provider.password-file
+kms.keystore.password
 
-  If using the JavaKeyStoreProvider, the password for the keystore file.
+  If using the JavaKeyStoreProvider, the file name for the keystore password.
 
   
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[1/2] hadoop git commit: YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. Contributed by Arun Suresh

2016-09-26 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 74f2df16a -> 7484d0b1b


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7484d0b1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
index 3704cfd..6631d3a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
@@ -98,6 +98,8 @@ public class ContainerImpl implements Container {
 private final ContainerLaunchContext oldLaunchContext;
 private final ResourceSet oldResourceSet;
 
+private boolean isRollback = false;
+
 private ReInitializationContext(ContainerLaunchContext newLaunchContext,
 ResourceSet newResourceSet,
 ContainerLaunchContext oldLaunchContext,
@@ -112,20 +114,23 @@ public class ContainerImpl implements Container {
   return (oldLaunchContext != null);
 }
 
-private ResourceSet mergedResourceSet() {
-  if (oldLaunchContext == null) {
+private ResourceSet mergedResourceSet(ResourceSet current) {
+  if (isRollback) {
+// No merging should be done for rollback
 return newResourceSet;
   }
-  return ResourceSet.merge(oldResourceSet, newResourceSet);
+  if (current == newResourceSet) {
+// This happens during a restart
+return current;
+  }
+  return ResourceSet.merge(current, newResourceSet);
 }
 
 private ReInitializationContext createContextForRollback() {
-  if (oldLaunchContext == null) {
-return null;
-  } else {
-return new ReInitializationContext(
-oldLaunchContext, oldResourceSet, null, null);
-  }
+  ReInitializationContext cntxt = new ReInitializationContext(
+  oldLaunchContext, oldResourceSet, null, null);
+  cntxt.isRollback = true;
+  return cntxt;
 }
   }
 
@@ -909,13 +914,20 @@ public class ContainerImpl implements Container {
 public void transition(ContainerImpl container, ContainerEvent event) {
   container.reInitContext = createReInitContext(container, event);
   try {
-Map
-resByVisibility = container.reInitContext.newResourceSet
-.getAllResourcesByVisibility();
-if (!resByVisibility.isEmpty()) {
+// 'reInitContext.newResourceSet' can be
+// a) current container resourceSet (In case of Restart)
+// b) previous resourceSet (In case of RollBack)
+// c) An actual NEW resourceSet (In case of Upgrade/ReInit)
+//
+// In cases a) and b) Container can immediately be cleaned up since
+// we are sure the resources are already available (we check the
+// pendingResources to verify that nothing more is needed). So we can
+// kill the container immediately
+ResourceSet newResourceSet = container.reInitContext.newResourceSet;
+if (!newResourceSet.getPendingResources().isEmpty()) {
   container.dispatcher.getEventHandler().handle(
   new ContainerLocalizationRequestEvent(
-  container, resByVisibility));
+  container, newResourceSet.getAllResourcesByVisibility()));
 } else {
   // We are not waiting on any resources, so...
   // Kill the current container.
@@ -923,6 +935,11 @@ public class ContainerImpl implements Container {
   new ContainersLauncherEvent(container,
   ContainersLauncherEventType.CLEANUP_CONTAINER_FOR_REINIT));
 }
+container.metrics.reInitingContainer();
+NMAuditLogger.logSuccess(container.user,
+AuditConstants.START_CONTAINER_REINIT, "ContainerImpl",
+container.containerId.getApplicationAttemptId().getApplicationId(),
+container.containerId);
   } catch (Exception e) {
 LOG.error("Container [" + container.getContainerId() + "]" +
 " re-initialization failure..", e);
@@ -934,13 +951,26 @@ public class ContainerImpl implements Container {
 protected ReInitializationContext createReInitContext(
 ContainerImpl container, ContainerEvent event) {
   

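The mergedResourceSet(current) logic in the diff above distinguishes three re-initialization cases: restart keeps the current set, rollback takes the previous set verbatim, and upgrade merges old into new. A simplified sketch of that three-way decision (plain string sets instead of YARN's ResourceSet; names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class ReInitMerge {
    // Decide which resources a re-initialized container should localize,
    // mirroring ReInitializationContext.mergedResourceSet(current).
    static Set<String> merged(Set<String> current, Set<String> newSet,
                              boolean rollback) {
        if (rollback) {
            return newSet;      // rollback: previous set, no merging
        }
        if (current == newSet) {
            return current;     // restart: the set is unchanged
        }
        // upgrade/re-init: union of what is already localized and what
        // the new launch context asks for
        Set<String> union = new HashSet<>(current);
        union.addAll(newSet);
        return union;
    }

    public static void main(String[] args) {
        Set<String> old = new HashSet<>(Set.of("app.jar"));
        Set<String> upgraded = new HashSet<>(Set.of("app-v2.jar"));
        System.out.println(merged(old, upgraded, false));      // union for an upgrade
        System.out.println(merged(old, old, false).size());    // 1: restart keeps current
        System.out.println(merged(old, upgraded, true));       // [app-v2.jar]: rollback
    }
}
```

This is also why the transition later checks getPendingResources(): in the restart and rollback cases everything is already localized, so the container can be killed for re-init immediately instead of waiting on localization events.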
[2/2] hadoop git commit: YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. Contributed by Arun Suresh

2016-09-26 Thread jianhe
YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. 
Contributed by Arun Suresh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7484d0b1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7484d0b1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7484d0b1

Branch: refs/heads/branch-2
Commit: 7484d0b1b964e15a0b7bb1eb17de1e501821459a
Parents: 74f2df1
Author: Arun Suresh 
Authored: Mon Sep 26 08:46:54 2016 -0700
Committer: Jian He 
Committed: Mon Sep 26 23:55:13 2016 +0800

--
 .../v2/app/launcher/TestContainerLauncher.java  |  30 
 .../app/launcher/TestContainerLauncherImpl.java |  30 
 .../yarn/api/ContainerManagementProtocol.java   |  54 ++
 .../api/protocolrecords/CommitResponse.java |  42 +
 .../ReInitializeContainerRequest.java   | 110 
 .../ReInitializeContainerResponse.java  |  38 
 .../RestartContainerResponse.java   |  38 
 .../api/protocolrecords/RollbackResponse.java   |  42 +
 .../proto/containermanagement_protocol.proto|   6 +
 .../src/main/proto/yarn_service_protos.proto|  18 ++
 ...ContainerManagementProtocolPBClientImpl.java |  73 
 ...ontainerManagementProtocolPBServiceImpl.java |  86 -
 .../impl/pb/CommitResponsePBImpl.java   |  67 +++
 .../pb/ReInitializeContainerRequestPBImpl.java  | 173 +++
 .../pb/ReInitializeContainerResponsePBImpl.java |  68 
 .../impl/pb/RestartContainerResponsePBImpl.java |  67 +++
 .../impl/pb/RollbackResponsePBImpl.java |  67 +++
 .../hadoop/yarn/TestContainerLaunchRPC.java |  30 
 .../yarn/TestContainerResourceIncreaseRPC.java  |  30 
 .../java/org/apache/hadoop/yarn/TestRPC.java|  30 
 .../hadoop/yarn/api/TestPBImplRecords.java  |  10 ++
 .../yarn/server/nodemanager/NMAuditLogger.java  |   4 +
 .../containermanager/ContainerManagerImpl.java  |  53 +-
 .../container/ContainerImpl.java|  92 +++---
 .../nodemanager/metrics/NodeManagerMetrics.java |  26 +++
 .../TestContainerManagerWithLCE.java|  12 ++
 .../containermanager/TestContainerManager.java  | 103 +--
 .../server/resourcemanager/NodeManager.java |  29 
 .../resourcemanager/TestAMAuthorization.java|  31 
 .../TestApplicationMasterLauncher.java  |  30 
 30 files changed, 1444 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7484d0b1/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
index ba404a5..1520929 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
@@ -30,10 +30,15 @@ import java.util.Map;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import org.apache.hadoop.yarn.api.protocolrecords.CommitResponse;
 import 
org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceRequest;
 import 
org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerRequest;
+import 
org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerResponse;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.RestartContainerResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.RollbackResponse;
 import org.junit.Assert;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -476,5 +481,30 @@ public class TestContainerLauncher {
 ResourceLocalizationRequest request) throws YarnException, IOException 
{
   return null;
 }
+
+@Override
+public ReInitializeContainerResponse reInitializeContainer(
+ReInitializeContainerRequest request) throws YarnException,
+IOException {
+  

[1/3] hadoop git commit: Revert "YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. Contributed by Arun Suresh"

2016-09-26 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/trunk fe644bafe -> 4815d024c


Revert "YARN-5609. Expose upgrade and restart API in 
ContainerManagementProtocol. Contributed by Arun Suresh"

This reverts commit fe644bafe7b4fb5b07f7cf08a7d7044abbf55027.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2f163cd5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2f163cd5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2f163cd5

Branch: refs/heads/trunk
Commit: 2f163cd5cfaf8308f50b6a92c21498b78ada6953
Parents: fe644ba
Author: Arun Suresh 
Authored: Mon Sep 26 08:36:59 2016 -0700
Committer: Arun Suresh 
Committed: Mon Sep 26 08:36:59 2016 -0700

--
 .../v2/app/launcher/TestContainerLauncher.java  |  30 --
 .../app/launcher/TestContainerLauncherImpl.java |  30 --
 .../yarn/api/ContainerManagementProtocol.java   |  54 --
 .../proto/containermanagement_protocol.proto|   6 --
 .../src/main/proto/yarn_service_protos.proto|  18 
 ...ContainerManagementProtocolPBClientImpl.java |  73 -
 ...ontainerManagementProtocolPBServiceImpl.java |  86 +---
 .../hadoop/yarn/TestContainerLaunchRPC.java |  30 --
 .../yarn/TestContainerResourceIncreaseRPC.java  |  30 --
 .../hadoop/yarn/api/TestPBImplRecords.java  |  10 --
 .../java/org/apache/hadoop/yarn/TestRPC.java|  30 --
 .../yarn/server/nodemanager/NMAuditLogger.java  |   4 -
 .../containermanager/ContainerManagerImpl.java  |  53 ++
 .../container/ContainerImpl.java|  92 +
 .../nodemanager/metrics/NodeManagerMetrics.java |  26 -
 .../TestContainerManagerWithLCE.java|  12 ---
 .../containermanager/TestContainerManager.java  | 103 +++
 .../server/resourcemanager/NodeManager.java |  29 --
 .../resourcemanager/TestAMAuthorization.java|  31 --
 .../TestApplicationMasterLauncher.java  |  30 --
 20 files changed, 45 insertions(+), 732 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f163cd5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
index 1520929..ba404a5 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
@@ -30,15 +30,10 @@ import java.util.Map;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.atomic.AtomicInteger;
 
-import org.apache.hadoop.yarn.api.protocolrecords.CommitResponse;
 import 
org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceRequest;
 import 
org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerRequest;
-import 
org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerResponse;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.RestartContainerResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.RollbackResponse;
 import org.junit.Assert;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -481,30 +476,5 @@ public class TestContainerLauncher {
 ResourceLocalizationRequest request) throws YarnException, IOException 
{
   return null;
 }
-
-@Override
-public ReInitializeContainerResponse reInitializeContainer(
-ReInitializeContainerRequest request) throws YarnException,
-IOException {
-  return null;
-}
-
-@Override
-public RestartContainerResponse restartContainer(ContainerId containerId)
-throws YarnException, IOException {
-  return null;
-}
-
-@Override
-public RollbackResponse rollbackLastReInitialization(
-ContainerId containerId) throws YarnException, IOException {
-  return null;
-}
-
-@Override
-public CommitResponse 

[2/3] hadoop git commit: YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. Contributed by Arun Suresh

2016-09-26 Thread asuresh
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4815d024/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
index 0707df0..4bc0a0f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
@@ -99,6 +99,8 @@ public class ContainerImpl implements Container {
 private final ContainerLaunchContext oldLaunchContext;
 private final ResourceSet oldResourceSet;
 
+private boolean isRollback = false;
+
 private ReInitializationContext(ContainerLaunchContext newLaunchContext,
 ResourceSet newResourceSet,
 ContainerLaunchContext oldLaunchContext,
@@ -113,20 +115,23 @@ public class ContainerImpl implements Container {
   return (oldLaunchContext != null);
 }
 
-private ResourceSet mergedResourceSet() {
-  if (oldLaunchContext == null) {
+private ResourceSet mergedResourceSet(ResourceSet current) {
+  if (isRollback) {
+// No merging should be done for rollback
 return newResourceSet;
   }
-  return ResourceSet.merge(oldResourceSet, newResourceSet);
+  if (current == newResourceSet) {
+// This happens during a restart
+return current;
+  }
+  return ResourceSet.merge(current, newResourceSet);
 }
 
 private ReInitializationContext createContextForRollback() {
-  if (oldLaunchContext == null) {
-return null;
-  } else {
-return new ReInitializationContext(
-oldLaunchContext, oldResourceSet, null, null);
-  }
+  ReInitializationContext cntxt = new ReInitializationContext(
+  oldLaunchContext, oldResourceSet, null, null);
+  cntxt.isRollback = true;
+  return cntxt;
 }
   }
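The mergedResourceSet() change above picks between three outcomes: rollback uses the saved set verbatim, restart keeps the current set, and upgrade/re-init merges the two. A toy model of that decision, with a plain Set<String> standing in for YARN's ResourceSet (illustrative only, not the real API):

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the mergedResourceSet() decision in the hunk above; the
// Set<String> here is an illustrative stand-in for YARN's ResourceSet.
public class MergeSketch {
    static Set<String> mergedResourceSet(boolean isRollback,
                                         Set<String> current,
                                         Set<String> newSet) {
        if (isRollback) {
            // Rollback: use the old launch context's set verbatim, no merging
            return newSet;
        }
        if (current == newSet) {
            // Restart: the "new" set is the current one, nothing to merge
            return current;
        }
        // Upgrade/re-init: union of current and new resources
        Set<String> merged = new HashSet<>(current);
        merged.addAll(newSet);
        return merged;
    }

    public static void main(String[] args) {
        Set<String> current = new HashSet<>(Set.of("appJar"));
        Set<String> upgrade = new HashSet<>(Set.of("newAppJar"));
        System.out.println(mergedResourceSet(false, current, upgrade).size());
        System.out.println(mergedResourceSet(false, current, current).size());
        System.out.println(mergedResourceSet(true, current, upgrade).size());
    }
}
```

The identity check (`current == newSet`) rather than equality mirrors the restart case, where the context is rebuilt around the very same resource set object.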
 
@@ -918,13 +923,20 @@ public class ContainerImpl implements Container {
 public void transition(ContainerImpl container, ContainerEvent event) {
   container.reInitContext = createReInitContext(container, event);
   try {
-Map<LocalResourceVisibility, Collection<LocalResourceRequest>>
-resByVisibility = container.reInitContext.newResourceSet
-.getAllResourcesByVisibility();
-if (!resByVisibility.isEmpty()) {
+// 'reInitContext.newResourceSet' can be
+// a) current container resourceSet (In case of Restart)
+// b) previous resourceSet (In case of RollBack)
+// c) An actual NEW resourceSet (In case of Upgrade/ReInit)
+//
+// In cases a) and b) Container can immediately be cleaned up since
+// we are sure the resources are already available (we check the
+// pendingResources to verify that nothing more is needed). So we can
+// kill the container immediately
+ResourceSet newResourceSet = container.reInitContext.newResourceSet;
+if (!newResourceSet.getPendingResources().isEmpty()) {
   container.dispatcher.getEventHandler().handle(
   new ContainerLocalizationRequestEvent(
-  container, resByVisibility));
+  container, newResourceSet.getAllResourcesByVisibility()));
 } else {
   // We are not waiting on any resources, so...
   // Kill the current container.
@@ -932,6 +944,11 @@ public class ContainerImpl implements Container {
   new ContainersLauncherEvent(container,
   ContainersLauncherEventType.CLEANUP_CONTAINER_FOR_REINIT));
 }
+container.metrics.reInitingContainer();
+NMAuditLogger.logSuccess(container.user,
+AuditConstants.START_CONTAINER_REINIT, "ContainerImpl",
+container.containerId.getApplicationAttemptId().getApplicationId(),
+container.containerId);
   } catch (Exception e) {
 LOG.error("Container [" + container.getContainerId() + "]" +
 " re-initialization failure..", e);
@@ -943,13 +960,26 @@ public class ContainerImpl implements Container {
 protected ReInitializationContext createReInitContext(
 ContainerImpl container, ContainerEvent event) {
   ContainerReInitEvent reInitEvent = (ContainerReInitEvent)event;
-  return new 

[3/3] hadoop git commit: YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. Contributed by Arun Suresh

2016-09-26 Thread asuresh
YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. 
Contributed by Arun Suresh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4815d024
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4815d024
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4815d024

Branch: refs/heads/trunk
Commit: 4815d024c59cb029e2053d94c7aed33eb8053d3e
Parents: 2f163cd
Author: Arun Suresh 
Authored: Mon Sep 26 08:46:54 2016 -0700
Committer: Arun Suresh 
Committed: Mon Sep 26 08:46:54 2016 -0700

--
 .../v2/app/launcher/TestContainerLauncher.java  |  30 
 .../app/launcher/TestContainerLauncherImpl.java |  30 
 .../yarn/api/ContainerManagementProtocol.java   |  54 ++
 .../api/protocolrecords/CommitResponse.java |  42 +
 .../ReInitializeContainerRequest.java   | 110 
 .../ReInitializeContainerResponse.java  |  38 
 .../RestartContainerResponse.java   |  38 
 .../api/protocolrecords/RollbackResponse.java   |  42 +
 .../proto/containermanagement_protocol.proto|   6 +
 .../src/main/proto/yarn_service_protos.proto|  18 ++
 ...ContainerManagementProtocolPBClientImpl.java |  73 
 ...ontainerManagementProtocolPBServiceImpl.java |  86 -
 .../impl/pb/CommitResponsePBImpl.java   |  67 +++
 .../pb/ReInitializeContainerRequestPBImpl.java  | 173 +++
 .../pb/ReInitializeContainerResponsePBImpl.java |  68 
 .../impl/pb/RestartContainerResponsePBImpl.java |  67 +++
 .../impl/pb/RollbackResponsePBImpl.java |  67 +++
 .../hadoop/yarn/TestContainerLaunchRPC.java |  30 
 .../yarn/TestContainerResourceIncreaseRPC.java  |  30 
 .../hadoop/yarn/api/TestPBImplRecords.java  |  10 ++
 .../java/org/apache/hadoop/yarn/TestRPC.java|  30 
 .../yarn/server/nodemanager/NMAuditLogger.java  |   4 +
 .../containermanager/ContainerManagerImpl.java  |  53 +-
 .../container/ContainerImpl.java|  92 +++---
 .../nodemanager/metrics/NodeManagerMetrics.java |  26 +++
 .../TestContainerManagerWithLCE.java|  12 ++
 .../containermanager/TestContainerManager.java  | 103 +--
 .../server/resourcemanager/NodeManager.java |  29 
 .../resourcemanager/TestAMAuthorization.java|  31 
 .../TestApplicationMasterLauncher.java  |  30 
 30 files changed, 1444 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4815d024/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
index ba404a5..1520929 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
@@ -30,10 +30,15 @@ import java.util.Map;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import org.apache.hadoop.yarn.api.protocolrecords.CommitResponse;
 import 
org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceRequest;
 import 
org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerRequest;
+import 
org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerResponse;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.RestartContainerResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.RollbackResponse;
 import org.junit.Assert;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -476,5 +481,30 @@ public class TestContainerLauncher {
 ResourceLocalizationRequest request) throws YarnException, IOException 
{
   return null;
 }
+
+@Override
+public ReInitializeContainerResponse reInitializeContainer(
+ReInitializeContainerRequest request) throws YarnException,
+IOException {
+  

hadoop git commit: YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. Contributed by Arun Suresh

2016-09-26 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 14a696f36 -> fe644bafe


YARN-5609. Expose upgrade and restart API in ContainerManagementProtocol. 
Contributed by Arun Suresh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fe644baf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fe644baf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fe644baf

Branch: refs/heads/trunk
Commit: fe644bafe7b4fb5b07f7cf08a7d7044abbf55027
Parents: 14a696f
Author: Jian He 
Authored: Mon Sep 26 22:41:16 2016 +0800
Committer: Jian He 
Committed: Mon Sep 26 22:41:16 2016 +0800

--
 .../v2/app/launcher/TestContainerLauncher.java  |  30 ++
 .../app/launcher/TestContainerLauncherImpl.java |  30 ++
 .../yarn/api/ContainerManagementProtocol.java   |  54 ++
 .../proto/containermanagement_protocol.proto|   6 ++
 .../src/main/proto/yarn_service_protos.proto|  18 
 ...ContainerManagementProtocolPBClientImpl.java |  73 +
 ...ontainerManagementProtocolPBServiceImpl.java |  86 +++-
 .../hadoop/yarn/TestContainerLaunchRPC.java |  30 ++
 .../yarn/TestContainerResourceIncreaseRPC.java  |  30 ++
 .../hadoop/yarn/api/TestPBImplRecords.java  |  10 ++
 .../java/org/apache/hadoop/yarn/TestRPC.java|  30 ++
 .../yarn/server/nodemanager/NMAuditLogger.java  |   4 +
 .../containermanager/ContainerManagerImpl.java  |  53 --
 .../container/ContainerImpl.java|  92 -
 .../nodemanager/metrics/NodeManagerMetrics.java |  26 +
 .../TestContainerManagerWithLCE.java|  12 +++
 .../containermanager/TestContainerManager.java  | 103 ---
 .../server/resourcemanager/NodeManager.java |  29 ++
 .../resourcemanager/TestAMAuthorization.java|  31 ++
 .../TestApplicationMasterLauncher.java  |  30 ++
 20 files changed, 732 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe644baf/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
index ba404a5..1520929 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
@@ -30,10 +30,15 @@ import java.util.Map;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import org.apache.hadoop.yarn.api.protocolrecords.CommitResponse;
 import 
org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceRequest;
 import 
org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerRequest;
+import 
org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerResponse;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.RestartContainerResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.RollbackResponse;
 import org.junit.Assert;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -476,5 +481,30 @@ public class TestContainerLauncher {
 ResourceLocalizationRequest request) throws YarnException, IOException 
{
   return null;
 }
+
+@Override
+public ReInitializeContainerResponse reInitializeContainer(
+ReInitializeContainerRequest request) throws YarnException,
+IOException {
+  return null;
+}
+
+@Override
+public RestartContainerResponse restartContainer(ContainerId containerId)
+throws YarnException, IOException {
+  return null;
+}
+
+@Override
+public RollbackResponse rollbackLastReInitialization(
+ContainerId containerId) throws YarnException, IOException {
+  return null;
+}
+
+@Override
+public CommitResponse commitLastReInitialization(ContainerId containerId)
+throws YarnException, IOException {
+  

[35/50] [abbrv] hadoop git commit: TimelineClient failed to retry on java.net.SocketTimeoutException: Read timed out

2016-09-26 Thread jianhe
TimelineClient failed to retry on java.net.SocketTimeoutException: Read timed 
out


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e6ee957
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e6ee957
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e6ee957

Branch: refs/heads/MAPREDUCE-6608
Commit: 2e6ee957161ab63a02a7861b727efa6310b275b2
Parents: d85d9b2
Author: Varun Saxena 
Authored: Fri Sep 23 13:23:54 2016 +0530
Committer: Varun Saxena 
Committed: Fri Sep 23 13:23:54 2016 +0530

--
 .../apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e6ee957/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
index 56da2a7..dc4d3e6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
@@ -265,7 +265,8 @@ public class TimelineClientImpl extends TimelineClient {
 public boolean shouldRetryOn(Exception e) {
   // Only retry on connection exceptions
   return (e instanceof ClientHandlerException)
-  && (e.getCause() instanceof ConnectException);
+  && (e.getCause() instanceof ConnectException ||
+  e.getCause() instanceof SocketTimeoutException);
 }
   };
   try {
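The TimelineClientImpl hunk above widens the retry predicate so read timeouts are retried like connection failures. A self-contained sketch of the same predicate, with an illustrative TransportException standing in for Jersey's ClientHandlerException wrapper:

```java
import java.net.ConnectException;
import java.net.SocketTimeoutException;

// Illustrative stand-in for Jersey's ClientHandlerException: a wrapper
// whose getCause() carries the underlying network error.
class TransportException extends RuntimeException {
    TransportException(Throwable cause) { super(cause); }
}

public class RetryFilterSketch {
    // Retry only when the wrapped cause is a connect failure or a read
    // timeout, mirroring the broadened check in the hunk above.
    static boolean shouldRetryOn(Exception e) {
        return (e instanceof TransportException)
            && (e.getCause() instanceof ConnectException
                || e.getCause() instanceof SocketTimeoutException);
    }

    public static void main(String[] args) {
        System.out.println(shouldRetryOn(
            new TransportException(new SocketTimeoutException("Read timed out"))));
        System.out.println(shouldRetryOn(
            new TransportException(new IllegalStateException("bad state"))));
        // Not a transport wrapper at all, so never retried
        System.out.println(shouldRetryOn(new RuntimeException()));
    }
}
```

Keeping the outer instanceof check means application-level failures wrapped in other exception types still fail fast instead of being retried.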


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[34/50] [abbrv] hadoop git commit: HDFS-10871. DiskBalancerWorkItem should not import jackson relocated by htrace. Contributed by Manoj Govindassamy.

2016-09-26 Thread jianhe
HDFS-10871. DiskBalancerWorkItem should not import jackson relocated by htrace. 
Contributed by Manoj Govindassamy.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d85d9b2e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d85d9b2e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d85d9b2e

Branch: refs/heads/MAPREDUCE-6608
Commit: d85d9b2e7b4df04335111f4e3ce21a2fda39aee9
Parents: d0372dc
Author: Anu Engineer 
Authored: Thu Sep 22 19:36:16 2016 -0700
Committer: Anu Engineer 
Committed: Thu Sep 22 19:36:16 2016 -0700

--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml | 4 
 .../hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java  | 2 +-
 .../hadoop/hdfs/server/diskbalancer/planner/MoveStep.java  | 6 +-
 3 files changed, 6 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d85d9b2e/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
index b8b531b..1e38019 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
@@ -107,6 +107,10 @@
       <scope>test</scope>
       <type>test-jar</type>
     </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-annotations</artifactId>
+    </dependency>
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d85d9b2e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
index 592a89f..909bbd5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
@@ -19,10 +19,10 @@
 
 package org.apache.hadoop.hdfs.server.datanode;
 
+import com.fasterxml.jackson.annotation.JsonInclude;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.htrace.fasterxml.jackson.annotation.JsonInclude;
 import org.codehaus.jackson.map.ObjectMapper;
 import org.codehaus.jackson.map.ObjectReader;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d85d9b2e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
index b5f68fd..7f2f954 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
@@ -17,13 +17,9 @@
 
 package org.apache.hadoop.hdfs.server.diskbalancer.planner;
 
+import com.fasterxml.jackson.annotation.JsonInclude;
 import org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume;
 import org.apache.hadoop.util.StringUtils;
-import org.apache.htrace.fasterxml.jackson.annotation.JsonInclude;
-
-
-
-
 
 /**
  * Ignore fields with default values. In most cases Throughtput, diskErrors





[11/50] [abbrv] hadoop git commit: YARN-3141. Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp. Contributed by Wangda Tan

2016-09-26 Thread jianhe
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8a30f2f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
index 9e5a807..3555faa 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
@@ -123,65 +123,72 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
 return queue.getMetrics();
   }
 
-  synchronized public void containerCompleted(RMContainer rmContainer,
+  public void containerCompleted(RMContainer rmContainer,
   ContainerStatus containerStatus, RMContainerEventType event) {
-
-Container container = rmContainer.getContainer();
-ContainerId containerId = container.getId();
-
-// Remove from the list of newly allocated containers if found
-newlyAllocatedContainers.remove(rmContainer);
-
-// Inform the container
-rmContainer.handle(
-new RMContainerFinishedEvent(
-containerId,
-containerStatus,
-event)
-);
-if (LOG.isDebugEnabled()) {
-  LOG.debug("Completed container: " + rmContainer.getContainerId() +
-  " in state: " + rmContainer.getState() + " event:" + event);
-}
+try {
+  writeLock.lock();
+  Container container = rmContainer.getContainer();
+  ContainerId containerId = container.getId();
+
+  // Remove from the list of newly allocated containers if found
+  newlyAllocatedContainers.remove(rmContainer);
+
+  // Inform the container
+  rmContainer.handle(
+  new RMContainerFinishedEvent(containerId, containerStatus, event));
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Completed container: " + rmContainer.getContainerId()
++ " in state: " + rmContainer.getState() + " event:" + event);
+  }
+
+  // Remove from the list of containers
+  liveContainers.remove(rmContainer.getContainerId());
 
-// Remove from the list of containers
-liveContainers.remove(rmContainer.getContainerId());
+  Resource containerResource = rmContainer.getContainer().getResource();
+  RMAuditLogger.logSuccess(getUser(), AuditConstants.RELEASE_CONTAINER,
+  "SchedulerApp", getApplicationId(), containerId, containerResource);
 
-Resource containerResource = rmContainer.getContainer().getResource();
-RMAuditLogger.logSuccess(getUser(), 
-AuditConstants.RELEASE_CONTAINER, "SchedulerApp", 
-getApplicationId(), containerId, containerResource);
-
-// Update usage metrics 
-queue.getMetrics().releaseResources(getUser(), 1, containerResource);
-this.attemptResourceUsage.decUsed(containerResource);
+  // Update usage metrics
+  queue.getMetrics().releaseResources(getUser(), 1, containerResource);
+  this.attemptResourceUsage.decUsed(containerResource);
 
-// remove from preemption map if it is completed
-preemptionMap.remove(rmContainer);
+  // remove from preemption map if it is completed
+  preemptionMap.remove(rmContainer);
 
-// Clear resource utilization metrics cache.
-lastMemoryAggregateAllocationUpdateTime = -1;
+  // Clear resource utilization metrics cache.
+  lastMemoryAggregateAllocationUpdateTime = -1;
+} finally {
+  writeLock.unlock();
+}
   }
 
-  private synchronized void unreserveInternal(
+  private void unreserveInternal(
   SchedulerRequestKey schedulerKey, FSSchedulerNode node) {
-Map reservedContainers = 
-this.reservedContainers.get(schedulerKey);
-RMContainer reservedContainer = reservedContainers.remove(node.getNodeID());
-if (reservedContainers.isEmpty()) {
-  this.reservedContainers.remove(schedulerKey);
-}
-
-// Reset the re-reservation count
-resetReReservations(schedulerKey);
+try {
+  writeLock.lock();
+  Map reservedContainers = this.reservedContainers.get(
+  schedulerKey);
+  RMContainer reservedContainer = reservedContainers.remove(
+  node.getNodeID());
+  if (reservedContainers.isEmpty()) {
+this.reservedContainers.remove(schedulerKey);
+  }
+
+  // Reset 
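The hunk above replaces method-level `synchronized` with an explicit `writeLock`, the first step toward letting read-only callers share a read lock instead of serializing on the object monitor. A minimal standalone sketch of the idiom (our own `Counter` class, not FSAppAttempt code) — note that the conventional placement acquires the lock just *before* the `try`, so a failed `lock()` can never reach the `finally` unlock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the read/write-lock idiom the patch introduces.
class Counter {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final ReentrantReadWriteLock.ReadLock readLock = lock.readLock();
  private final ReentrantReadWriteLock.WriteLock writeLock = lock.writeLock();
  private int value;

  void increment() {
    writeLock.lock();   // acquire before try: an unmatched unlock() is impossible
    try {
      value++;
    } finally {
      writeLock.unlock();
    }
  }

  int get() {
    readLock.lock();    // readers can hold this concurrently with each other
    try {
      return value;
    } finally {
      readLock.unlock();
    }
  }
}
```

The payoff over `synchronized` comes when many threads call the read path: they proceed in parallel, blocking only while a writer holds the write lock.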

[03/50] [abbrv] hadoop git commit: YARN-5642. Typos in 9 log messages. Contributed by Mehran Hassani

2016-09-26 Thread jianhe
YARN-5642. Typos in 9 log messages. Contributed by Mehran Hassani


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4174b975
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4174b975
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4174b975

Branch: refs/heads/MAPREDUCE-6608
Commit: 4174b9756c8c7877797545c4356b1f40df603ec5
Parents: 7d21c28
Author: Naganarasimha 
Authored: Sat Sep 17 10:35:39 2016 +0530
Committer: Naganarasimha 
Committed: Sat Sep 17 10:35:39 2016 +0530

--
 .../main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java  | 2 +-
 .../apache/hadoop/yarn/security/YarnAuthorizationProvider.java   | 2 +-
 .../FileSystemApplicationHistoryStore.java   | 2 +-
 .../containermanager/monitor/ContainersMonitorImpl.java  | 2 +-
 .../resourcemanager/reservation/AbstractReservationSystem.java   | 2 +-
 .../server/resourcemanager/scheduler/capacity/LeafQueue.java | 2 +-
 .../server/resourcemanager/scheduler/capacity/ParentQueue.java   | 4 ++--
 .../server/resourcemanager/scheduler/fair/FairScheduler.java | 2 +-
 8 files changed, 9 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4174b975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
index 89b5861..df542ed 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
@@ -142,7 +142,7 @@ public class AsyncDispatcher extends AbstractService 
implements Dispatcher {
   protected void serviceStop() throws Exception {
 if (drainEventsOnStop) {
   blockNewEvents = true;
-  LOG.info("AsyncDispatcher is draining to stop, igonring any new events.");
+  LOG.info("AsyncDispatcher is draining to stop, ignoring any new events.");
   long endTime = System.currentTimeMillis() + getConfig()
   .getLong(YarnConfiguration.DISPATCHER_DRAIN_EVENTS_TIMEOUT,
   YarnConfiguration.DEFAULT_DISPATCHER_DRAIN_EVENTS_TIMEOUT);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4174b975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
index dd81ebd..4b43ea1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
@@ -54,7 +54,7 @@ public abstract class YarnAuthorizationProvider {
 (YarnAuthorizationProvider) ReflectionUtils.newInstance(
   authorizerClass, conf);
 authorizer.init(conf);
-LOG.info(authorizerClass.getName() + " is instiantiated.");
+LOG.info(authorizerClass.getName() + " is instantiated.");
   }
 }
 return authorizer;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4174b975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
index 295b8ab..bb52b55 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
+++ 

[29/50] [abbrv] hadoop git commit: HADOOP-13602. Fix some warnings by findbugs in hadoop-maven-plugin. (ozawa)

2016-09-26 Thread jianhe
HADOOP-13602. Fix some warnings by findbugs in hadoop-maven-plugin. (ozawa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8d619b48
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8d619b48
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8d619b48

Branch: refs/heads/MAPREDUCE-6608
Commit: 8d619b4896ac31f63fd0083594b6e7d207ef71a0
Parents: 537095d
Author: Tsuyoshi Ozawa 
Authored: Fri Sep 23 01:37:06 2016 +0900
Committer: Tsuyoshi Ozawa 
Committed: Fri Sep 23 01:37:06 2016 +0900

--
 .../maven/plugin/cmakebuilder/CompileMojo.java  |  4 +-
 .../maven/plugin/cmakebuilder/TestMojo.java |  4 +-
 .../hadoop/maven/plugin/protoc/ProtocMojo.java  |  4 ++
 .../apache/hadoop/maven/plugin/util/Exec.java   |  6 ++-
 .../plugin/versioninfo/VersionInfoMojo.java | 55 ++--
 5 files changed, 42 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d619b48/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/CompileMojo.java
--
diff --git 
a/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/CompileMojo.java
 
b/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/CompileMojo.java
index afb11cb..0196352 100644
--- 
a/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/CompileMojo.java
+++ 
b/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/CompileMojo.java
@@ -14,6 +14,7 @@
 
 package org.apache.hadoop.maven.plugin.cmakebuilder;
 
+import java.util.Locale;
 import org.apache.hadoop.maven.plugin.util.Exec.OutputBufferThread;
 import org.apache.hadoop.maven.plugin.util.Exec;
 import org.apache.maven.plugin.AbstractMojo;
@@ -83,7 +84,8 @@ public class CompileMojo extends AbstractMojo {
 
   // TODO: support Windows
   private static void validatePlatform() throws MojoExecutionException {
-if (System.getProperty("os.name").toLowerCase().startsWith("windows")) {
+if (System.getProperty("os.name").toLowerCase(Locale.ENGLISH)
+.startsWith("windows")) {
   throw new MojoExecutionException("CMakeBuilder does not yet support " +
   "the Windows platform.");
 }
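The `Locale.ENGLISH` change above is what findbugs is warning about: a bare `toLowerCase()` uses the JVM's default locale, and under tr-TR the letter `I` lowercases to dotless `ı` (U+0131), so a comparison against `"windows"` silently fails on Turkish machines. A small sketch (class and method names are ours, not Hadoop's):

```java
import java.util.Locale;

// Why locale-sensitive case mapping matters for string comparisons.
public class LocaleLower {

  // Locale-pinned check, mirroring the fix in CompileMojo/TestMojo above.
  public static boolean isWindows(String osName) {
    return osName.toLowerCase(Locale.ENGLISH).startsWith("windows");
  }

  // Default-locale variant: broken when the JVM default is tr-TR,
  // because "WINDOWS" lowercases to "w\u0131ndows" there.
  public static boolean isWindowsNaive(String osName) {
    return osName.toLowerCase().startsWith("windows");
  }
}
```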

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d619b48/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java
--
diff --git 
a/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java
 
b/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java
index e676efd..95b6264 100644
--- 
a/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java
+++ 
b/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java
@@ -14,6 +14,7 @@
 
 package org.apache.hadoop.maven.plugin.cmakebuilder;
 
+import java.util.Locale;
 import org.apache.hadoop.maven.plugin.util.Exec;
 import org.apache.maven.execution.MavenSession;
 import org.apache.maven.plugin.AbstractMojo;
@@ -117,7 +118,8 @@ public class TestMojo extends AbstractMojo {
 
   // TODO: support Windows
   private static void validatePlatform() throws MojoExecutionException {
-if (System.getProperty("os.name").toLowerCase().startsWith("windows")) {
+if (System.getProperty("os.name").toLowerCase(Locale.ENGLISH)
+.startsWith("windows")) {
   throw new MojoExecutionException("CMakeBuilder does not yet support " +
   "the Windows platform.");
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d619b48/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
--
diff --git 
a/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
 
b/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
index 0dcac0e..df479fd 100644
--- 
a/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
+++ 
b/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
@@ -105,6 +105,10 @@ public class ProtocMojo extends AbstractMojo {
 private boolean hasDirectoryChanged(File directory) throws IOException {
   File[] listing = directory.listFiles();
   boolean changed = false;
+  if (listing == null) {
+// not changed.
+return false;
+  }
   // Do not exit early, since we need to compute and save checksums
   // for each file within the directory.
   for (File f : 
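The `ProtocMojo` null check above guards a documented quirk of the JDK: `File.listFiles()` returns `null` — not an empty array — when the path is not a directory or an I/O error occurs, so iterating the result directly can throw a `NullPointerException`. A self-contained sketch of the defensive pattern (hypothetical helper, not Hadoop code):

```java
import java.io.File;

// Defensive handling of File.listFiles()'s null return contract.
public class SafeListing {
  public static int countEntries(File directory) {
    File[] listing = directory.listFiles();
    if (listing == null) {
      // Not a directory, or listing failed: treat as empty, don't crash.
      return 0;
    }
    return listing.length;
  }
}
```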

[20/50] [abbrv] hadoop git commit: YARN-5655. TestContainerManagerSecurity#testNMTokens is asserting. Contributed by Robert Kanter

2016-09-26 Thread jianhe
YARN-5655. TestContainerManagerSecurity#testNMTokens is asserting. Contributed 
by Robert Kanter


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c6d1d742
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c6d1d742
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c6d1d742

Branch: refs/heads/MAPREDUCE-6608
Commit: c6d1d742e70e7b8f1d89cf9a4780657646e6a367
Parents: 734d54c
Author: Jason Lowe 
Authored: Tue Sep 20 14:15:06 2016 +
Committer: Jason Lowe 
Committed: Tue Sep 20 14:15:06 2016 +

--
 .../apache/hadoop/yarn/server/TestContainerManagerSecurity.java | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c6d1d742/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
index ee3396d..408c1cc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
@@ -68,6 +68,8 @@ import org.apache.hadoop.yarn.server.nodemanager.Context;
 import org.apache.hadoop.yarn.server.nodemanager.NodeManager;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl;
 import 
org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.MockRMApp;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
 import 
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM;
 import 
org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager;
 import org.apache.hadoop.yarn.server.security.BaseNMTokenSecretManager;
@@ -205,6 +207,9 @@ public class TestContainerManagerSecurity extends 
KerberosSecurityTestcase {
 Resource r = Resource.newInstance(1024, 1);
 
 ApplicationId appId = ApplicationId.newInstance(1, 1);
+MockRMApp m = new MockRMApp(appId.getId(), appId.getClusterTimestamp(),
+RMAppState.NEW);
+yarnCluster.getResourceManager().getRMContext().getRMApps().put(appId, m);
 ApplicationAttemptId validAppAttemptId =
 ApplicationAttemptId.newInstance(appId, 1);
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[39/50] [abbrv] hadoop git commit: HDFS-10894. Remove the redundant charactors for command -saveNamespace in HDFSCommands.md Contributed by Yiqun Lin.

2016-09-26 Thread jianhe
HDFS-10894. Remove the redundant charactors for command -saveNamespace in 
HDFSCommands.md Contributed by Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c4480f7f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c4480f7f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c4480f7f

Branch: refs/heads/MAPREDUCE-6608
Commit: c4480f7f42f30f40900a73506790cd9bce3a823c
Parents: e5ef51e
Author: Brahma Reddy Battula 
Authored: Fri Sep 23 19:11:02 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Sep 23 19:11:02 2016 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c4480f7f/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 83035bb..ec94afd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -402,7 +402,7 @@ Usage:
 |: |: |
 | `-report` `[-live]` `[-dead]` `[-decommissioning]` | Reports basic 
filesystem information and statistics, The dfs usage can be different from "du" 
usage, because it measures raw space used by replication, checksums, snapshots 
and etc. on all the DNs. Optional flags may be used to filter the list of 
displayed DataNodes. |
 | `-safemode` enter\|leave\|get\|wait\|forceExit | Safe mode maintenance 
command. Safe mode is a Namenode state in which it 1. does not accept 
changes to the name space (read-only) 2. does not replicate or delete 
blocks. Safe mode is entered automatically at Namenode startup, and leaves 
safe mode automatically when the configured minimum percentage of blocks 
satisfies the minimum replication condition. If Namenode detects any anomaly 
then it will linger in safe mode till that issue is resolved. If that anomaly 
is the consequence of a deliberate action, then administrator can use -safemode 
forceExit to exit safe mode. The cases where forceExit may be required are 
1. Namenode metadata is not consistent. If Namenode detects that metadata has 
been modified out of band and can cause data loss, then Namenode will enter 
forceExit state. At that point user can either restart Namenode with correct 
metadata files or forceExit (if data loss is acceptable).2. Rollback c
 auses metadata to be replaced and rarely it can trigger safe mode forceExit 
state in Namenode. In that case you may proceed by issuing -safemode 
forceExit. Safe mode can also be entered manually, but then it can only be 
turned off manually as well. |
-| `-saveNamespace` `\[-beforeShutdown\]` | Save current namespace into storage 
directories and reset edits log. Requires safe mode. If the "beforeShutdown" 
option is given, the NameNode does a checkpoint if and only if no checkpoint 
has been done during a time window (a configurable number of checkpoint 
periods). This is usually used before shutting down the NameNode to prevent 
potential fsimage/editlog corruption. |
+| `-saveNamespace` `[-beforeShutdown]` | Save current namespace into storage 
directories and reset edits log. Requires safe mode. If the "beforeShutdown" 
option is given, the NameNode does a checkpoint if and only if no checkpoint 
has been done during a time window (a configurable number of checkpoint 
periods). This is usually used before shutting down the NameNode to prevent 
potential fsimage/editlog corruption. |
 | `-rollEdits` | Rolls the edit log on the active NameNode. |
 | `-restoreFailedStorage` true\|false\|check | This option will turn on/off 
automatic attempt to restore failed storage replicas. If a failed storage 
becomes available again the system will attempt to restore edits and/or fsimage 
during checkpoint. 'check' option will return current setting. |
 | `-refreshNodes` | Re-read the hosts and exclude files to update the set of 
Datanodes that are allowed to connect to the Namenode and those that should be 
decommissioned or recommissioned. |
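Taken together, the `-safemode` and `-saveNamespace` rows above describe an admin sequence like the following (illustrative only — these commands need a running NameNode and HDFS superuser privileges):

```
hdfs dfsadmin -safemode get                   # query current safe mode state
hdfs dfsadmin -safemode enter                 # -saveNamespace requires safe mode
hdfs dfsadmin -saveNamespace -beforeShutdown  # checkpoint only if none was done recently
hdfs dfsadmin -safemode leave
```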


[22/50] [abbrv] hadoop git commit: YARN-5656. Fix ReservationACLsTestBase. (Sean Po via asuresh)

2016-09-26 Thread jianhe
YARN-5656. Fix ReservationACLsTestBase. (Sean Po via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9f03b403
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9f03b403
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9f03b403

Branch: refs/heads/MAPREDUCE-6608
Commit: 9f03b403ec69658fc57bc0f6b832da0e3c746497
Parents: e45307c
Author: Arun Suresh 
Authored: Tue Sep 20 12:27:17 2016 -0700
Committer: Arun Suresh 
Committed: Tue Sep 20 12:27:17 2016 -0700

--
 .../reservation/NoOverCommitPolicy.java | 12 -
 .../exceptions/MismatchedUserException.java | 46 
 .../ReservationACLsTestBase.java|  2 +
 .../reservation/TestNoOverCommitPolicy.java | 21 -
 4 files changed, 2 insertions(+), 79 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9f03b403/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
index 814d4b5..55f1d00 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
@@ -21,7 +21,6 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.reservation;
 import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.yarn.api.records.ReservationId;
-import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.MismatchedUserException;
 import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException;
 import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.ResourceOverCommitException;
 
@@ -39,17 +38,6 @@ public class NoOverCommitPolicy implements SharingPolicy {
   public void validate(Plan plan, ReservationAllocation reservation)
   throws PlanningException {
 
-ReservationAllocation oldReservation =
-plan.getReservationById(reservation.getReservationId());
-
-// check updates are using same name
-if (oldReservation != null
-&& !oldReservation.getUser().equals(reservation.getUser())) {
-  throw new MismatchedUserException(
-  "Updating an existing reservation with mismatching user:"
-  + oldReservation.getUser() + " != " + reservation.getUser());
-}
-
 RLESparseResourceAllocation available = plan.getAvailableResourceOverTime(
 reservation.getUser(), reservation.getReservationId(),
 reservation.getStartTime(), reservation.getEndTime());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9f03b403/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
deleted file mode 100644
index 7b4419b..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
+++ /dev/null
@@ -1,46 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * 

[31/50] [abbrv] hadoop git commit: MAPREDUCE-6632. Master.getMasterAddress() should be updated to use YARN-4629 (templedf via rkanter)

2016-09-26 Thread jianhe
MAPREDUCE-6632. Master.getMasterAddress() should be updated to use YARN-4629 
(templedf via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4fc632ae
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4fc632ae
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4fc632ae

Branch: refs/heads/MAPREDUCE-6608
Commit: 4fc632ae19a1d6b0ec09cc7ead789a3cab1c2f1c
Parents: 40acace
Author: Robert Kanter 
Authored: Thu Sep 22 16:12:56 2016 -0700
Committer: Robert Kanter 
Committed: Thu Sep 22 16:12:56 2016 -0700

--
 .../hadoop-mapreduce-client-core/pom.xml|  6 +-
 .../java/org/apache/hadoop/mapred/Master.java   | 70 ++--
 .../org/apache/hadoop/mapred/TestMaster.java| 56 +---
 3 files changed, 28 insertions(+), 104 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4fc632ae/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
index 4de24c7..00bb11b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
@@ -37,7 +37,11 @@
   
 
   org.apache.hadoop
-  hadoop-yarn-common 
+  hadoop-yarn-client
+
+
+  org.apache.hadoop
+  hadoop-yarn-common
 
 
   org.apache.hadoop

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4fc632ae/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Master.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Master.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Master.java
index d84e395..ec00405 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Master.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Master.java
@@ -19,75 +19,45 @@
 package org.apache.hadoop.mapred;
 
 import java.io.IOException;
-import java.net.InetSocketAddress;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.MRConfig;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.SecurityUtil;
-import org.apache.hadoop.yarn.conf.HAUtil;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.client.util.YarnClientUtils;
 
 @Private
 @Unstable
 public class Master {
-
-  private static final Log LOG = LogFactory.getLog(Master.class);
-
   public enum State {
 INITIALIZING, RUNNING;
   }
 
-  public static String getMasterUserName(Configuration conf) {
-String framework = conf.get(MRConfig.FRAMEWORK_NAME, MRConfig.YARN_FRAMEWORK_NAME);
-if (framework.equals(MRConfig.CLASSIC_FRAMEWORK_NAME)) {
-  return conf.get(MRConfig.MASTER_USER_NAME);
-} 
-else {
-  return conf.get(YarnConfiguration.RM_PRINCIPAL);
-}
+  public static String getMasterAddress(Configuration conf) {
+String masterAddress = conf.get(MRConfig.MASTER_ADDRESS, "localhost:8012");
+
+return NetUtils.createSocketAddr(masterAddress, 8012,
+MRConfig.MASTER_ADDRESS).getHostName();
   }
-  
-  public static InetSocketAddress getMasterAddress(Configuration conf) {
-String masterAddress;
-String framework = conf.get(MRConfig.FRAMEWORK_NAME, MRConfig.YARN_FRAMEWORK_NAME);
+
+  public static String getMasterPrincipal(Configuration conf)
+  throws IOException {
+String masterPrincipal;
+String framework = conf.get(MRConfig.FRAMEWORK_NAME,
+MRConfig.YARN_FRAMEWORK_NAME);
+
 if (framework.equals(MRConfig.CLASSIC_FRAMEWORK_NAME)) {
-  masterAddress = conf.get(MRConfig.MASTER_ADDRESS, "localhost:8012");
-  return NetUtils.createSocketAddr(masterAddress, 8012, MRConfig.MASTER_ADDRESS);
-} else if (framework.equals(MRConfig.YARN_FRAMEWORK_NAME) &&
-HAUtil.isHAEnabled(conf)) {
-  YarnConfiguration yarnConf = new 
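Stripped of the removed HA branch, the new `getMasterAddress()` reduces to "parse a `host:port` config value against a default port and return just the host". A plain-JDK sketch of that behavior (hypothetical names; Hadoop does this via `NetUtils.createSocketAddr`):

```java
import java.net.InetSocketAddress;

// Parse "host[:port]" with a fallback port and return only the hostname.
public class MasterAddr {
  public static String hostOf(String hostPort, int defaultPort) {
    int idx = hostPort.lastIndexOf(':');
    String host = (idx >= 0) ? hostPort.substring(0, idx) : hostPort;
    int port = (idx >= 0) ? Integer.parseInt(hostPort.substring(idx + 1))
                          : defaultPort;
    // createUnresolved avoids a DNS lookup just to echo the host back.
    return InetSocketAddress.createUnresolved(host, port).getHostName();
  }
}
```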

[32/50] [abbrv] hadoop git commit: YARN-4973. YarnWebParams next.fresh.interval should be next.refresh.interval (templedf via rkanter)

2016-09-26 Thread jianhe
YARN-4973. YarnWebParams next.fresh.interval should be next.refresh.interval 
(templedf via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ffd4b7c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ffd4b7c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ffd4b7c

Branch: refs/heads/MAPREDUCE-6608
Commit: 5ffd4b7c1e1f3168483c708c7ed307a565389eb2
Parents: 4fc632a
Author: Robert Kanter 
Authored: Thu Sep 22 16:45:34 2016 -0700
Committer: Robert Kanter 
Committed: Thu Sep 22 16:45:34 2016 -0700

--
 .../src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ffd4b7c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java
index 3792649..a34273c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java
@@ -40,5 +40,5 @@ public interface YarnWebParams {
   String NODE_STATE = "node.state";
   String NODE_LABEL = "node.label";
   String WEB_UI_TYPE = "web.ui.type";
-  String NEXT_REFRESH_INTERVAL = "next.fresh.interval";
+  String NEXT_REFRESH_INTERVAL = "next.refresh.interval";
 }


[25/50] [abbrv] hadoop git commit: YARN-4591. YARN Web UIs should provide a robots.txt. (Sidharta Seethana via wangda)

2016-09-26 Thread jianhe
YARN-4591. YARN Web UIs should provide a robots.txt. (Sidharta Seethana via 
wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5a58bfee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5a58bfee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5a58bfee

Branch: refs/heads/MAPREDUCE-6608
Commit: 5a58bfee30a662b1b556048504f66f9cf00d182a
Parents: 0e918df
Author: Wangda Tan 
Authored: Tue Sep 20 17:20:50 2016 -0700
Committer: Wangda Tan 
Committed: Tue Sep 20 17:20:50 2016 -0700

--
 .../apache/hadoop/yarn/webapp/Dispatcher.java   |  9 +
 .../org/apache/hadoop/yarn/webapp/WebApp.java   |  4 +-
 .../hadoop/yarn/webapp/view/RobotsTextPage.java | 39 
 .../apache/hadoop/yarn/webapp/TestWebApp.java   | 26 +
 4 files changed, 77 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a58bfee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
index 66dd21b..d519dbb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.http.HtmlQuoting;
 import org.apache.hadoop.yarn.webapp.Controller.RequestContext;
 import org.apache.hadoop.yarn.webapp.Router.Dest;
 import org.apache.hadoop.yarn.webapp.view.ErrorPage;
+import org.apache.hadoop.yarn.webapp.view.RobotsTextPage;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -117,6 +118,14 @@ public class Dispatcher extends HttpServlet {
 }
 Controller.RequestContext rc =
 injector.getInstance(Controller.RequestContext.class);
+
+//short-circuit robots.txt serving for all YARN webapps.
+if (uri.equals(RobotsTextPage.ROBOTS_TXT_PATH)) {
+  rc.setStatus(HttpServletResponse.SC_FOUND);
+  render(RobotsTextPage.class);
+  return;
+}
+
 if (setCookieParams(rc, req) > 0) {
   Cookie ec = rc.cookies().get(ERROR_COOKIE);
   if (ec != null) {
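
The hunk above short-circuits `/robots.txt` before normal controller dispatch, so web crawlers are answered directly instead of being routed through a YARN webapp controller. A rough, self-contained sketch of the idea (class and method names here are illustrative stand-ins, not the actual YARN `RobotsTextPage`/`Dispatcher` API, which renders through the webapp view layer):

```java
// Minimal sketch: serve a crawler-blocking robots.txt before regular dispatch.
// RobotsText is a hypothetical stand-in for YARN's RobotsTextPage.
public class RobotsText {
    public static final String ROBOTS_TXT_PATH = "/robots.txt";

    // Disallow-all policy: tell every crawler to index nothing.
    public static String content() {
        return "User-agent: *\nDisallow: /\n";
    }

    // Returns the response body for a URI, short-circuiting robots.txt
    // so it never reaches controller lookup.
    public static String dispatch(String uri) {
        if (ROBOTS_TXT_PATH.equals(uri)) {
            return content();
        }
        return "<html>...regular page...</html>";
    }

    public static void main(String[] args) {
        System.out.print(dispatch(ROBOTS_TXT_PATH));
    }
}
```

The real patch additionally registers the path with the Guice servlet module so the `Dispatcher` sees the request at all.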

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a58bfee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
index 2c21d1b..fe800f0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
@@ -29,6 +29,7 @@ import java.util.Map;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.http.HttpServer2;
+import org.apache.hadoop.yarn.webapp.view.RobotsTextPage;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -158,7 +159,8 @@ public abstract class WebApp extends ServletModule {
   public void configureServlets() {
 setup();
 
-serve("/", "/__stop").with(Dispatcher.class);
+serve("/", "/__stop", RobotsTextPage.ROBOTS_TXT_PATH)
+.with(Dispatcher.class);
 
 for (String path : this.servePathSpecs) {
   serve(path).with(Dispatcher.class);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a58bfee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
new file mode 100644
index 000..b15d492
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
@@ -0,0 +1,39 @@
+/*
+ * *
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional 

[07/50] [abbrv] hadoop git commit: HADOOP-13621. s3:// should have been fully cut off from trunk. Contributed by Mingliang Liu.

2016-09-26 Thread jianhe
HADOOP-13621. s3:// should have been fully cut off from trunk. Contributed by 
Mingliang Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/96142efa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/96142efa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/96142efa

Branch: refs/heads/MAPREDUCE-6608
Commit: 96142efa2dded3212d07e7e14e251066865fb7d2
Parents: f67237c
Author: Mingliang Liu 
Authored: Fri Sep 16 18:36:26 2016 -0700
Committer: Mingliang Liu 
Committed: Sat Sep 17 22:07:46 2016 -0700

--
 .../src/site/markdown/tools/hadoop-aws/index.md | 44 +++-
 1 file changed, 6 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/96142efa/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
--
diff --git 
a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md 
b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
index 7fcadb9..160aa46 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
@@ -28,7 +28,7 @@ HADOOP_OPTIONAL_TOOLS in hadoop-env.sh has 'hadoop-aws' in 
the list.
 
 ### Features
 
-**NOTE: `s3:` is being phased out. Use `s3n:` or `s3a:` instead.**
+**NOTE: `s3:` has been phased out. Use `s3n:` or `s3a:` instead.**
 
 1. The second-generation, `s3n:` filesystem, making it easy to share
 data between hadoop and other applications via the S3 object store.
@@ -86,38 +86,6 @@ these instructions —and be aware that all issues related 
to S3 integration
 in EMR can only be addressed by Amazon themselves: please raise your issues
 with them.
 
-## S3
-
-The `s3://` filesystem is the original S3 store in the Hadoop codebase.
-It implements an inode-style filesystem atop S3, and was written to
-provide scaleability when S3 had significant limits on the size of blobs.
-It is incompatible with any other application's use of data in S3.
-
-It is now deprecated and will be removed in Hadoop 3. Please do not use,
-and migrate off data which is on it.
-
-### Dependencies
-
-* `jets3t` jar
-* `commons-codec` jar
-* `commons-logging` jar
-* `httpclient` jar
-* `httpcore` jar
-* `java-xmlbuilder` jar
-
-### Authentication properties
-
-<property>
-  <name>fs.s3.awsAccessKeyId</name>
-  <description>AWS access key ID</description>
-</property>
-
-<property>
-  <name>fs.s3.awsSecretAccessKey</name>
-  <description>AWS secret key</description>
-</property>
-
 ## S3N
 
 S3N was the first S3 Filesystem client which used "native" S3 objects, hence
@@ -171,16 +139,16 @@ it should be used wherever possible.
 ### Other properties
 
 <property>
-  <name>fs.s3.buffer.dir</name>
+  <name>fs.s3n.buffer.dir</name>
   <value>${hadoop.tmp.dir}/s3</value>
-  <description>Determines where on the local filesystem the s3:/s3n: filesystem
+  <description>Determines where on the local filesystem the s3n: filesystem
   should store files before sending them to S3
   (or after retrieving them from S3).
   </description>
 </property>
 
 <property>
-  <name>fs.s3.maxRetries</name>
+  <name>fs.s3n.maxRetries</name>
   <value>4</value>
   <description>The maximum number of retries for reading or writing files to
     S3, before we signal failure to the application.
   </description>
 </property>
 
 <property>
-  <name>fs.s3.sleepTimeSeconds</name>
+  <name>fs.s3n.sleepTimeSeconds</name>
   <value>10</value>
   <description>The number of seconds to sleep between each S3 retry.
   </description>
@@ -1011,7 +979,7 @@ includes `distcp`.
 
 ### `ClassNotFoundException: org.apache.hadoop.fs.s3a.S3AFileSystem`
 
-(or `org.apache.hadoop.fs.s3native.NativeS3FileSystem`, 
`org.apache.hadoop.fs.s3.S3FileSystem`).
+(or `org.apache.hadoop.fs.s3native.NativeS3FileSystem`).
 
 These are the Hadoop classes, found in the `hadoop-aws` JAR. An exception
 reporting one of these classes is missing means that this JAR is not on


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[47/50] [abbrv] hadoop git commit: YARN-3877. YarnClientImpl.submitApplication swallows exceptions. Contributed by Varun Saxena

2016-09-26 Thread jianhe
YARN-3877. YarnClientImpl.submitApplication swallows exceptions. Contributed by 
Varun Saxena


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e4e72db5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e4e72db5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e4e72db5

Branch: refs/heads/MAPREDUCE-6608
Commit: e4e72db5f9f305b493138ab36f073fe5d1750ad8
Parents: 3e37e24
Author: Naganarasimha 
Authored: Sun Sep 25 17:36:30 2016 +0530
Committer: Naganarasimha 
Committed: Sun Sep 25 17:36:30 2016 +0530

--
 .../yarn/client/api/impl/YarnClientImpl.java| 13 +++--
 .../yarn/client/api/impl/TestYarnClient.java| 52 
 2 files changed, 60 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4e72db5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
index 7760521..80e453f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
@@ -306,9 +306,10 @@ public class YarnClientImpl extends YarnClient {
 try {
   Thread.sleep(submitPollIntervalMillis);
 } catch (InterruptedException ie) {
-  LOG.error("Interrupted while waiting for application "
-  + applicationId
-  + " to be successfully submitted.");
+  String msg = "Interrupted while waiting for application "
+  + applicationId + " to be successfully submitted.";
+  LOG.error(msg);
+  throw new YarnException(msg, ie);
 }
   } catch (ApplicationNotFoundException ex) {
 // FailOver or RM restart happens before RMStateStore saves
@@ -446,8 +447,10 @@ public class YarnClientImpl extends YarnClient {
 Thread.sleep(asyncApiPollIntervalMillis);
   }
 } catch (InterruptedException e) {
-  LOG.error("Interrupted while waiting for application " + applicationId
-  + " to be killed.");
+  String msg = "Interrupted while waiting for application "
+  + applicationId + " to be killed.";
+  LOG.error(msg);
+  throw new YarnException(msg, e);
 }
   }
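
The fix replaces log-and-continue with log-and-rethrow: an interrupt during the submission or kill polling loop now surfaces to the caller as a `YarnException` instead of being swallowed. A minimal standalone sketch of the same pattern (`PollingClient` and `PollException` are illustrative names, not YARN classes; restoring the interrupt flag is an extra courtesy not shown in the patch):

```java
// Sketch: an InterruptedException during a polling sleep is wrapped and
// rethrown so callers observe the failure, rather than logged and dropped.
public class PollingClient {
    static class PollException extends Exception {
        PollException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Waits between polls; propagates interruption instead of swallowing it.
    public static void waitBetweenPolls(long millis) throws PollException {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();  // restore interrupt status
            throw new PollException("Interrupted while polling", ie);
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();      // simulate an interrupt
        try {
            waitBetweenPolls(1000);
        } catch (PollException e) {
            System.out.println("caller sees cause: "
                + e.getCause().getClass().getSimpleName());
        }
    }
}
```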
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4e72db5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
index e462be1..19966ad 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
@@ -27,6 +27,7 @@ import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
 import java.io.IOException;
+import java.lang.Thread.State;
 import java.nio.ByteBuffer;
 import java.security.PrivilegedExceptionAction;
 import java.util.ArrayList;
@@ -201,6 +202,57 @@ public class TestYarnClient {
 client.stop();
   }
 
+  @SuppressWarnings("deprecation")
+  @Test (timeout = 2)
+  public void testSubmitApplicationInterrupted() throws IOException {
+Configuration conf = new Configuration();
+int pollIntervalMs = 1000;
+conf.setLong(YarnConfiguration.YARN_CLIENT_APP_SUBMISSION_POLL_INTERVAL_MS,
+pollIntervalMs);
+try (final YarnClient client = new MockYarnClient()) {
+  client.init(conf);
+  client.start();
+  // Submit the application and then interrupt it while its waiting
+  // for submission to be successful.
+  final class SubmitThread extends Thread {
+private boolean isInterrupted  = false;
+@Override
+public void run() {
+  ApplicationSubmissionContext context =
+  mock(ApplicationSubmissionContext.class);
+  ApplicationId applicationId = ApplicationId.newInstance(
+  

[48/50] [abbrv] hadoop git commit: YARN-5663. Small refactor in ZKRMStateStore. Contributed by Oleksii Dymytrov.

2016-09-26 Thread jianhe
YARN-5663. Small refactor in ZKRMStateStore. Contributed by Oleksii Dymytrov.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/14a696f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/14a696f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/14a696f3

Branch: refs/heads/MAPREDUCE-6608
Commit: 14a696f369f7e3802587f57c8fff3aa51b5ab576
Parents: 5707f88
Author: Akira Ajisaka 
Authored: Mon Sep 26 15:00:01 2016 +0900
Committer: Akira Ajisaka 
Committed: Mon Sep 26 15:00:01 2016 +0900

--
 .../server/resourcemanager/recovery/ZKRMStateStore.java | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/14a696f3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index 9e05f6d..c24b3e9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -748,17 +748,14 @@ public class ZKRMStateStore extends RMStateStore {
 String nodeCreatePath =
 getNodePath(dtMasterKeysRootPath, DELEGATION_KEY_PREFIX
 + delegationKey.getKeyId());
-ByteArrayOutputStream os = new ByteArrayOutputStream();
-DataOutputStream fsOut = new DataOutputStream(os);
 if (LOG.isDebugEnabled()) {
   LOG.debug("Storing RMDelegationKey_" + delegationKey.getKeyId());
 }
-delegationKey.write(fsOut);
-try {
+ByteArrayOutputStream os = new ByteArrayOutputStream();
+try(DataOutputStream fsOut = new DataOutputStream(os)) {
+  delegationKey.write(fsOut);
   safeCreate(nodeCreatePath, os.toByteArray(), zkAcl,
   CreateMode.PERSISTENT);
-} finally {
-  os.close();
 }
   }
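
The refactor moves `delegationKey.write(fsOut)` inside a try-with-resources block, so the stream is closed on every path without a manual `finally`. A standalone sketch of the same serialization shape (`KeySerializer` and `serialize` are illustrative, not ZKRMStateStore code):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: open the DataOutputStream in try-with-resources so it is
// flushed and closed even when the write throws.
public class KeySerializer {
    public static byte[] serialize(int keyId) throws IOException {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(os)) {
            out.writeInt(keyId);  // stands in for delegationKey.write(out)
        }                         // out (and the underlying os) closed here
        return os.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(serialize(42).length);  // an int is 4 bytes
    }
}
```

Closing the outer `DataOutputStream` closes the wrapped `ByteArrayOutputStream` too; for a byte-array stream `close()` is a no-op, so `toByteArray()` remains valid afterwards.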
 





[49/50] [abbrv] hadoop git commit: HADOOP-13584. hadoop-aliyun: merge HADOOP-12756 branch back.

2016-09-26 Thread jianhe
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5707f88d/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
new file mode 100644
index 000..88c83b5
--- /dev/null
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -0,0 +1,294 @@
+
+
+# Hadoop-Aliyun module: Integration with Aliyun Web Services
+
+
+
+## Overview
+
+The `hadoop-aliyun` module provides support for Aliyun integration with
+[Aliyun Object Storage Service (Aliyun 
OSS)](https://www.aliyun.com/product/oss).
+The generated JAR file, `hadoop-aliyun.jar` also declares a transitive
+dependency on all external artifacts which are needed for this support — 
enabling
+downstream applications to easily use this support.
+
+To make it part of Apache Hadoop's default classpath, simply make sure
+that HADOOP_OPTIONAL_TOOLS in hadoop-env.sh has 'hadoop-aliyun' in the list.
+
+### Features
+
+* Read and write data stored in Aliyun OSS.
+* Present a hierarchical file system view by implementing the standard Hadoop
+[`FileSystem`](../api/org/apache/hadoop/fs/FileSystem.html) interface.
+* Can act as a source of data in a MapReduce job, or a sink.
+
+### Warning #1: Object Stores are not filesystems.
+
+Aliyun OSS is an example of "an object store". In order to achieve scalability
+and especially high availability, Aliyun OSS has relaxed some of the 
constraints
+which classic "POSIX" filesystems promise.
+
+
+
+Specifically
+
+1. Atomic operations: `delete()` and `rename()` are implemented by recursive
+file-by-file operations. They take time at least proportional to the number of 
files,
+during which time partial updates may be visible. `delete()` and `rename()`
+can not guarantee atomicity. If the operations are interrupted, the filesystem
+is left in an intermediate state.
+2. File owner and group are persisted, but the permissions model is not 
enforced.
+Authorization occurs at the level of the entire Aliyun account via
+[Aliyun Resource Access Management (Aliyun 
RAM)](https://www.aliyun.com/product/ram).
+3. Directory last access time is not tracked.
+4. The append operation is not supported.
+
+### Warning #2: Directory last access time is not tracked
+
+Features of Hadoop relying on directory last access time can have unexpected
+behaviour. E.g. the AggregatedLogDeletionService of YARN will not remove the
+appropriate logfiles.
+
+### Warning #3: Your Aliyun credentials are valuable
+
+Your Aliyun credentials not only pay for services, they offer read and write
+access to the data. Anyone with the credentials can not only read your datasets
+— they can delete them.
+
+Do not inadvertently share these credentials through means such as:
+1. Checking in to SCM any configuration files containing the secrets.
+2. Logging them to a console, as they invariably end up being seen.
+3. Defining filesystem URIs with the credentials in the URL, such as
+`oss://accessKeyId:accessKeySecret@directory/file`. They will end up in
+logs and error messages.
+4. Including the secrets in bug reports.
+
+If you do any of these: change your credentials immediately!
+
+### Warning #4: The Aliyun OSS client provided by Aliyun E-MapReduce is
+different from this implementation
+
+Specifically: on Aliyun E-MapReduce, `oss://` is also supported but with
+a different implementation. If you are using Aliyun E-MapReduce,
+follow these instructions —and be aware that all issues related to Aliyun
+OSS integration in E-MapReduce can only be addressed by Aliyun themselves:
+please raise your issues with them.
+
+## OSS
+
+### Authentication properties
+
+<property>
+  <name>fs.oss.accessKeyId</name>
+  <description>Aliyun access key ID</description>
+</property>
+
+<property>
+  <name>fs.oss.accessKeySecret</name>
+  <description>Aliyun access key secret</description>
+</property>
+
+<property>
+  <name>fs.oss.credentials.provider</name>
+  <description>
+    Class name of a credentials provider that implements
+    com.aliyun.oss.common.auth.CredentialsProvider. Omit if using
+    access/secret keys or another authentication mechanism. The specified
+    class must provide an accessible constructor accepting java.net.URI and
+    org.apache.hadoop.conf.Configuration, or an accessible default constructor.
+  </description>
+</property>
+
+
+### Other properties
+
+<property>
+  <name>fs.oss.endpoint</name>
+  <description>Aliyun OSS endpoint to connect to. An up-to-date list is
+    provided in the Aliyun OSS Documentation.
+  </description>
+</property>
+
+<property>
+  <name>fs.oss.proxy.host</name>
+  <description>Hostname of the (optional) proxy server for Aliyun OSS connection</description>
+</property>
+
+<property>
+  <name>fs.oss.proxy.port</name>
+  <description>Proxy server port</description>
+</property>
+
+<property>
+  <name>fs.oss.proxy.username</name>
+  <description>Username for authenticating with proxy server</description>
+</property>
+
+<property>
+  <name>fs.oss.proxy.password</name>
+  <description>Password for authenticating with proxy server.</description>
+</property>
+
+<property>
+  <name>fs.oss.proxy.domain</name>
+  

[13/50] [abbrv] hadoop git commit: HDFS-10868. Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED.

2016-09-26 Thread jianhe
HDFS-10868. Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c54f6ef3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c54f6ef3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c54f6ef3

Branch: refs/heads/MAPREDUCE-6608
Commit: c54f6ef30fbd5fbb9663e182b76bafb55ef567ad
Parents: b8a30f2
Author: Andrew Wang 
Authored: Mon Sep 19 11:17:03 2016 -0700
Committer: Andrew Wang 
Committed: Mon Sep 19 11:17:03 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java | 3 ---
 .../src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java  | 4 
 .../test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java  | 2 --
 3 files changed, 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c54f6ef3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 642d4c8..4c754d9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -131,9 +131,6 @@ public interface HdfsClientConfigKeys {
   "dfs.client.key.provider.cache.expiry";
   longDFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT =
   TimeUnit.DAYS.toMillis(10); // 10 days
-  String  DFS_HDFS_BLOCKS_METADATA_ENABLED =
-  "dfs.datanode.hdfs-blocks-metadata.enabled";
-  boolean DFS_HDFS_BLOCKS_METADATA_ENABLED_DEFAULT = false;
 
   String  DFS_DATANODE_KERBEROS_PRINCIPAL_KEY =
   "dfs.datanode.kerberos.principal";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c54f6ef3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 3532d25..df45e2a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -58,10 +58,6 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   .DFS_CHECKSUM_TYPE_KEY;
   public static final String  DFS_CHECKSUM_TYPE_DEFAULT =
   HdfsClientConfigKeys.DFS_CHECKSUM_TYPE_DEFAULT;
-  public static final String  DFS_HDFS_BLOCKS_METADATA_ENABLED =
-  HdfsClientConfigKeys.DFS_HDFS_BLOCKS_METADATA_ENABLED;
-  public static final boolean DFS_HDFS_BLOCKS_METADATA_ENABLED_DEFAULT =
-  HdfsClientConfigKeys.DFS_HDFS_BLOCKS_METADATA_ENABLED_DEFAULT;
   public static final String DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT =
   HdfsClientConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT;
   public static final String  DFS_WEBHDFS_NETTY_LOW_WATERMARK =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c54f6ef3/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
index 46420f1..bf29428 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
@@ -76,8 +76,6 @@ public class TestHdfsConfigFields extends 
TestConfigurationFieldsBase {
 configurationPropsToSkipCompare
 .add("dfs.corruptfilesreturned.max");
 configurationPropsToSkipCompare
-.add("dfs.datanode.hdfs-blocks-metadata.enabled");
-configurationPropsToSkipCompare
 .add("dfs.metrics.session-id");
 configurationPropsToSkipCompare
 .add("dfs.datanode.synconclose");





[43/50] [abbrv] hadoop git commit: HDFS-10876. Dispatcher#dispatch should log IOException stacktrace. Contributed by Manoj Govindassamy.

2016-09-26 Thread jianhe
HDFS-10876. Dispatcher#dispatch should log IOException stacktrace. Contributed 
by Manoj Govindassamy.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/74b3dd51
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/74b3dd51
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/74b3dd51

Branch: refs/heads/MAPREDUCE-6608
Commit: 74b3dd514c86b46197e2e19d9824a423715cab30
Parents: 6e849cb
Author: Wei-Chiu Chuang 
Authored: Fri Sep 23 13:26:57 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Fri Sep 23 13:26:57 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/74b3dd51/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
index a75cb30..aea0ae4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
@@ -354,7 +354,7 @@ public class Dispatcher {
 target.getDDatanode().setHasSuccess();
 LOG.info("Successfully moved " + this);
   } catch (IOException e) {
-LOG.warn("Failed to move " + this + ": " + e.getMessage());
+LOG.warn("Failed to move " + this, e);
 target.getDDatanode().setHasFailure();
 // Proxy or target may have some issues, delay before using these nodes
 // further in order to avoid a potential storm of "threads quota
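
The one-line change passes the `IOException` itself to `LOG.warn`, so the stack trace is recorded rather than just the message. The difference can be demonstrated with any logging framework; this standalone sketch uses `java.util.logging` purely for a dependency-free illustration (the real Dispatcher logs through Hadoop's commons-logging/slf4j facade):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

public class MoveLogging {
    // Logs a failure twice -- message-only (old behaviour) and with the
    // Throwable argument (new behaviour) -- and returns the captured text.
    public static String logBothWays(IOException e) {
        Logger log = Logger.getLogger("move");
        log.setUseParentHandlers(false);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        StreamHandler handler = new StreamHandler(out, new SimpleFormatter());
        log.addHandler(handler);
        log.warning("Failed to move block: " + e.getMessage()); // no stack trace
        log.log(Level.WARNING, "Failed to move block", e);      // stack trace logged
        handler.flush();
        log.removeHandler(handler);
        return out.toString();
    }

    public static void main(String[] args) {
        String text = logBothWays(new IOException("connection reset"));
        // The Throwable-argument call records the exception class and frames.
        System.out.println(text.contains("java.io.IOException"));
    }
}
```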





[05/50] [abbrv] hadoop git commit: YARN-5163. Migrate TestClientToAMTokens and TestClientRMTokens tests from the old RPC engine. Contributed by Wei Zhou and Kai Zheng

2016-09-26 Thread jianhe
YARN-5163. Migrate TestClientToAMTokens and TestClientRMTokens tests from the 
old RPC engine. Contributed by Wei Zhou and Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/58bae354
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/58bae354
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/58bae354

Branch: refs/heads/MAPREDUCE-6608
Commit: 58bae3544747ad47373925add35b39bda8be8c5a
Parents: 8a40953
Author: Kai Zheng 
Authored: Sun Sep 18 08:43:36 2016 +0800
Committer: Kai Zheng 
Committed: Sun Sep 18 08:43:36 2016 +0800

--
 .../java/org/apache/hadoop/ipc/TestRpcBase.java |   4 +-
 .../src/test/proto/test_rpc_service.proto   |   4 +
 .../security/TestClientToAMTokens.java  | 108 ++-
 3 files changed, 65 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/58bae354/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
index 3c5885f..0d2f975 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
@@ -31,8 +31,6 @@ import org.junit.Assert;
 
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.retry.RetryPolicy;
-import org.apache.hadoop.ipc.protobuf.TestProtos;
-import org.apache.hadoop.ipc.protobuf.TestRpcServiceProtos;
 import org.apache.hadoop.security.KerberosInfo;
 import org.apache.hadoop.security.SaslRpcServer.AuthMethod;
 import org.apache.hadoop.security.token.Token;
@@ -481,7 +479,7 @@ public class TestRpcBase {
 }
   }
 
-  protected static TestProtos.EmptyRequestProto newEmptyRequest() {
+  public static TestProtos.EmptyRequestProto newEmptyRequest() {
 return TestProtos.EmptyRequestProto.newBuilder().build();
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/58bae354/hadoop-common-project/hadoop-common/src/test/proto/test_rpc_service.proto
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/proto/test_rpc_service.proto 
b/hadoop-common-project/hadoop-common/src/test/proto/test_rpc_service.proto
index 06f6c4f..32eafeb 100644
--- a/hadoop-common-project/hadoop-common/src/test/proto/test_rpc_service.proto
+++ b/hadoop-common-project/hadoop-common/src/test/proto/test_rpc_service.proto
@@ -67,3 +67,7 @@ service NewerProtobufRpcProto {
   rpc ping(EmptyRequestProto) returns (EmptyResponseProto);
   rpc echo(EmptyRequestProto) returns (EmptyResponseProto);
 }
+
+service CustomProto {
+  rpc ping(EmptyRequestProto) returns (EmptyResponseProto);
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/58bae354/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java
index d36fb9f..a451356 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java
@@ -18,33 +18,19 @@
 
 package org.apache.hadoop.yarn.server.resourcemanager.security;
 
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.resourcemanager
-.ParameterizedSchedulerTestBase;
-import static org.junit.Assert.fail;
-import org.junit.Before;
-import static org.mockito.Matchers.any;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-import java.io.IOException;
-import java.lang.annotation.Annotation;
-import java.net.InetSocketAddress;
-import java.nio.ByteBuffer;
-import java.security.PrivilegedAction;
-import java.security.PrivilegedExceptionAction;
-import java.util.Timer;
-import 

[27/50] [abbrv] hadoop git commit: HDFS-10861. Refactor StripeReaders and use ECChunk version decode API. Contributed by Sammi Chen

2016-09-26 Thread jianhe
HDFS-10861. Refactor StripeReaders and use ECChunk version decode API. 
Contributed by Sammi Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/734d54c1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/734d54c1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/734d54c1

Branch: refs/heads/MAPREDUCE-6608
Commit: 734d54c1a8950446e68098f62d8964e02ecc2890
Parents: 2b66d9e
Author: Kai Zheng 
Authored: Wed Sep 21 21:34:48 2016 +0800
Committer: Kai Zheng 
Committed: Wed Sep 21 21:34:48 2016 +0800

--
 .../apache/hadoop/io/ElasticByteBufferPool.java |   2 +-
 .../apache/hadoop/io/erasurecode/ECChunk.java   |  22 +
 .../io/erasurecode/rawcoder/CoderUtil.java  |   3 +
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  20 +-
 .../hadoop/hdfs/DFSStripedInputStream.java  | 654 +++
 .../hadoop/hdfs/PositionStripeReader.java   | 104 +++
 .../hadoop/hdfs/StatefulStripeReader.java   |  95 +++
 .../org/apache/hadoop/hdfs/StripeReader.java| 463 +
 .../hadoop/hdfs/util/StripedBlockUtil.java  | 158 ++---
 .../hadoop/hdfs/util/TestStripedBlockUtil.java  |   1 -
 10 files changed, 844 insertions(+), 678 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/734d54c1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
index c35d608..023f37f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
@@ -85,7 +85,7 @@ public final class ElasticByteBufferPool implements 
ByteBufferPool {
   private final TreeMap getBufferTree(boolean direct) {
 return direct ? directBuffers : buffers;
   }
-  
+
   @Override
   public synchronized ByteBuffer getBuffer(boolean direct, int length) {
 TreeMap tree = getBufferTree(direct);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/734d54c1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index cd7c6be..536715b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -29,6 +29,9 @@ public class ECChunk {
 
   private ByteBuffer chunkBuffer;
 
+  // TODO: fold this into a more general flags field
+  private boolean allZero = false;
+
   /**
* Wrapping a ByteBuffer
* @param buffer buffer to be wrapped by the chunk
@@ -37,6 +40,13 @@ public class ECChunk {
 this.chunkBuffer = buffer;
   }
 
+  public ECChunk(ByteBuffer buffer, int offset, int len) {
+ByteBuffer tmp = buffer.duplicate();
+tmp.position(offset);
+tmp.limit(offset + len);
+this.chunkBuffer = tmp.slice();
+  }
+
   /**
* Wrapping a bytes array
* @param buffer buffer to be wrapped by the chunk
@@ -45,6 +55,18 @@ public class ECChunk {
 this.chunkBuffer = ByteBuffer.wrap(buffer);
   }
 
+  public ECChunk(byte[] buffer, int offset, int len) {
+this.chunkBuffer = ByteBuffer.wrap(buffer, offset, len);
+  }
+
+  public boolean isAllZero() {
+return allZero;
+  }
+
+  public void setAllZero(boolean allZero) {
+this.allZero = allZero;
+  }
+
   /**
* Convert to ByteBuffer
* @return ByteBuffer

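The new `ECChunk(ByteBuffer, int, int)` constructor above creates a zero-copy view over part of an existing buffer via `duplicate()`/`position()`/`limit()`/`slice()`. A minimal standalone sketch of the same idiom (class and helper names here are illustrative, not part of Hadoop):

```java
import java.nio.ByteBuffer;

public class SliceDemo {
    // Build an independent view of [offset, offset + len) without copying,
    // mirroring the new ECChunk(ByteBuffer, int, int) constructor.
    static ByteBuffer slice(ByteBuffer buffer, int offset, int len) {
        ByteBuffer tmp = buffer.duplicate(); // shares content, independent position/limit
        tmp.position(offset);
        tmp.limit(offset + len);
        return tmp.slice();                  // position 0 of the view maps to 'offset'
    }

    // Convenience helper: materialize the view's contents for inspection.
    static byte[] sliceContents(byte[] src, int offset, int len) {
        ByteBuffer view = slice(ByteBuffer.wrap(src), offset, len);
        byte[] out = new byte[view.remaining()];
        view.get(out);
        return out;
    }

    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 4, 5};
        System.out.println(java.util.Arrays.toString(sliceContents(data, 1, 3))); // [2, 3, 4]
    }
}
```

Because `duplicate()` shares the backing storage, writes through the slice are visible in the original buffer, which is what lets the striped reader hand out per-cell chunks without copying.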
http://git-wip-us.apache.org/repos/asf/hadoop/blob/734d54c1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
index b22d44f..ef34639 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
@@ -115,6 +115,9 @@ final class CoderUtil {
 

[37/50] [abbrv] hadoop git commit: YARN-5539. TimelineClient failed to retry on java.net.SocketTimeoutException: Read timed out (Junping Du via Varun Saxena)

2016-09-26 Thread jianhe
YARN-5539. TimelineClient failed to retry on java.net.SocketTimeoutException: 
Read timed out (Junping Du via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b8a2d7b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b8a2d7b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b8a2d7b8

Branch: refs/heads/MAPREDUCE-6608
Commit: b8a2d7b8fc96302ba1ef99d24392f463734f1b82
Parents: 3293a7d
Author: Varun Saxena 
Authored: Fri Sep 23 13:27:31 2016 +0530
Committer: Varun Saxena 
Committed: Fri Sep 23 13:27:31 2016 +0530

--
 .../apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8a2d7b8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
index 56da2a7..dc4d3e6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
@@ -265,7 +265,8 @@ public class TimelineClientImpl extends TimelineClient {
 public boolean shouldRetryOn(Exception e) {
   // Only retry on connection exceptions
   return (e instanceof ClientHandlerException)
-  && (e.getCause() instanceof ConnectException);
+  && (e.getCause() instanceof ConnectException ||
+  e.getCause() instanceof SocketTimeoutException);
 }
   };
   try {

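The YARN-5539 change above widens the retry predicate so a read timeout, not only a connection refusal, triggers a retry. A self-contained sketch of that predicate; `WrapperException` stands in for Jersey's `ClientHandlerException` (an assumption of this sketch, since the real class needs the Jersey client on the classpath):

```java
import java.net.ConnectException;
import java.net.SocketTimeoutException;

public class RetryPolicyDemo {
    // Stand-in for com.sun.jersey.api.client.ClientHandlerException, which
    // wraps the underlying socket error as its cause.
    static class WrapperException extends RuntimeException {
        WrapperException(Throwable cause) { super(cause); }
    }

    // Retry only on wrapped connection-level failures: refusal or read timeout.
    static boolean shouldRetryOn(Exception e) {
        return (e instanceof WrapperException)
            && (e.getCause() instanceof ConnectException
                || e.getCause() instanceof SocketTimeoutException);
    }

    public static void main(String[] args) {
        System.out.println(shouldRetryOn(
            new WrapperException(new SocketTimeoutException("Read timed out")))); // true
        System.out.println(shouldRetryOn(
            new WrapperException(new IllegalStateException())));                  // false
    }
}
```

Checking the cause rather than the exception itself matters: the HTTP client surfaces both failure modes through the same wrapper type, and only the cause distinguishes retryable network errors from application errors.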

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[44/50] [abbrv] hadoop git commit: HDFS-10866. Fix Eclipse Java compile errors related to generic parameters. Contributed by Konstantin Shvachko.

2016-09-26 Thread jianhe
HDFS-10866. Fix Eclipse Java compile errors related to generic parameters. 
Contributed by Konstantin Shvachko.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6eb700ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6eb700ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6eb700ee

Branch: refs/heads/MAPREDUCE-6608
Commit: 6eb700ee7469cd77524676d62cbd20043f111288
Parents: 74b3dd5
Author: Konstantin V Shvachko 
Authored: Fri Sep 23 11:25:15 2016 -0700
Committer: Konstantin V Shvachko 
Committed: Fri Sep 23 14:10:11 2016 -0700

--
 .../main/java/org/apache/hadoop/io/MapFile.java |  4 +--
 .../hadoop/fs/TestDelegationTokenRenewer.java   |  3 +--
 .../hadoop/hdfs/web/TestWebHdfsTokens.java  | 27 +---
 3 files changed, 15 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6eb700ee/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
index ee76458..96a4189 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
@@ -990,8 +990,8 @@ public class MapFile {
 reader.getKeyClass().asSubclass(WritableComparable.class),
 reader.getValueClass());
 
-  WritableComparable key = ReflectionUtils.newInstance(reader.getKeyClass()
-.asSubclass(WritableComparable.class), conf);
+  WritableComparable key = ReflectionUtils.newInstance(
+  reader.getKeyClass().asSubclass(WritableComparable.class), conf);
   Writable value = ReflectionUtils.newInstance(reader.getValueClass()
 .asSubclass(Writable.class), conf);
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6eb700ee/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
index 6b66222..582bc31 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
@@ -49,7 +49,6 @@ public class TestDelegationTokenRenewer {
 renewer = DelegationTokenRenewer.getInstance();
   }
   
-  @SuppressWarnings("unchecked")
   @Test
   public void testAddRemoveRenewAction() throws IOException,
   InterruptedException {
@@ -81,7 +80,7 @@ public class TestDelegationTokenRenewer {
 verify(token).cancel(eq(conf));
 
 verify(fs, never()).getDelegationToken(null);
-verify(fs, never()).setDelegationToken(any(Token.class));
+verify(fs, never()).setDelegationToken(any());
 
 assertEquals("FileSystem not removed from DelegationTokenRenewer", 0,
 renewer.getRenewQueueLength());

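The HDFS-10866 fix replaces `any(Token.class)` with the inferred `any()`, which is why the `@SuppressWarnings("unchecked")` annotations can be dropped. The sketch below models only the type-inference point; the `any()` and `setDelegationToken` here are simplified stand-ins, not Mockito's or Hadoop's real methods (the real matcher also records internal state):

```java
public class InferenceDemo {
    // A generic method whose type parameter appears only in the return type
    // is inferred from the call site, so no raw type or unchecked cast occurs.
    static <T> T any() {
        return null; // Mockito's real any() also registers a matcher; elided here
    }

    // Models the mocked method under verification.
    static void setDelegationToken(String token) {
        // no-op for this sketch
    }

    public static void main(String[] args) {
        // Old style: (Token) any(Token.class) forced an unchecked cast and a
        // @SuppressWarnings("unchecked") that Eclipse's compiler flagged.
        // New style: T is inferred as String from the argument position.
        setDelegationToken(any());
        System.out.println("compiles without unchecked warnings");
    }
}
```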
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6eb700ee/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
index 6192ad9..058f63f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
@@ -127,7 +127,7 @@ public class TestWebHdfsTokens {
 fs.toUrl(op, null);
 verify(fs, never()).getDelegationToken();
 verify(fs, never()).getDelegationToken(null);
-verify(fs, never()).setDelegationToken((Token)any(Token.class));
+verify(fs, never()).setDelegationToken(any());
   }
 
   @Test(timeout = 1000)
@@ -160,7 +160,6 @@ public class TestWebHdfsTokens {
 }
   }
   
-  @SuppressWarnings("unchecked") // for any(Token.class)
   @Test
   public void testLazyTokenFetchForWebhdfs() throws Exception {
 MiniDFSCluster cluster = null;
@@ -190,7 +189,6 @@ public class TestWebHdfsTokens {
 }
   }
   
-  @SuppressWarnings("unchecked") // for any(Token.class)
   @Test
   public void testLazyTokenFetchForSWebhdfs() throws 

[50/50] [abbrv] hadoop git commit: HADOOP-13584. hadoop-aliyun: merge HADOOP-12756 branch back.

2016-09-26 Thread jianhe
HADOOP-13584. hadoop-aliyun: merge HADOOP-12756 branch back.

HADOOP-12756 branch: Incorporate Aliyun OSS file system implementation. 
Contributors:
Mingfei Shi (mingfei@intel.com)
Genmao Yu (genmao@alibaba-inc.com)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5707f88d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5707f88d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5707f88d

Branch: refs/heads/MAPREDUCE-6608
Commit: 5707f88d8550346f167e45c2f8c4161eb3957e3a
Parents: e4e72db
Author: Kai Zheng 
Authored: Mon Sep 26 20:42:22 2016 +0800
Committer: Kai Zheng 
Committed: Mon Sep 26 20:42:22 2016 +0800

--
 .gitignore  |   2 +
 hadoop-project/pom.xml  |  22 +
 .../dev-support/findbugs-exclude.xml|  18 +
 hadoop-tools/hadoop-aliyun/pom.xml  | 154 ++
 .../aliyun/oss/AliyunCredentialsProvider.java   |  87 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  | 543 +++
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 516 ++
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 260 +
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 167 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  | 113 
 .../hadoop/fs/aliyun/oss/package-info.java  |  22 +
 .../site/markdown/tools/hadoop-aliyun/index.md  | 294 ++
 .../fs/aliyun/oss/AliyunOSSTestUtils.java   |  77 +++
 .../fs/aliyun/oss/TestAliyunCredentials.java|  78 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 239 
 .../oss/TestAliyunOSSFileSystemStore.java   | 125 +
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java | 145 +
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 
 .../aliyun/oss/contract/AliyunOSSContract.java  |  49 ++
 .../contract/TestAliyunOSSContractCreate.java   |  35 ++
 .../contract/TestAliyunOSSContractDelete.java   |  34 ++
 .../contract/TestAliyunOSSContractDistCp.java   |  44 ++
 .../TestAliyunOSSContractGetFileStatus.java |  35 ++
 .../contract/TestAliyunOSSContractMkdir.java|  34 ++
 .../oss/contract/TestAliyunOSSContractOpen.java |  34 ++
 .../contract/TestAliyunOSSContractRename.java   |  35 ++
 .../contract/TestAliyunOSSContractRootDir.java  |  69 +++
 .../oss/contract/TestAliyunOSSContractSeek.java |  34 ++
 .../src/test/resources/contract/aliyun-oss.xml  | 115 
 .../src/test/resources/core-site.xml|  46 ++
 .../src/test/resources/log4j.properties |  23 +
 hadoop-tools/hadoop-tools-dist/pom.xml  |   6 +
 hadoop-tools/pom.xml|   1 +
 34 files changed, 3658 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5707f88d/.gitignore
--
diff --git a/.gitignore b/.gitignore
index a5d69d0..194862b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,3 +31,5 @@ hadoop-tools/hadoop-aws/src/test/resources/auth-keys.xml
 hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml
 hadoop-tools/hadoop-azure/src/test/resources/azure-auth-keys.xml
 patchprocess/
+hadoop-tools/hadoop-aliyun/src/test/resources/auth-keys.xml
+hadoop-tools/hadoop-aliyun/src/test/resources/contract-test-options.xml

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5707f88d/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index d9a01a0..49ea40f 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -439,6 +439,12 @@
 
       <dependency>
         <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-aliyun</artifactId>
+        <version>${project.version}</version>
+      </dependency>
+
+      <dependency>
+        <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-kms</artifactId>
         <version>${project.version}</version>
         <classifier>classes</classifier>
@@ -1005,6 +1011,22 @@
         <version>4.2.0</version>
       </dependency>
 
+      <dependency>
+        <groupId>com.aliyun.oss</groupId>
+        <artifactId>aliyun-sdk-oss</artifactId>
+        <version>2.2.1</version>
+        <exclusions>
+          <exclusion>
+            <groupId>org.apache.httpcomponents</groupId>
+            <artifactId>httpclient</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>commons-beanutils</groupId>
+            <artifactId>commons-beanutils</artifactId>
+          </exclusion>
+        </exclusions>
+      </dependency>
+
       <dependency>
         <groupId>xerces</groupId>
         <artifactId>xercesImpl</artifactId>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5707f88d/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
new file mode 100644
index 000..40d78d0
--- /dev/null
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -0,0 +1,18 @@
+
+
+


[41/50] [abbrv] hadoop git commit: HDFS-10843. Update space quota when a UC block is completed rather than committed. Contributed by Erik Krogen.

2016-09-26 Thread jianhe
HDFS-10843. Update space quota when a UC block is completed rather than 
committed. Contributed by Erik Krogen.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a5bb88c8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a5bb88c8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a5bb88c8

Branch: refs/heads/MAPREDUCE-6608
Commit: a5bb88c8e0fd4bd19b6d377fecbe1d2d441514f6
Parents: bbdf350
Author: Konstantin V Shvachko 
Authored: Thu Sep 22 13:57:37 2016 -0700
Committer: Konstantin V Shvachko 
Committed: Fri Sep 23 10:37:46 2016 -0700

--
 .../server/blockmanagement/BlockManager.java|  39 ++-
 .../hdfs/server/namenode/FSDirAttrOp.java   |  12 +-
 .../hdfs/server/namenode/FSDirectory.java   |  47 +++
 .../hdfs/server/namenode/FSNamesystem.java  |  39 +--
 .../hadoop/hdfs/server/namenode/Namesystem.java |   2 +
 .../namenode/TestDiskspaceQuotaUpdate.java  | 301 ++-
 6 files changed, 313 insertions(+), 127 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a5bb88c8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 3a12d74..886984a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -83,6 +83,7 @@ import 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.namenode.CachedBlock;
 import org.apache.hadoop.hdfs.server.namenode.INode.BlocksMapUpdateInfo;
+import org.apache.hadoop.hdfs.server.namenode.INodesInPath;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.Namesystem;
 import org.apache.hadoop.hdfs.server.namenode.ha.HAContext;
@@ -786,12 +787,13 @@ public class BlockManager implements BlockStatsMXBean {
* 
* @param bc block collection
* @param commitBlock - contains client reported block length and generation
+   * @param iip - INodes in path to bc
* @return true if the last block is changed to committed state.
* @throws IOException if the block does not have at least a minimal number
* of replicas reported from data-nodes.
*/
   public boolean commitOrCompleteLastBlock(BlockCollection bc,
-  Block commitBlock) throws IOException {
+  Block commitBlock, INodesInPath iip) throws IOException {
 if(commitBlock == null)
   return false; // not committing, this is a block allocation retry
 BlockInfo lastBlock = bc.getLastBlock();
@@ -811,7 +813,7 @@ public class BlockManager implements BlockStatsMXBean {
   if (committed) {
 addExpectedReplicasToPending(lastBlock);
   }
-  completeBlock(lastBlock, false);
+  completeBlock(lastBlock, iip, false);
 }
 return committed;
   }
@@ -841,11 +843,15 @@ public class BlockManager implements BlockStatsMXBean {
 
   /**
* Convert a specified block of the file to a complete block.
+   * @param curBlock - block to be completed
+   * @param iip - INodes in path to file containing curBlock; if null,
+   *  this will be resolved internally
+   * @param force - force completion of the block
* @throws IOException if the block does not have at least a minimal number
* of replicas reported from data-nodes.
*/
-  private void completeBlock(BlockInfo curBlock, boolean force)
-  throws IOException {
+  private void completeBlock(BlockInfo curBlock, INodesInPath iip,
+  boolean force) throws IOException {
 if (curBlock.isComplete()) {
   return;
 }
@@ -860,7 +866,8 @@ public class BlockManager implements BlockStatsMXBean {
   "Cannot complete block: block has not been COMMITTED by the client");
 }
 
-curBlock.convertToCompleteBlock();
+convertToCompleteBlock(curBlock, iip);
+
 // Since safe-mode only counts complete blocks, and we now have
 // one more complete block, we need to adjust the total up, and
 // also count it as safe, if we have at least the minimum replica
@@ -875,13 +882,29 @@ public class BlockManager implements BlockStatsMXBean {
   }
 
   /**
+   * Convert a specified block of the file to a complete block.
+   * Skips validity checking and safe mode block total 

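The HDFS-10843 change above moves the space-quota adjustment from block commit to block completion. The exact bookkeeping lives in `FSDirectory`; the sketch below only illustrates the underlying arithmetic, and its method name and formula are assumptions made for illustration: quota is reserved for an under-construction block at the full preferred block size, then corrected by the (usually negative) difference once the real length is known.

```java
public class QuotaDeltaDemo {
    // Correction applied to reserved disk-space quota when an
    // under-construction block completes with its final length.
    static long completionDelta(long actualLen, long preferredBlockSize,
                                short replication) {
        return (actualLen - preferredBlockSize) * replication;
    }

    public static void main(String[] args) {
        // A 1 MB final block in a file with 128 MB preferred block size,
        // replication 3: most of the reservation is released.
        System.out.println(completionDelta(1L << 20, 128L << 20, (short) 3));
    }
}
```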
[28/50] [abbrv] hadoop git commit: MAPREDUCE-6740. Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval. (Haibo Chen via kasha)

2016-09-26 Thread jianhe
MAPREDUCE-6740. Enforce mapreduce.task.timeout to be at least 
mapreduce.task.progress-report.interval. (Haibo Chen via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/537095d1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/537095d1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/537095d1

Branch: refs/heads/MAPREDUCE-6608
Commit: 537095d13cd38212ed162e0a360bdd9a8bd83498
Parents: 964e546
Author: Karthik Kambatla 
Authored: Wed Sep 21 18:30:11 2016 -0700
Committer: Karthik Kambatla 
Committed: Wed Sep 21 18:30:11 2016 -0700

--
 .../mapreduce/v2/app/TaskHeartbeatHandler.java  | 24 ++-
 .../v2/app/TestTaskHeartbeatHandler.java| 67 
 .../java/org/apache/hadoop/mapred/Task.java |  8 ++-
 .../apache/hadoop/mapreduce/MRJobConfig.java|  9 ++-
 .../hadoop/mapreduce/util/MRJobConfUtil.java| 16 +
 5 files changed, 113 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/537095d1/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
index 303b4c1..6a716c7 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
@@ -23,10 +23,12 @@ import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.util.MRJobConfUtil;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
 import 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent;
@@ -67,7 +69,7 @@ public class TaskHeartbeatHandler extends AbstractService {
   //received from a task.
   private Thread lostTaskCheckerThread;
   private volatile boolean stopped;
-  private int taskTimeOut = 5 * 60 * 1000;// 5 mins
+  private long taskTimeOut;
   private int taskTimeOutCheckInterval = 30 * 1000; // 30 seconds.
 
   private final EventHandler eventHandler;
@@ -87,7 +89,19 @@ public class TaskHeartbeatHandler extends AbstractService {
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
 super.serviceInit(conf);
-taskTimeOut = conf.getInt(MRJobConfig.TASK_TIMEOUT, 5 * 60 * 1000);
+taskTimeOut = conf.getLong(
+MRJobConfig.TASK_TIMEOUT, MRJobConfig.DEFAULT_TASK_TIMEOUT_MILLIS);
+
+// enforce task timeout is at least twice as long as task report interval
+long taskProgressReportIntervalMillis = MRJobConfUtil.
+getTaskProgressReportInterval(conf);
+long minimumTaskTimeoutAllowed = taskProgressReportIntervalMillis * 2;
+if(taskTimeOut < minimumTaskTimeoutAllowed) {
+  taskTimeOut = minimumTaskTimeoutAllowed;
+  LOG.info("Task timeout must be at least twice as long as the task " +
+  "status report interval. Setting task timeout to " + taskTimeOut);
+}
+
 taskTimeOutCheckInterval =
 conf.getInt(MRJobConfig.TASK_TIMEOUT_CHECK_INTERVAL_MS, 30 * 1000);
   }
@@ -140,7 +154,7 @@ public class TaskHeartbeatHandler extends AbstractService {
 
 while (iterator.hasNext()) {
   Map.Entry<TaskAttemptId, ReportTime> entry = iterator.next();
-  boolean taskTimedOut = (taskTimeOut > 0) && 
+  boolean taskTimedOut = (taskTimeOut > 0) &&
   (currentTime > (entry.getValue().getLastProgress() + 
taskTimeOut));

   if(taskTimedOut) {
@@ -163,4 +177,8 @@ public class TaskHeartbeatHandler extends AbstractService {
 }
   }
 
+  @VisibleForTesting
+  public long getTaskTimeOut() {
+return taskTimeOut;
+  }
 }

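The MAPREDUCE-6740 logic above clamps the configured timeout so a task can always miss at least one progress report before being declared lost. The clamp reduces to a single `max`, sketched here with illustrative names (the real code also logs when it overrides the configured value):

```java
public class TimeoutClampDemo {
    // Effective task timeout: never less than twice the progress-report
    // interval, regardless of what mapreduce.task.timeout was set to.
    static long effectiveTaskTimeout(long configuredTimeoutMs,
                                     long reportIntervalMs) {
        long minimumAllowed = reportIntervalMs * 2;
        return Math.max(configuredTimeoutMs, minimumAllowed);
    }

    public static void main(String[] args) {
        System.out.println(effectiveTaskTimeout(300_000, 60_000)); // 300000: config wins
        System.out.println(effectiveTaskTimeout(30_000, 60_000));  // 120000: clamped up
    }
}
```

Without the clamp, a timeout shorter than one report interval would kill healthy tasks that simply had not been scheduled to report yet.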

[26/50] [abbrv] hadoop git commit: HDFS-9333. Some tests using MiniDFSCluster errored complaining port in use. (iwasakims)

2016-09-26 Thread jianhe
HDFS-9333. Some tests using MiniDFSCluster errored complaining port in use. 
(iwasakims)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/964e546a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/964e546a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/964e546a

Branch: refs/heads/MAPREDUCE-6608
Commit: 964e546ab1dba5f5d53b209ec6c9a70a85654765
Parents: 5a58bfe
Author: Masatake Iwasaki 
Authored: Wed Sep 21 10:35:25 2016 +0900
Committer: Masatake Iwasaki 
Committed: Wed Sep 21 10:35:25 2016 +0900

--
 .../blockmanagement/TestBlockTokenWithDFS.java  |  8 ++-
 .../TestBlockTokenWithDFSStriped.java   | 23 +++-
 .../hdfs/tools/TestDFSZKFailoverController.java | 18 ++-
 3 files changed, 42 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/964e546a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
index e7e7739..9374ae8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
@@ -61,6 +61,7 @@ import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.log4j.Level;
@@ -349,7 +350,12 @@ public class TestBlockTokenWithDFS {
 Configuration conf = getConf(numDataNodes);
 
 try {
-  cluster = new 
MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
+  // prefer non-ephemeral port to avoid port collision on restartNameNode
+  cluster = new MiniDFSCluster.Builder(conf)
+  .nameNodePort(ServerSocketUtil.getPort(19820, 100))
+  .nameNodeHttpPort(ServerSocketUtil.getPort(19870, 100))
+  .numDataNodes(numDataNodes)
+  .build();
   cluster.waitActive();
   assertEquals(numDataNodes, cluster.getDataNodes().size());
   doTestRead(conf, cluster, false);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/964e546a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
index 64a48c2..1714561 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
 import org.apache.hadoop.hdfs.server.balancer.TestBalancer;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.Timeout;
@@ -59,7 +60,27 @@ public class TestBlockTokenWithDFSStriped extends 
TestBlockTokenWithDFS {
   @Override
   public void testRead() throws Exception {
 conf = getConf();
-cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDNs).build();
+
+/*
+ * prefer non-ephemeral port to avoid conflict with tests using
+ * ephemeral ports on MiniDFSCluster#restartDataNode(true).
+ */
+Configuration[] overlays = new Configuration[numDNs];
+for (int i = 0; i < overlays.length; i++) {
+  int offset = i * 10;
+  Configuration c = new Configuration();
+  c.set(DFSConfigKeys.DFS_DATANODE_ADDRESS_KEY, "127.0.0.1:"
+  + ServerSocketUtil.getPort(19866 + offset, 100));
+  c.set(DFSConfigKeys.DFS_DATANODE_IPC_ADDRESS_KEY, "127.0.0.1:"
+  + ServerSocketUtil.getPort(19867 + offset, 100));
+  overlays[i] = c;
+}
+
+cluster = new 

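The HDFS-9333 fix above asks `ServerSocketUtil.getPort(base, retries)` for a fixed, non-ephemeral port so a restarted NameNode or DataNode can rebind the same address. A simplified sketch of the probe-and-retry idea (the real utility's search strategy may differ; names here are illustrative):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortDemo {
    // Try to bind the preferred port, scanning upward a bounded number of
    // times before giving up. Note the inherent race: the port is released
    // again when the probe socket closes, just as with ServerSocketUtil.
    static int findPort(int preferred, int retries) {
        for (int i = 0; i <= retries; i++) {
            int candidate = preferred + i;
            try (ServerSocket s = new ServerSocket(candidate)) {
                return s.getLocalPort(); // equals candidate unless candidate == 0
            } catch (IOException inUse) {
                // busy or not bindable; try the next candidate
            }
        }
        throw new IllegalStateException("no free port near " + preferred);
    }

    public static void main(String[] args) {
        System.out.println(findPort(0, 0) > 0); // 0 requests any free ephemeral port
    }
}
```

Preferring a known base port over an ephemeral one matters on restart: an ephemeral port freed by the old process can be grabbed by any other process before the new one binds.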
[40/50] [abbrv] hadoop git commit: HDFS-10886. Replace fs.default.name with fs.defaultFS in viewfs document.

2016-09-26 Thread jianhe
HDFS-10886. Replace fs.default.name with fs.defaultFS in viewfs document.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bbdf350f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bbdf350f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bbdf350f

Branch: refs/heads/MAPREDUCE-6608
Commit: bbdf350ff9fb624fe736c1eb9271c5dcb4e14b06
Parents: c4480f7
Author: Brahma Reddy Battula 
Authored: Fri Sep 23 19:21:34 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Sep 23 19:21:34 2016 +0530

--
 .../src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java  | 2 +-
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbdf350f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 23465a6..3beda53 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -83,7 +83,7 @@ import org.apache.hadoop.util.Time;
  * ViewFs is specified with the following URI: viewfs:/// 
  * 
  * To use viewfs one would typically set the default file system in the
- * config  (i.e. fs.default.name = viewfs:///) along with the
+ * config  (i.e. fs.defaultFS = viewfs:///) along with the
  * mount table config variables as described below. 
  * 
  * 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbdf350f/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
index 5f88def..ddec9b0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
@@ -108,7 +108,7 @@ The mount points of a mount table are specified in the 
standard Hadoop configura
 
 ```xml
 <property>
-  <name>fs.default.name</name>
+  <name>fs.defaultFS</name>
   <value>viewfs://clusterX</value>
 </property>
 ```





[19/50] [abbrv] hadoop git commit: YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. Contributed by Wangda Tan

2016-09-26 Thread jianhe
YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. Contributed 
by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2b66d9ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2b66d9ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2b66d9ec

Branch: refs/heads/MAPREDUCE-6608
Commit: 2b66d9ec5bdaec7e6b278926fbb6f222c4e3afaa
Parents: e52d6e7
Author: Jian He 
Authored: Tue Sep 20 15:03:07 2016 +0800
Committer: Jian He 
Committed: Tue Sep 20 15:03:31 2016 +0800

--
 .../dev-support/findbugs-exclude.xml|   10 +
 .../scheduler/capacity/AbstractCSQueue.java |  378 ++--
 .../scheduler/capacity/LeafQueue.java   | 1819 ++
 .../scheduler/capacity/ParentQueue.java |  825 
 .../scheduler/capacity/PlanQueue.java   |  122 +-
 .../scheduler/capacity/ReservationQueue.java|   67 +-
 .../capacity/TestContainerResizing.java |4 +-
 7 files changed, 1787 insertions(+), 1438 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b66d9ec/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index a5c0f71..01b1da7 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -564,4 +564,14 @@
     
     
   
+
+  
+  
+
+    
+  
+  
+    
+
+  
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b66d9ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
index 1d8f929..096f5ea 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
@@ -24,6 +24,7 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Set;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
@@ -60,25 +61,25 @@ import com.google.common.collect.Sets;
 
 public abstract class AbstractCSQueue implements CSQueue {
   private static final Log LOG = LogFactory.getLog(AbstractCSQueue.class);  
-  CSQueue parent;
+  volatile CSQueue parent;
   final String queueName;
   volatile int numContainers;
   
   final Resource minimumAllocation;
   volatile Resource maximumAllocation;
-  QueueState state;
+  volatile QueueState state;
   final CSQueueMetrics metrics;
   protected final PrivilegedEntity queueEntity;
 
   final ResourceCalculator resourceCalculator;
   Set accessibleLabels;
-  RMNodeLabelsManager labelManager;
+  final RMNodeLabelsManager labelManager;
   String defaultLabelExpression;
   
   Map acls = 
   new HashMap();
   volatile boolean reservationsContinueLooking;
-  private boolean preemptionDisabled;
+  private volatile boolean preemptionDisabled;
 
   // Track resource usage-by-label like used-resource/pending-resource, etc.
   volatile ResourceUsage queueUsage;
@@ -94,6 +95,9 @@ public abstract class AbstractCSQueue implements CSQueue {
 
   protected ActivitiesManager activitiesManager;
 
+  protected ReentrantReadWriteLock.ReadLock readLock;
+  protected ReentrantReadWriteLock.WriteLock writeLock;
+
   public AbstractCSQueue(CapacitySchedulerContext cs,
   String queueName, CSQueue parent, CSQueue old) throws IOException {
 this.labelManager = cs.getRMContext().getNodeLabelManager();
@@ -116,7 +120,11 @@ public abstract class AbstractCSQueue implements CSQueue {
 queueEntity = new PrivilegedEntity(EntityType.QUEUE, getQueuePath());
 
 // initialize QueueCapacities
-
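The AbstractCSQueue hunk above introduces a ReentrantReadWriteLock read/write pair to guard queue state. A minimal, self-contained sketch of that locking pattern, using a hypothetical queue class rather than YARN's real one (all names below are illustrative, not the patch's):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the read/write-lock pattern the patch introduces.
// SketchQueue and its fields are stand-ins, not YARN classes.
class SketchQueue {
    enum QueueState { RUNNING, STOPPED }

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = lock.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = lock.writeLock();

    private QueueState state = QueueState.RUNNING;
    private int numContainers = 0;

    // Frequent readers share the read lock...
    QueueState getState() {
        readLock.lock();
        try {
            return state;
        } finally {
            readLock.unlock();
        }
    }

    int getNumContainers() {
        readLock.lock();
        try {
            return numContainers;
        } finally {
            readLock.unlock();
        }
    }

    // ...while reconfiguration takes the exclusive write lock.
    void stop() {
        writeLock.lock();
        try {
            state = QueueState.STOPPED;
            numContainers = 0;
        } finally {
            writeLock.unlock();
        }
    }
}
```

Getters never block one another under the read lock; only state changes serialize, which is the point of moving away from coarse `synchronized` methods.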

[45/50] [abbrv] hadoop git commit: YARN-5664. Fix Yarn documentation to link to correct versions. Contributed by Xiao Chen

2016-09-26 Thread jianhe
YARN-5664. Fix Yarn documentation to link to correct versions. Contributed by 
Xiao Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4d16d2d7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4d16d2d7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4d16d2d7

Branch: refs/heads/MAPREDUCE-6608
Commit: 4d16d2d72f085e4efb6db27d397802b21df18815
Parents: 6eb700e
Author: Naganarasimha 
Authored: Sat Sep 24 23:21:03 2016 +0530
Committer: Naganarasimha 
Committed: Sat Sep 24 23:21:03 2016 +0530

--
 .../hadoop-yarn-site/src/site/markdown/CapacityScheduler.md| 2 +-
 .../hadoop-yarn-site/src/site/markdown/NodeLabel.md| 2 +-
 .../hadoop-yarn-site/src/site/markdown/ReservationSystem.md| 2 +-
 .../src/site/markdown/WritingYarnApplications.md   | 6 +++---
 .../hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md | 6 +++---
 5 files changed, 9 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d16d2d7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
index 6aa7007..9c9b03e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
@@ -39,7 +39,7 @@ Overview
 
 The `CapacityScheduler` is designed to run Hadoop applications as a shared, 
multi-tenant cluster in an operator-friendly manner while maximizing the 
throughput and the utilization of the cluster.
 
-Traditionally each organization has it own private set of compute resources 
that have sufficient capacity to meet the organization's SLA under peak or near 
peak conditions. This generally leads to poor average utilization and overhead 
of managing multiple independent clusters, one per each organization. Sharing 
clusters between organizations is a cost-effective manner of running large 
Hadoop installations since this allows them to reap benefits of economies of 
scale without creating private clusters. However, organizations are concerned 
about sharing a cluster because they are worried about others using the 
resources that are critical for their SLAs.
+Traditionally each organization has it own private set of compute resources 
that have sufficient capacity to meet the organization's SLA under peak or 
near-peak conditions. This generally leads to poor average utilization and 
overhead of managing multiple independent clusters, one per each organization. 
Sharing clusters between organizations is a cost-effective manner of running 
large Hadoop installations since this allows them to reap benefits of economies 
of scale without creating private clusters. However, organizations are 
concerned about sharing a cluster because they are worried about others using 
the resources that are critical for their SLAs.
 
 The `CapacityScheduler` is designed to allow sharing a large cluster while 
giving each organization capacity guarantees. The central idea is that the 
available resources in the Hadoop cluster are shared among multiple 
organizations who collectively fund the cluster based on their computing needs. 
There is an added benefit that an organization can access any excess capacity 
not being used by others. This provides elasticity for the organizations in a 
cost-effective manner.
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d16d2d7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md
index 1fecf07..af75bfe 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md
@@ -192,5 +192,5 @@ Following label-related fields can be seen on web UI:
 Useful links
 
 
-* [YARN Capacity 
Scheduler](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html),
 if you need more understanding about how to configure Capacity Scheduler
+* [YARN Capacity Scheduler](./CapacityScheduler.html), if you need more 
understanding about how to configure Capacity Scheduler
 * Write YARN application using node labels, you can see following two links as 

[38/50] [abbrv] hadoop git commit: HADOOP-13643. Math error in AbstractContractDistCpTest. Contributed by Aaron Fabbri.

2016-09-26 Thread jianhe
HADOOP-13643. Math error in AbstractContractDistCpTest. Contributed by Aaron 
Fabbri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e5ef51e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e5ef51e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e5ef51e7

Branch: refs/heads/MAPREDUCE-6608
Commit: e5ef51e717647328db9f2b80f21fe44b99079d08
Parents: b8a2d7b
Author: Steve Loughran 
Authored: Fri Sep 23 10:00:32 2016 +0100
Committer: Steve Loughran 
Committed: Fri Sep 23 10:01:30 2016 +0100

--
 .../apache/hadoop/tools/contract/AbstractContractDistCpTest.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5ef51e7/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
index a4f50c7..21a14d3 100644
--- 
a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
+++ 
b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
@@ -160,7 +160,7 @@ public abstract class AbstractContractDistCpTest
 Path inputFile3 = new Path(inputDir, "file3");
 mkdirs(srcFS, inputDir);
 int fileSizeKb = conf.getInt("scale.test.distcp.file.size.kb", 10 * 1024);
-int fileSizeMb = fileSizeKb * 1024;
+int fileSizeMb = fileSizeKb / 1024;
 getLog().info("{} with file size {}", testName.getMethodName(), 
fileSizeMb);
 byte[] data1 = dataset((fileSizeMb + 1) * 1024 * 1024, 33, 43);
 createFile(srcFS, inputFile1, true, data1);
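The one-character fix above corrects a unit conversion: a size configured in kilobytes must be divided, not multiplied, by 1024 to yield megabytes. A small sketch of the corrected arithmetic (helper names here are illustrative, not the test's API):

```java
// Unit conversions around the fixed line: KB -> MB by integer
// division, MB -> bytes by multiplication (widened to long to
// avoid int overflow for large sizes).
class SizeConv {
    static int kbToMb(int fileSizeKb) {
        return fileSizeKb / 1024;   // the bug multiplied instead
    }

    static long mbToBytes(int mb) {
        return (long) mb * 1024 * 1024;
    }
}
```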


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[14/50] [abbrv] hadoop git commit: YARN-5540. Scheduler spends too much time looking at empty priorities. Contributed by Jason Lowe

2016-09-26 Thread jianhe
YARN-5540. Scheduler spends too much time looking at empty priorities. 
Contributed by Jason Lowe


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7558dbbb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7558dbbb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7558dbbb

Branch: refs/heads/MAPREDUCE-6608
Commit: 7558dbbb481eab055e794beb3603bbe5671a4b4c
Parents: c54f6ef
Author: Jason Lowe 
Authored: Mon Sep 19 20:31:35 2016 +
Committer: Jason Lowe 
Committed: Mon Sep 19 20:31:35 2016 +

--
 .../scheduler/AppSchedulingInfo.java| 96 +++-
 .../scheduler/TestAppSchedulingInfo.java| 65 +
 2 files changed, 118 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7558dbbb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
index c677345..39820f7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
@@ -26,8 +26,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.TreeMap;
-import java.util.TreeSet;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentSkipListMap;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 
@@ -59,7 +59,6 @@ import org.apache.hadoop.yarn.util.resource.Resources;
 public class AppSchedulingInfo {
   
   private static final Log LOG = LogFactory.getLog(AppSchedulingInfo.class);
-  private static final int EPOCH_BIT_SHIFT = 40;
 
   private final ApplicationId applicationId;
   private final ApplicationAttemptId applicationAttemptId;
@@ -79,7 +78,8 @@ public class AppSchedulingInfo {
 
   private Set requestedPartitions = new HashSet<>();
 
-  final Set<SchedulerRequestKey> schedulerKeys = new TreeSet<>();
+  private final ConcurrentSkipListMap<SchedulerRequestKey, Integer>
+  schedulerKeys = new ConcurrentSkipListMap<>();
   final Map>
   resourceRequestMap = new ConcurrentHashMap<>();
   final Map();
   requestsOnNode.put(schedulerKey, requestsOnNodeWithPriority);
+  incrementSchedulerKeyReference(schedulerKey);
 }
 
 requestsOnNodeWithPriority.put(containerId, request);
@@ -250,11 +251,30 @@ public class AppSchedulingInfo {
   LOG.debug("Added increase request:" + request.getContainerId()
   + " delta=" + delta);
 }
-
-// update Scheduler Keys
-schedulerKeys.add(schedulerKey);
   }
-  
+
+  private void incrementSchedulerKeyReference(
+  SchedulerRequestKey schedulerKey) {
+Integer schedulerKeyCount = schedulerKeys.get(schedulerKey);
+if (schedulerKeyCount == null) {
+  schedulerKeys.put(schedulerKey, 1);
+} else {
+  schedulerKeys.put(schedulerKey, schedulerKeyCount + 1);
+}
+  }
+
+  private void decrementSchedulerKeyReference(
+  SchedulerRequestKey schedulerKey) {
+Integer schedulerKeyCount = schedulerKeys.get(schedulerKey);
+if (schedulerKeyCount != null) {
+  if (schedulerKeyCount > 1) {
+schedulerKeys.put(schedulerKey, schedulerKeyCount - 1);
+  } else {
+schedulerKeys.remove(schedulerKey);
+  }
+}
+  }
+
   public synchronized boolean removeIncreaseRequest(NodeId nodeId,
   SchedulerRequestKey schedulerKey, ContainerId containerId) {
 Map>
@@ -275,6 +295,7 @@ public class AppSchedulingInfo {
 // remove hierarchies if it becomes empty
 if (requestsOnNodeWithPriority.isEmpty()) {
   requestsOnNode.remove(schedulerKey);
+  decrementSchedulerKeyReference(schedulerKey);
 }
 if 
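The patch replaces a plain set of scheduler keys with a reference-counted ConcurrentSkipListMap, so a key is dropped only when its last outstanding request goes away. A stripped-down sketch of that counting scheme, using plain String keys instead of SchedulerRequestKey (all names here are illustrative):

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Reference-counted key set: a key is "live" while its count > 0.
// Mirrors the increment/decrement helpers in the patch, but with
// String keys and atomic map operations.
class KeyRefCounter {
    private final ConcurrentSkipListMap<String, Integer> keys =
        new ConcurrentSkipListMap<>();

    void increment(String key) {
        // merge() atomically inserts the key or bumps its count.
        keys.merge(key, 1, Integer::sum);
    }

    void decrement(String key) {
        // Returning null from the remapping function removes the entry.
        keys.computeIfPresent(key, (k, count) -> count > 1 ? count - 1 : null);
    }

    boolean isLive(String key) {
        return keys.containsKey(key);
    }

    // Sorted iteration is why a skip-list map is used instead of a
    // hash map: the scheduler walks keys in priority order.
    String firstKey() {
        return keys.isEmpty() ? null : keys.firstKey();
    }
}
```

The sketch uses atomic `merge`/`computeIfPresent`; the patch itself does a get-then-put, which is safe only because its callers are synchronized.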

[36/50] [abbrv] hadoop git commit: Revert "TimelineClient failed to retry on java.net.SocketTimeoutException: Read timed out"

2016-09-26 Thread jianhe
Revert "TimelineClient failed to retry on java.net.SocketTimeoutException: Read 
timed out"

This reverts commit 2e6ee957161ab63a02a7861b727efa6310b275b2.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3293a7d9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3293a7d9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3293a7d9

Branch: refs/heads/MAPREDUCE-6608
Commit: 3293a7d92d563e45dd69be4ed60c01ec94af8a21
Parents: 2e6ee95
Author: Varun Saxena 
Authored: Fri Sep 23 13:25:46 2016 +0530
Committer: Varun Saxena 
Committed: Fri Sep 23 13:25:46 2016 +0530

--
 .../apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java| 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3293a7d9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
index dc4d3e6..56da2a7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
@@ -265,8 +265,7 @@ public class TimelineClientImpl extends TimelineClient {
 public boolean shouldRetryOn(Exception e) {
   // Only retry on connection exceptions
   return (e instanceof ClientHandlerException)
-  && (e.getCause() instanceof ConnectException ||
-  e.getCause() instanceof SocketTimeoutException);
+  && (e.getCause() instanceof ConnectException);
 }
   };
   try {
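The revert above narrows the client's `shouldRetryOn()` predicate back to connection failures only. The machinery around that hook boils down to a retry-on-predicate loop; a generic, hypothetical sketch (not TimelineClient's actual code):

```java
import java.util.function.Predicate;

// Generic retry loop in the spirit of the client's shouldRetryOn()
// hook. All names here are illustrative.
class RetryRunner {
    interface Action<T> {
        T run() throws Exception;
    }

    static <T> T runWithRetries(Action<T> action, int maxRetries,
                                Predicate<Exception> shouldRetryOn) {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.run();
            } catch (Exception e) {
                last = e;
                if (!shouldRetryOn.test(e)) {
                    break;  // non-retriable: stop immediately
                }
            }
        }
        // Either retries were exhausted or the predicate rejected.
        throw new RuntimeException("giving up after retries", last);
    }
}
```

A narrower predicate (e.g. only `ConnectException` causes, as in the revert) means read timeouts surface to the caller instead of being retried.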





[15/50] [abbrv] hadoop git commit: HADOOP-13169. Randomize file list in SimpleCopyListing. Contributed by Rajesh Balamohan.

2016-09-26 Thread jianhe
HADOOP-13169. Randomize file list in SimpleCopyListing. Contributed by Rajesh 
Balamohan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/98bdb513
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/98bdb513
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/98bdb513

Branch: refs/heads/MAPREDUCE-6608
Commit: 98bdb5139769eb55893971b43b9c23da9513a784
Parents: 7558dbb
Author: Chris Nauroth 
Authored: Mon Sep 19 15:16:47 2016 -0700
Committer: Chris Nauroth 
Committed: Mon Sep 19 15:16:47 2016 -0700

--
 .../apache/hadoop/tools/DistCpConstants.java|   4 +
 .../apache/hadoop/tools/SimpleCopyListing.java  | 114 +--
 .../apache/hadoop/tools/TestCopyListing.java|  83 +-
 3 files changed, 189 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/98bdb513/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
index 95d26df..96f364c 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
@@ -58,6 +58,10 @@ public class DistCpConstants {
   public static final String CONF_LABEL_APPEND = "distcp.copy.append";
   public static final String CONF_LABEL_DIFF = "distcp.copy.diff";
   public static final String CONF_LABEL_BANDWIDTH_MB = 
"distcp.map.bandwidth.mb";
+  public static final String CONF_LABEL_SIMPLE_LISTING_FILESTATUS_SIZE =
+  "distcp.simplelisting.file.status.size";
+  public static final String CONF_LABEL_SIMPLE_LISTING_RANDOMIZE_FILES =
+  "distcp.simplelisting.randomize.files";
   public static final String CONF_LABEL_FILTERS_FILE =
   "distcp.filters.file";
   public static final String CONF_LABEL_MAX_CHUNKS_TOLERABLE =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/98bdb513/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
index 3f52203..bc30aa1 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.tools;
 
+import com.google.common.collect.Lists;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.fs.Path;
@@ -42,7 +43,10 @@ import com.google.common.annotations.VisibleForTesting;
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
+import java.util.List;
+import java.util.Random;
 
 import static org.apache.hadoop.tools.DistCpConstants
 .HDFS_RESERVED_RAW_DIRECTORY_NAME;
@@ -56,13 +60,19 @@ import static org.apache.hadoop.tools.DistCpConstants
 public class SimpleCopyListing extends CopyListing {
   private static final Log LOG = LogFactory.getLog(SimpleCopyListing.class);
 
+  public static final int DEFAULT_FILE_STATUS_SIZE = 1000;
+  public static final boolean DEFAULT_RANDOMIZE_FILE_LISTING = true;
+
   private long totalPaths = 0;
   private long totalDirs = 0;
   private long totalBytesToCopy = 0;
   private int numListstatusThreads = 1;
+  private final int fileStatusLimit;
+  private final boolean randomizeFileListing;
   private final int maxRetries = 3;
   private CopyFilter copyFilter;
   private DistCpSync distCpSync;
+  private final Random rnd = new Random();
 
   /**
* Protected constructor, to initialize configuration.
@@ -76,6 +86,17 @@ public class SimpleCopyListing extends CopyListing {
 numListstatusThreads = getConf().getInt(
 DistCpConstants.CONF_LABEL_LISTSTATUS_THREADS,
 DistCpConstants.DEFAULT_LISTSTATUS_THREADS);
+fileStatusLimit = Math.max(1, getConf()
+.getInt(DistCpConstants.CONF_LABEL_SIMPLE_LISTING_FILESTATUS_SIZE,
+DEFAULT_FILE_STATUS_SIZE));
+randomizeFileListing = getConf().getBoolean(
+DistCpConstants.CONF_LABEL_SIMPLE_LISTING_RANDOMIZE_FILES,
+DEFAULT_RANDOMIZE_FILE_LISTING);
+if (LOG.isDebugEnabled()) {
+  LOG.debug("numListstatusThreads=" + numListstatusThreads
+  + ", 
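The SimpleCopyListing change buffers file statuses up to a configurable limit and shuffles each batch before writing it out, so DistCp maps do not all land on the same lexicographic region of the namespace. A minimal sketch of that buffered-shuffle idea, with illustrative names rather than the patch's actual API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch: collect paths up to a limit, shuffle the buffer, flush.
// ShufflingLister is a stand-in, not SimpleCopyListing itself.
class ShufflingLister {
    private final List<String> buffer = new ArrayList<>();
    private final List<String> out = new ArrayList<>();
    private final int limit;
    private final Random rnd;

    ShufflingLister(int limit, long seed) {
        this.limit = Math.max(1, limit);  // clamp, as the patch does
        this.rnd = new Random(seed);
    }

    void add(String path) {
        buffer.add(path);
        if (buffer.size() >= limit) {
            flush();
        }
    }

    void flush() {
        Collections.shuffle(buffer, rnd);  // break lexicographic order
        out.addAll(buffer);
        buffer.clear();
    }

    List<String> listing() {
        flush();  // drain any partial final batch
        return out;
    }
}
```

Randomization only reorders within each batch, so the memory cost is bounded by the batch limit rather than the full listing.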

[30/50] [abbrv] hadoop git commit: HDFS-10877. Make RemoteEditLogManifest.committedTxnId optional in Protocol Buffers. Contributed by Sean Mackrory.

2016-09-26 Thread jianhe
HDFS-10877. Make RemoteEditLogManifest.committedTxnId optional in Protocol 
Buffers. Contributed by Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/40acacee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/40acacee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/40acacee

Branch: refs/heads/MAPREDUCE-6608
Commit: 40acacee085494ca52205d37449a46c058d5d325
Parents: 8d619b4
Author: Andrew Wang 
Authored: Thu Sep 22 11:43:11 2016 -0700
Committer: Andrew Wang 
Committed: Thu Sep 22 11:43:11 2016 -0700

--
 .../server/protocol/RemoteEditLogManifest.java  |  7 -
 .../hadoop-hdfs/src/main/proto/HdfsServer.proto |  2 +-
 .../hadoop/hdfs/protocolPB/TestPBHelper.java| 30 ++--
 3 files changed, 28 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/40acacee/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLogManifest.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLogManifest.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLogManifest.java
index 686f7c2..8252b3b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLogManifest.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLogManifest.java
@@ -22,6 +22,7 @@ import java.util.List;
 
 import com.google.common.base.Joiner;
 import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 
 /**
  * An enumeration of logs available on a remote NameNode.
@@ -30,11 +31,15 @@ public class RemoteEditLogManifest {
 
  private List<RemoteEditLog> logs;
 
-  private long committedTxnId = -1;
+  private long committedTxnId = HdfsServerConstants.INVALID_TXID;
 
   public RemoteEditLogManifest() {
   }
 
+  public RemoteEditLogManifest(List<RemoteEditLog> logs) {
+this(logs, HdfsServerConstants.INVALID_TXID);
+  }
+
   public RemoteEditLogManifest(List<RemoteEditLog> logs, long committedTxnId) {
 this.logs = logs;
 this.committedTxnId = committedTxnId;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/40acacee/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto
index e87dc95..910e03b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto
@@ -88,7 +88,7 @@ message RemoteEditLogProto {
  */
 message RemoteEditLogManifestProto {
   repeated RemoteEditLogProto logs = 1;
-  required uint64 committedTxnId = 2;
+  optional uint64 committedTxnId = 2;
 }
 
 /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/40acacee/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
index af17756..4072071 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
@@ -69,6 +69,7 @@ import 
org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
 import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NamenodeRole;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
 import org.apache.hadoop.hdfs.server.common.StorageInfo;
@@ -323,26 +324,37 @@ public class TestPBHelper {
 RemoteEditLog l1 = PBHelper.convert(lProto);
 compare(l, l1);
   }
-  
-  @Test
-  public void testConvertRemoteEditLogManifest() {
-List logs = new ArrayList();
-logs.add(new RemoteEditLog(1, 10));
-logs.add(new RemoteEditLog(11, 20));
-RemoteEditLogManifest m = new RemoteEditLogManifest(logs, 20);
+
+  private void convertAndCheckRemoteEditLogManifest(RemoteEditLogManifest m,
+  

[23/50] [abbrv] hadoop git commit: HADOOP-13601. Fix a log message typo in AbstractDelegationTokenSecretManager. Contributed by Mehran Hassani.

2016-09-26 Thread jianhe
HADOOP-13601. Fix a log message typo in AbstractDelegationTokenSecretManager. 
Contributed by Mehran Hassani.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e80386d6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e80386d6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e80386d6

Branch: refs/heads/MAPREDUCE-6608
Commit: e80386d69d5fb6a08aa3366e42d2518747af569f
Parents: 9f03b40
Author: Mingliang Liu 
Authored: Tue Sep 20 13:19:44 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Sep 20 13:20:01 2016 -0700

--
 .../token/delegation/AbstractDelegationTokenSecretManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e80386d6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
index 1d7f2f5..cc2efc9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
@@ -528,7 +528,7 @@ extends AbstractDelegationTokenIdentifier>
 DataInputStream in = new DataInputStream(buf);
 TokenIdent id = createIdentifier();
 id.readFields(in);
-LOG.info("Token cancelation requested for identifier: "+id);
+LOG.info("Token cancellation requested for identifier: " + id);
 
 if (id.getUser() == null) {
   throw new InvalidToken("Token with no owner");





[42/50] [abbrv] hadoop git commit: YARN-5622. TestYarnCLI.testGetContainers fails due to mismatched date formats. Contributed by Eric Badger.

2016-09-26 Thread jianhe
YARN-5622. TestYarnCLI.testGetContainers fails due to mismatched date formats. 
Contributed by Eric Badger.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6e849cb6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6e849cb6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6e849cb6

Branch: refs/heads/MAPREDUCE-6608
Commit: 6e849cb658438c0561de485e01f3de7df47bf9ad
Parents: a5bb88c
Author: Kihwal Lee 
Authored: Fri Sep 23 14:09:25 2016 -0500
Committer: Kihwal Lee 
Committed: Fri Sep 23 14:09:25 2016 -0500

--
 .../org/apache/hadoop/yarn/client/cli/TestYarnCLI.java| 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e849cb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
index 3fdea40..ef64365 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
@@ -35,8 +35,6 @@ import java.io.OutputStreamWriter;
 import java.io.PrintStream;
 import java.io.PrintWriter;
 import java.io.UnsupportedEncodingException;
-import java.text.DateFormat;
-import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.EnumSet;
@@ -81,6 +79,7 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
 import org.apache.hadoop.yarn.util.Records;
+import org.apache.hadoop.yarn.util.Times;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
@@ -300,7 +299,6 @@ public class TestYarnCLI {
 reports.add(container);
 reports.add(container1);
 reports.add(container2);
-DateFormat dateFormat=new SimpleDateFormat("EEE MMM dd HH:mm:ss Z ");
 when(client.getContainers(any(ApplicationAttemptId.class))).thenReturn(
 reports);
 sysOutStream.reset();
@@ -316,13 +314,13 @@ public class TestYarnCLI {
 pw.printf(ApplicationCLI.CONTAINER_PATTERN, "Container-Id", "Start Time",
 "Finish Time", "State", "Host", "Node Http Address", "LOG-URL");
 pw.printf(ApplicationCLI.CONTAINER_PATTERN, 
"container_1234_0005_01_01",
-dateFormat.format(new Date(time1)), dateFormat.format(new Date(time2)),
+Times.format(time1), Times.format(time2),
"COMPLETE", "host:1234", "http://host:2345", "logURL");
 pw.printf(ApplicationCLI.CONTAINER_PATTERN, 
"container_1234_0005_01_02",
-dateFormat.format(new Date(time1)), dateFormat.format(new Date(time2)),
+Times.format(time1), Times.format(time2),
"COMPLETE", "host:1234", "http://host:2345", "logURL");
 pw.printf(ApplicationCLI.CONTAINER_PATTERN, 
"container_1234_0005_01_03",
-dateFormat.format(new Date(time1)), "N/A", "RUNNING", "host:1234",
+Times.format(time1), "N/A", "RUNNING", "host:1234",
"http://host:2345", "");
 pw.close();
 String appReportStr = baos.toString("UTF-8");





[46/50] [abbrv] hadoop git commit: HDFS-10869. Remove the unused method InodeId#checkId(). Contributed by Jagadesh Kiran N

2016-09-26 Thread jianhe
HDFS-10869. Remove the unused method InodeId#checkId(). Contributed by Jagadesh 
Kiran N


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3e37e243
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3e37e243
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3e37e243

Branch: refs/heads/MAPREDUCE-6608
Commit: 3e37e243ee041e843e060b17c622ab50c8f9ff11
Parents: 4d16d2d
Author: Brahma Reddy Battula 
Authored: Sun Sep 25 11:02:46 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Sun Sep 25 11:02:46 2016 +0530

--
 .../apache/hadoop/hdfs/server/namenode/INodeId.java | 16 
 1 file changed, 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e37e243/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java
index 10139bf..16c28b2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java
@@ -17,10 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
-import java.io.FileNotFoundException;
-
 import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.util.SequentialNumber;
 
 /**
@@ -39,19 +36,6 @@ public class INodeId extends SequentialNumber {
   public static final long ROOT_INODE_ID = LAST_RESERVED_ID + 1;
   public static final long INVALID_INODE_ID = -1;
 
-  /**
-   * To check if the request id is the same as saved id. Don't check fileId
-   * with GRANDFATHER_INODE_ID for backward compatibility.
-   */
-  public static void checkId(long requestId, INode inode)
-  throws FileNotFoundException {
-if (requestId != HdfsConstants.GRANDFATHER_INODE_ID && requestId != 
inode.getId()) {
-  throw new FileNotFoundException(
-  "ID mismatch. Request id and saved id: " + requestId + " , "
-  + inode.getId() + " for file " + inode.getFullPathName());
-}
-  }
-  
   INodeId() {
 super(ROOT_INODE_ID);
   }





[01/50] [abbrv] hadoop git commit: YARN-4232. TopCLI console support for HA mode. Contributed by Bibin A Chundatt

2016-09-26 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/MAPREDUCE-6608 e62053030 -> 14a696f36


YARN-4232. TopCLI console support for HA mode. Contributed by Bibin A Chundatt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ade7c2bc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ade7c2bc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ade7c2bc

Branch: refs/heads/MAPREDUCE-6608
Commit: ade7c2bc9ccf09d843ccb3dfa56c1453a9e87318
Parents: 501a778
Author: Naganarasimha 
Authored: Sat Sep 17 09:52:39 2016 +0530
Committer: Naganarasimha 
Committed: Sat Sep 17 09:52:39 2016 +0530

--
 .../apache/hadoop/yarn/client/cli/TopCLI.java   | 114 ---
 .../hadoop/yarn/client/cli/TestTopCLI.java  | 106 +
 .../hadoop/yarn/webapp/util/WebAppUtils.java|  35 +++---
 3 files changed, 224 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ade7c2bc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
index eca1266..f99dd48 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
@@ -20,10 +20,13 @@ package org.apache.hadoop.yarn.client.cli;
 
 import java.io.IOException;
 import java.io.InputStream;
+import java.net.ConnectException;
+import java.net.MalformedURLException;
 import java.net.URL;
 import java.net.URLConnection;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Collection;
 import java.util.Collections;
 import java.util.Comparator;
 import java.util.EnumMap;
@@ -37,6 +40,10 @@ import java.util.Set;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import javax.net.ssl.HttpsURLConnection;
+import javax.net.ssl.SSLSocketFactory;
+
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.cache.Cache;
 import com.google.common.cache.CacheBuilder;
 import org.apache.commons.cli.CommandLine;
@@ -50,7 +57,12 @@ import org.apache.commons.lang.time.DateFormatUtils;
 import org.apache.commons.lang.time.DurationFormatUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.http.HttpConfig.Policy;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
+import org.apache.hadoop.security.ssl.SSLFactory;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.util.ToolRunner;
 import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
@@ -60,12 +72,17 @@ import org.apache.hadoop.yarn.api.records.QueueInfo;
 import org.apache.hadoop.yarn.api.records.QueueStatistics;
 import org.apache.hadoop.yarn.api.records.YarnApplicationState;
 import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
+import org.apache.hadoop.yarn.conf.HAUtil;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
+import org.codehaus.jettison.json.JSONException;
 import org.codehaus.jettison.json.JSONObject;
 
 public class TopCLI extends YarnCLI {
 
+  private static final String CLUSTER_INFO_URL = "/ws/v1/cluster/info";
+
   private static final Log LOG = LogFactory.getLog(TopCLI.class);
   private String CLEAR = "\u001b[2J";
   private String CLEAR_LINE = "\u001b[2K";
@@ -742,18 +759,12 @@ public class TopCLI extends YarnCLI {
 
   long getRMStartTime() {
 try {
-  URL url =
-  new URL("http://;
-  + client.getConfig().get(YarnConfiguration.RM_WEBAPP_ADDRESS)
-  + "/ws/v1/cluster/info");
-  URLConnection conn = url.openConnection();
-  conn.connect();
-  InputStream in = conn.getInputStream();
-  String encoding = conn.getContentEncoding();
-  encoding = encoding == null ? "UTF-8" : encoding;
-  String body = IOUtils.toString(in, encoding);
-  JSONObject obj = new JSONObject(body);
-  JSONObject clusterInfo = obj.getJSONObject("clusterInfo");
+  // connect with url
+  URL url = getClusterUrl();
+  
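The refactor above replaces a hard-wired `"http://" + RM_WEBAPP_ADDRESS` URL with an HA-aware `getClusterUrl()` helper. The two small decisions involved can be sketched in isolation — names here are illustrative, not the real TopCLI helpers — choosing the scheme from the HTTP policy and falling back to UTF-8 when the response declares no encoding:

```java
// Sketch of the two decisions behind the getClusterUrl() refactor above
// (illustrative names): pick http/https per the configured policy, and
// mirror the original encoding fallback from getRMStartTime().
public class ClusterUrlSketch {

    static String clusterInfoUrl(boolean httpsEnabled, String rmWebAddress) {
        String scheme = httpsEnabled ? "https://" : "http://";
        return scheme + rmWebAddress + "/ws/v1/cluster/info";
    }

    // Mirrors the removed line: encoding = encoding == null ? "UTF-8" : encoding;
    static String effectiveEncoding(String contentEncoding) {
        return contentEncoding == null ? "UTF-8" : contentEncoding;
    }

    public static void main(String[] args) {
        System.out.println(clusterInfoUrl(false, "rm1.example.com:8088"));
        System.out.println(effectiveEncoding(null)); // UTF-8
    }
}
```

In HA mode the real helper additionally iterates the configured RM ids and probes each web address until one answers; the sketch above only covers the per-address URL construction.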

[33/50] [abbrv] hadoop git commit: YARN-3692. Allow REST API to set a user generated message when killing an application. Contributed by Rohith Sharma K S

2016-09-26 Thread jianhe
YARN-3692. Allow REST API to set a user generated message when killing an 
application. Contributed by Rohith Sharma K S


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d0372dc6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d0372dc6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d0372dc6

Branch: refs/heads/MAPREDUCE-6608
Commit: d0372dc613136910160e9d42bd5eaa0d4bde2356
Parents: 5ffd4b7
Author: Naganarasimha 
Authored: Fri Sep 23 06:30:49 2016 +0530
Committer: Naganarasimha 
Committed: Fri Sep 23 06:30:49 2016 +0530

--
 .../hadoop/mapred/ResourceMgrDelegate.java  |  6 ++
 .../protocolrecords/KillApplicationRequest.java | 18 
 .../src/main/proto/yarn_service_protos.proto|  1 +
 .../hadoop/yarn/client/api/YarnClient.java  | 14 +
 .../yarn/client/api/impl/YarnClientImpl.java| 22 +++-
 .../impl/pb/KillApplicationRequestPBImpl.java   | 18 
 .../server/resourcemanager/ClientRMService.java | 20 +-
 .../resourcemanager/webapp/RMWebServices.java   |  8 +--
 .../resourcemanager/webapp/dao/AppState.java|  8 +++
 .../resourcemanager/TestClientRMService.java|  7 ++-
 .../TestRMWebServicesAppsModification.java  |  4 
 11 files changed, 113 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0372dc6/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
index 159b518..c302553 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
@@ -511,4 +511,10 @@ public class ResourceMgrDelegate extends YarnClient {
   throws YarnException, IOException {
 client.signalToContainer(containerId, command);
   }
+
+  @Override
+  public void killApplication(ApplicationId appId, String diagnostics)
+  throws YarnException, IOException {
+client.killApplication(appId, diagnostics);
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0372dc6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
index 606cf4e..a7679a0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.api.protocolrecords;
 
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.util.Records;
@@ -57,4 +58,21 @@ public abstract class KillApplicationRequest {
   @Public
   @Stable
   public abstract void setApplicationId(ApplicationId applicationId);
+
+  /**
+   * Get the diagnostics with which the application is being killed.
+   * @return diagnostics with which the application is being killed
+   */
+  @Public
+  @Unstable
+  public abstract String getDiagnostics();
+
+  /**
+   * Set the diagnostics with which the application is being killed.
+   * @param diagnostics diagnostics with which the application is being
+   *  killed
+   */
+  @Public
+  @Unstable
+  public abstract void setDiagnostics(String diagnostics);
 }
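The new field above follows YARN's abstract-record pattern: an abstract class exposes `newInstance` plus getter/setter pairs, and a protobuf-backed `PBImpl` supplies the storage. A simplified stand-in (the `Impl` class here is hypothetical; the real implementation is `KillApplicationRequestPBImpl`):

```java
// Simplified sketch of the record pattern used by KillApplicationRequest
// above. The real class is protobuf-backed; this stand-in only shows the
// newInstance + getter/setter shape that the new diagnostics field follows.
public class KillRequestSketch {

    public abstract static class KillApplicationRequest {
        // Factory mirroring the Records.newRecord(...) + setters idiom.
        public static KillApplicationRequest newInstance(String appId,
                                                         String diagnostics) {
            KillApplicationRequest r = new Impl();
            r.setApplicationId(appId);
            r.setDiagnostics(diagnostics);
            return r;
        }
        public abstract String getApplicationId();
        public abstract void setApplicationId(String applicationId);
        public abstract String getDiagnostics();
        public abstract void setDiagnostics(String diagnostics);
    }

    // Hypothetical plain-field implementation (the real one is a PBImpl).
    static final class Impl extends KillApplicationRequest {
        private String applicationId;
        private String diagnostics;
        public String getApplicationId() { return applicationId; }
        public void setApplicationId(String id) { this.applicationId = id; }
        public String getDiagnostics() { return diagnostics; }
        public void setDiagnostics(String d) { this.diagnostics = d; }
    }

    public static void main(String[] args) {
        KillApplicationRequest req = KillApplicationRequest.newInstance(
            "application_1234_0001", "Killed by operator: disk pressure");
        System.out.println(req.getDiagnostics());
    }
}
```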

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0372dc6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto

[09/50] [abbrv] hadoop git commit: YARN-5637. Changes in NodeManager to support Container rollback and commit. (asuresh)

2016-09-26 Thread jianhe
YARN-5637. Changes in NodeManager to support Container rollback and commit. 
(asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3552c2b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3552c2b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3552c2b9

Branch: refs/heads/MAPREDUCE-6608
Commit: 3552c2b99dff4f21489ff284f9dcba40e897a1e5
Parents: ea839bd
Author: Arun Suresh 
Authored: Fri Sep 16 16:53:18 2016 -0700
Committer: Arun Suresh 
Committed: Sun Sep 18 10:55:18 2016 -0700

--
 .../containermanager/ContainerManagerImpl.java  |  68 ++-
 .../containermanager/container/Container.java   |   4 +
 .../container/ContainerEventType.java   |   1 +
 .../container/ContainerImpl.java| 188 ++-
 .../container/ContainerReInitEvent.java |  20 +-
 .../TestContainerManagerWithLCE.java|  42 -
 .../containermanager/TestContainerManager.java  | 152 +--
 .../nodemanager/webapp/MockContainer.java   |  10 +
 8 files changed, 401 insertions(+), 84 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3552c2b9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
index f909ca5..8a9ad99 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
@@ -165,8 +165,8 @@ import static 
org.apache.hadoop.service.Service.STATE.STARTED;
 public class ContainerManagerImpl extends CompositeService implements
 ContainerManager {
 
-  private enum ReinitOp {
-UPGRADE, COMMIT, ROLLBACK, LOCALIZE;
+  private enum ReInitOp {
+RE_INIT, COMMIT, ROLLBACK, LOCALIZE;
   }
   /**
* Extra duration to wait for applications to be killed on shutdown.
@@ -1535,7 +1535,7 @@ public class ContainerManagerImpl extends 
CompositeService implements
 
 ContainerId containerId = request.getContainerId();
 Container container = preUpgradeOrLocalizeCheck(containerId,
-ReinitOp.LOCALIZE);
+ReInitOp.LOCALIZE);
 try {
   Map req =
   container.getResourceSet().addResources(request.getLocalResources());
@@ -1551,16 +1551,31 @@ public class ContainerManagerImpl extends 
CompositeService implements
 return ResourceLocalizationResponse.newInstance();
   }
 
-  public void upgradeContainer(ContainerId containerId,
-  ContainerLaunchContext upgradeLaunchContext) throws YarnException {
+  /**
+   * ReInitialize a container using a new Launch Context. If the
+   * retryFailureContext is not provided, the container is
+   * terminated on failure.
+   *
+   * NOTE: Auto-Commit is true by default. This also means that the rollback
+   *   context is purged as soon as the command to start the new process
+   *   is sent. (The Container moves to RUNNING state)
+   *
+   * @param containerId Container Id.
+   * @param autoCommit Auto Commit flag.
+   * @param reInitLaunchContext Target Launch Context.
+   * @throws YarnException Yarn Exception.
+   */
+  public void reInitializeContainer(ContainerId containerId,
+  ContainerLaunchContext reInitLaunchContext, boolean autoCommit)
+  throws YarnException {
 Container container = preUpgradeOrLocalizeCheck(containerId,
-ReinitOp.UPGRADE);
+ReInitOp.RE_INIT);
 ResourceSet resourceSet = new ResourceSet();
 try {
-  resourceSet.addResources(upgradeLaunchContext.getLocalResources());
+  resourceSet.addResources(reInitLaunchContext.getLocalResources());
   dispatcher.getEventHandler().handle(
-  new ContainerReInitEvent(containerId, upgradeLaunchContext,
-  resourceSet));
+  new ContainerReInitEvent(containerId, reInitLaunchContext,
+  resourceSet, autoCommit));
   container.setIsReInitializing(true);
 } catch (URISyntaxException e) {
   LOG.info("Error when parsing local resource URI 
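The auto-commit semantics spelled out in the javadoc above can be modeled as a tiny piece of bookkeeping — a toy sketch with illustrative names, not the NodeManager's actual state machine: with `autoCommit=true` the rollback context is purged as soon as the new process starts; with `autoCommit=false` it is retained until an explicit commit:

```java
// Toy model of the re-init / rollback bookkeeping described above.
// With autoCommit=true the rollback context is purged immediately;
// with autoCommit=false the old launch context survives until commit().
public class ReInitSketch {
    static class Container {
        String launchContext;    // stands in for ContainerLaunchContext
        String rollbackContext;  // non-null => rollback still possible

        void reInitialize(String newContext, boolean autoCommit) {
            rollbackContext = autoCommit ? null : launchContext;
            launchContext = newContext;  // "new process started"
        }
        void commit() { rollbackContext = null; }
        boolean canRollback() { return rollbackContext != null; }
        void rollback() {
            if (!canRollback()) {
                throw new IllegalStateException("nothing to roll back");
            }
            launchContext = rollbackContext;
            rollbackContext = null;
        }
    }

    public static void main(String[] args) {
        Container c = new Container();
        c.launchContext = "v1";
        c.reInitialize("v2", false);          // explicit-commit mode
        System.out.println(c.canRollback());  // true: v1 retained
        c.rollback();
        System.out.println(c.launchContext);  // v1

        c.reInitialize("v3", true);           // auto-commit: purged at once
        System.out.println(c.canRollback());  // false
    }
}
```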

[17/50] [abbrv] hadoop git commit: YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. Contributed by Wangda Tan

2016-09-26 Thread jianhe
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b66d9ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
index 3e9785f..ffb6892 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
@@ -107,68 +107,77 @@ public class ParentQueue extends AbstractCSQueue {
 ", fullname=" + getQueuePath());
   }
 
-  synchronized void setupQueueConfigs(Resource clusterResource)
+  void setupQueueConfigs(Resource clusterResource)
   throws IOException {
-super.setupQueueConfigs(clusterResource);
-StringBuilder aclsString = new StringBuilder();
-for (Map.Entry e : acls.entrySet()) {
-  aclsString.append(e.getKey() + ":" + e.getValue().getAclString());
-}
+try {
+  writeLock.lock();
+  super.setupQueueConfigs(clusterResource);
+  StringBuilder aclsString = new StringBuilder();
+  for (Map.Entry e : acls.entrySet()) {
+aclsString.append(e.getKey() + ":" + e.getValue().getAclString());
+  }
 
-StringBuilder labelStrBuilder = new StringBuilder(); 
-if (accessibleLabels != null) {
-  for (String s : accessibleLabels) {
-labelStrBuilder.append(s);
-labelStrBuilder.append(",");
+  StringBuilder labelStrBuilder = new StringBuilder();
+  if (accessibleLabels != null) {
+for (String s : accessibleLabels) {
+  labelStrBuilder.append(s);
+  labelStrBuilder.append(",");
+}
   }
-}
 
-LOG.info(queueName +
-", capacity=" + this.queueCapacities.getCapacity() +
-", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity() +
-", maxCapacity=" + this.queueCapacities.getMaximumCapacity() +
-", absoluteMaxCapacity=" + 
this.queueCapacities.getAbsoluteMaximumCapacity() +
-", state=" + state +
-", acls=" + aclsString + 
-", labels=" + labelStrBuilder.toString() + "\n" +
-", reservationsContinueLooking=" + reservationsContinueLooking);
+  LOG.info(queueName + ", capacity=" + this.queueCapacities.getCapacity()
+  + ", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity()
+  + ", maxCapacity=" + this.queueCapacities.getMaximumCapacity()
+  + ", absoluteMaxCapacity=" + this.queueCapacities
+  .getAbsoluteMaximumCapacity() + ", state=" + state + ", acls="
+  + aclsString + ", labels=" + labelStrBuilder.toString() + "\n"
+  + ", reservationsContinueLooking=" + reservationsContinueLooking);
+} finally {
+  writeLock.unlock();
+}
   }
 
   private static float PRECISION = 0.0005f; // 0.05% precision
  synchronized void setChildQueues(Collection<CSQueue> childQueues) {
-// Validate
-float childCapacities = 0;
-for (CSQueue queue : childQueues) {
-  childCapacities += queue.getCapacity();
-}
-float delta = Math.abs(1.0f - childCapacities);  // crude way to check
-// allow capacities being set to 0, and enforce child 0 if parent is 0
-if (((queueCapacities.getCapacity() > 0) && (delta > PRECISION)) || 
-((queueCapacities.getCapacity() == 0) && (childCapacities > 0))) {
-  throw new IllegalArgumentException("Illegal" +
-   " capacity of " + childCapacities + 
-   " for children of queue " + queueName);
-}
-// check label capacities
-for (String nodeLabel : queueCapacities.getExistingNodeLabels()) {
-  float capacityByLabel = queueCapacities.getCapacity(nodeLabel);
-  // check children's labels
-  float sum = 0;
+
+  void setChildQueues(Collection<CSQueue> childQueues) {
+try {
+  writeLock.lock();
+  // Validate
+  float childCapacities = 0;
   for (CSQueue queue : childQueues) {
-sum += queue.getQueueCapacities().getCapacity(nodeLabel);
+childCapacities += queue.getCapacity();
   }
-  if ((capacityByLabel > 0 && Math.abs(1.0f - sum) > PRECISION)
-  || (capacityByLabel == 0) && (sum > 0)) {
-throw new IllegalArgumentException("Illegal" + " capacity of "
-+ sum + " for 

[12/50] [abbrv] hadoop git commit: YARN-3141. Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp. Contributed by Wangda Tan

2016-09-26 Thread jianhe
YARN-3141. Improve locks in 
SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp. Contributed by 
Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b8a30f2f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b8a30f2f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b8a30f2f

Branch: refs/heads/MAPREDUCE-6608
Commit: b8a30f2f170ffbd590e7366c3c944ab4919e40df
Parents: ea29e3b
Author: Jian He 
Authored: Mon Sep 19 16:58:39 2016 +0800
Committer: Jian He 
Committed: Mon Sep 19 17:08:01 2016 +0800

--
 .../scheduler/SchedulerApplicationAttempt.java  | 744 +++
 .../allocator/RegularContainerAllocator.java|   2 +-
 .../scheduler/common/fica/FiCaSchedulerApp.java | 418 ++-
 .../scheduler/fair/FSAppAttempt.java| 465 ++--
 4 files changed, 922 insertions(+), 707 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8a30f2f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
index 97d29cf..adc3a97 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
@@ -26,8 +26,11 @@ import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
+import com.google.common.collect.ConcurrentHashMultiset;
 import org.apache.commons.lang.time.DateUtils;
 import org.apache.commons.lang.time.FastDateFormat;
 import org.apache.commons.logging.Log;
@@ -71,8 +74,6 @@ import org.apache.hadoop.yarn.util.resource.Resources;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import com.google.common.collect.HashMultiset;
-import com.google.common.collect.Multiset;
 
 /**
  * Represents an application attempt from the viewpoint of the scheduler.
@@ -97,14 +98,14 @@ public class SchedulerApplicationAttempt implements 
SchedulableEntity {
   protected final AppSchedulingInfo appSchedulingInfo;
   protected ApplicationAttemptId attemptId;
   protected Map liveContainers =
-  new HashMap();
+  new ConcurrentHashMap<>();
   protected final Map>
   reservedContainers = new HashMap<>();
 
-  private final Multiset reReservations =
-  HashMultiset.create();
+  private final ConcurrentHashMultiset reReservations =
+  ConcurrentHashMultiset.create();
   
-  private Resource resourceLimit = Resource.newInstance(0, 0);
+  private volatile Resource resourceLimit = Resource.newInstance(0, 0);
   private boolean unmanagedAM = true;
   private boolean amRunning = false;
   private LogAggregationContext logAggregationContext;
@@ -138,8 +139,9 @@ public class SchedulerApplicationAttempt implements 
SchedulableEntity {
* the application successfully schedules a task (at rack or node local), it
* is reset to 0.
*/
-  Multiset schedulingOpportunities = 
HashMultiset.create();
-  
+  private ConcurrentHashMultiset schedulingOpportunities =
+  ConcurrentHashMultiset.create();
+
   /**
* Count how many times the application has been given an opportunity to
* schedule a non-partitioned resource request at each priority. Each time 
the
@@ -147,15 +149,16 @@ public class SchedulerApplicationAttempt implements 
SchedulableEntity {
* incremented, and each time the application successfully schedules a task,
* it is reset to 0 when schedule any task at corresponding priority.
*/
-  Multiset missedNonPartitionedReqSchedulingOpportunity =
-  HashMultiset.create();
+  private ConcurrentHashMultiset
+  missedNonPartitionedReqSchedulingOpportunity =
+  ConcurrentHashMultiset.create();
   
   // Time of the last 
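The patch replaces Guava's unsynchronized `HashMultiset` with `ConcurrentHashMultiset` so the per-priority counters (re-reservations, scheduling opportunities) can be updated without holding the attempt's monitor. A dependency-free sketch of the same counting behavior using only `java.util.concurrent` (illustrative names; the real code uses the Guava class):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Dependency-free sketch of the concurrent-multiset counting behavior
// that ConcurrentHashMultiset provides in the patch above.
public class MultisetSketch {
    private final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

    // Like multiset.add(key): atomic increment.
    void add(String key) {
        counts.merge(key, 1, Integer::sum);
    }

    // Like multiset.setCount(key, 0): reset on a successful schedule.
    void reset(String key) {
        counts.remove(key);
    }

    int count(String key) {
        return counts.getOrDefault(key, 0);
    }

    public static void main(String[] args) {
        MultisetSketch opportunities = new MultisetSketch();
        opportunities.add("priority-1");
        opportunities.add("priority-1");
        System.out.println(opportunities.count("priority-1")); // 2
        opportunities.reset("priority-1");
        System.out.println(opportunities.count("priority-1")); // 0
    }
}
```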

[18/50] [abbrv] hadoop git commit: YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. Contributed by Wangda Tan

2016-09-26 Thread jianhe
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b66d9ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
index 922d711..6129772 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
@@ -20,6 +20,7 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
 
 import java.io.IOException;
 import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;
@@ -85,11 +86,11 @@ public class LeafQueue extends AbstractCSQueue {
   private static final Log LOG = LogFactory.getLog(LeafQueue.class);
 
   private float absoluteUsedCapacity = 0.0f;
-  private int userLimit;
-  private float userLimitFactor;
+  private volatile int userLimit;
+  private volatile float userLimitFactor;
 
   protected int maxApplications;
-  protected int maxApplicationsPerUser;
+  protected volatile int maxApplicationsPerUser;
   
   private float maxAMResourcePerQueuePercent;
 
@@ -97,15 +98,15 @@ public class LeafQueue extends AbstractCSQueue {
   private volatile boolean rackLocalityFullReset;
 
   Map applicationAttemptMap =
-  new HashMap();
+  new ConcurrentHashMap<>();
 
   private Priority defaultAppPriorityPerQueue;
 
-  private OrderingPolicy pendingOrderingPolicy = null;
+  private final OrderingPolicy pendingOrderingPolicy;
 
   private volatile float minimumAllocationFactor;
 
-  private Map users = new HashMap();
+  private Map users = new ConcurrentHashMap<>();
 
   private final RecordFactory recordFactory = 
 RecordFactoryProvider.getRecordFactory(null);
@@ -122,7 +123,7 @@ public class LeafQueue extends AbstractCSQueue {
 
   private volatile ResourceLimits cachedResourceLimitsForHeadroom = null;
 
-  private OrderingPolicy orderingPolicy = null;
+  private volatile OrderingPolicy orderingPolicy = null;
 
   // Summation of consumed ratios for all users in queue
   private float totalUserConsumedRatio = 0;
@@ -131,7 +132,7 @@ public class LeafQueue extends AbstractCSQueue {
   // record all ignore partition exclusivityRMContainer, this will be used to 
do
   // preemption, key is the partition of the RMContainer allocated on
   private Map 
ignorePartitionExclusivityRMContainers =
-  new HashMap<>();
+  new ConcurrentHashMap<>();
 
   @SuppressWarnings({ "unchecked", "rawtypes" })
   public LeafQueue(CapacitySchedulerContext cs,
@@ -154,125 +155,125 @@ public class LeafQueue extends AbstractCSQueue {
 setupQueueConfigs(cs.getClusterResource());
   }
 
-  protected synchronized void setupQueueConfigs(Resource clusterResource)
+  protected void setupQueueConfigs(Resource clusterResource)
   throws IOException {
-super.setupQueueConfigs(clusterResource);
-
-this.lastClusterResource = clusterResource;
-
-this.cachedResourceLimitsForHeadroom = new ResourceLimits(clusterResource);
-
-// Initialize headroom info, also used for calculating application 
-// master resource limits.  Since this happens during queue initialization
-// and all queues may not be realized yet, we'll use (optimistic) 
-// absoluteMaxCapacity (it will be replaced with the more accurate 
-// absoluteMaxAvailCapacity during headroom/userlimit/allocation events)
-setQueueResourceLimitsInfo(clusterResource);
+try {
+  writeLock.lock();
+  super.setupQueueConfigs(clusterResource);
 
-CapacitySchedulerConfiguration conf = csContext.getConfiguration();
+  this.lastClusterResource = clusterResource;
 
-    setOrderingPolicy(conf.getOrderingPolicy(getQueuePath()));
+  this.cachedResourceLimitsForHeadroom = new ResourceLimits(
+  clusterResource);
 
-userLimit = conf.getUserLimit(getQueuePath());
-userLimitFactor = conf.getUserLimitFactor(getQueuePath());
+  // Initialize headroom info, also used for 
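The conversion pattern running through this patch — `synchronized` methods replaced by an explicit `ReentrantReadWriteLock` so readers no longer serialize behind writers — looks like this in isolation (a sketch; the lock fields mirror the diff, the counter is illustrative):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the synchronized -> ReentrantReadWriteLock conversion used
// throughout this patch: writers take writeLock, while multiple readers
// can proceed concurrently under readLock. The int field is illustrative.
public class LockConversionSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = lock.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = lock.writeLock();
    private int value;

    // Was: synchronized void update(int v)
    void update(int v) {
        writeLock.lock();
        try {
            value = v;
        } finally {
            writeLock.unlock();
        }
    }

    int read() {
        readLock.lock();
        try {
            return value;
        } finally {
            readLock.unlock();
        }
    }

    public static void main(String[] args) {
        LockConversionSketch s = new LockConversionSketch();
        s.update(42);
        System.out.println(s.read()); // 42
    }
}
```

Note the patch itself takes the lock as the first statement inside the `try`; lock-before-`try`, shown here, is the more common form and is equivalent as long as `lock()` itself cannot fail.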

[16/50] [abbrv] hadoop git commit: HDFS-10875. Optimize du -x to cache intermediate result. Contributed by Xiao Chen.

2016-09-26 Thread jianhe
HDFS-10875. Optimize du -x to cache intermediate result. Contributed by Xiao 
Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e52d6e7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e52d6e7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e52d6e7a

Branch: refs/heads/MAPREDUCE-6608
Commit: e52d6e7a46ceef74dd8d8a3d49c49420e3271365
Parents: 98bdb51
Author: Xiao Chen 
Authored: Mon Sep 19 21:44:42 2016 -0700
Committer: Xiao Chen 
Committed: Mon Sep 19 21:44:42 2016 -0700

--
 .../apache/hadoop/hdfs/server/namenode/INodeDirectory.java  | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e52d6e7a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
index 9a8f9b2..24c8815 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
@@ -630,14 +630,15 @@ public class INodeDirectory extends 
INodeWithAdditionalFields
   ContentSummaryComputationContext summary) {
 final DirectoryWithSnapshotFeature sf = getDirectoryWithSnapshotFeature();
 if (sf != null && snapshotId == Snapshot.CURRENT_STATE_ID) {
+  final ContentCounts counts = new ContentCounts.Builder().build();
   // if the getContentSummary call is against a non-snapshot path, the
   // computation should include all the deleted files/directories
   sf.computeContentSummary4Snapshot(summary.getBlockStoragePolicySuite(),
-  summary.getCounts());
-  // Also compute ContentSummary for snapshotCounts (So we can extract it
+  counts);
+  summary.getCounts().addContents(counts);
+  // Also add ContentSummary to snapshotCounts (So we can extract it
   // later from the ContentSummary of all).
-  sf.computeContentSummary4Snapshot(summary.getBlockStoragePolicySuite(),
-  summary.getSnapshotCounts());
+  summary.getSnapshotCounts().addContents(counts);
 }
 final DirectoryWithQuotaFeature q = getDirectoryWithQuotaFeature();
 if (q != null && snapshotId == Snapshot.CURRENT_STATE_ID) {
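The hunk above makes computeContentSummary run the snapshot traversal once into a scratch ContentCounts and add that result to both the overall and the snapshot accumulators, instead of invoking computeContentSummary4Snapshot twice. A minimal sketch of that compute-once, add-twice pattern (the Counts class and computeExpensive() below are hypothetical stand-ins, not HDFS classes):

```java
import java.util.HashMap;
import java.util.Map;

public class ComputeOnce {
    // Hypothetical stand-in for ContentCounts: a bag of named counters.
    static class Counts {
        final Map<String, Long> values = new HashMap<>();
        void add(Counts other) { // stands in for ContentCounts#addContents
            other.values.forEach((k, v) -> values.merge(k, v, Long::sum));
        }
    }

    static int walks = 0; // how many times the expensive traversal ran

    // Stands in for computeContentSummary4Snapshot: one full tree walk.
    static Counts computeExpensive() {
        walks++;
        Counts c = new Counts();
        c.values.put("fileCount", 3L);
        return c;
    }

    public static void main(String[] args) {
        Counts total = new Counts();
        Counts snapshotTotal = new Counts();

        // Before the patch: two traversals, one per accumulator.
        // After the patch: one traversal into a scratch object, added twice.
        Counts scratch = computeExpensive();
        total.add(scratch);
        snapshotTotal.add(scratch);

        System.out.println(walks);                         // 1
        System.out.println(total.values.get("fileCount")); // 3
    }
}
```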


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[21/50] [abbrv] hadoop git commit: Addendum patch for fix javadocs failure which is caused by YARN-3141. (wangda)

2016-09-26 Thread jianhe
Addendum patch for fix javadocs failure which is caused by YARN-3141. (wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e45307c9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e45307c9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e45307c9

Branch: refs/heads/MAPREDUCE-6608
Commit: e45307c9a063248fcfb08281025d87c4abd343b1
Parents: c6d1d74
Author: Wangda Tan 
Authored: Tue Sep 20 11:21:01 2016 -0700
Committer: Wangda Tan 
Committed: Tue Sep 20 11:21:01 2016 -0700

--
 .../resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e45307c9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
index f40ecd7..fd43e74 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
@@ -328,7 +328,7 @@ public class FiCaSchedulerApp extends 
SchedulerApplicationAttempt {
* of the resources that will be allocated to and preempted from this
* application.
*
-   * @param rc
+   * @param resourceCalculator
* @param clusterResource
* @param minimumAllocation
* @return an allocation
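The one-line fix works because the javadoc tool requires every `@param` tag to name an actual parameter of the method; the stale tag `rc` no longer matched a parameter and failed the javadoc build. A hypothetical illustration of a tag set that passes (not the real FiCaSchedulerApp signature):

```java
public class JavadocParams {
    /**
     * Computes a (fake) allocation, for illustration only.
     *
     * @param resourceCalculator calculator used to compare resources;
     *                           the tag name must match the parameter name
     * @param clusterResource    total resources available in the cluster
     * @return the portion of clusterResource granted (here: all of it)
     */
    static long allocate(Object resourceCalculator, long clusterResource) {
        return clusterResource;
    }

    public static void main(String[] args) {
        System.out.println(allocate(null, 1024L)); // 1024
    }
}
```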





[06/50] [abbrv] hadoop git commit: MAPREDUCE-6774. Add support for HDFS erasure code policy to TestDFSIO. Contributed by Sammi Chen

2016-09-26 Thread jianhe
MAPREDUCE-6774. Add support for HDFS erasure code policy to TestDFSIO. 
Contributed by Sammi Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/501a7785
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/501a7785
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/501a7785

Branch: refs/heads/MAPREDUCE-6608
Commit: 501a77856d6b6edfb261547117e719da7a9cd221
Parents: 58bae35
Author: Kai Zheng 
Authored: Sun Sep 18 09:03:15 2016 +0800
Committer: Kai Zheng 
Committed: Sun Sep 18 09:03:15 2016 +0800

--
 .../java/org/apache/hadoop/fs/TestDFSIO.java| 159 +++
 1 file changed, 124 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/501a7785/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
index 05d4d77..d218169 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
@@ -40,6 +40,7 @@ import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.SequenceFile;
 import org.apache.hadoop.io.SequenceFile.CompressionType;
@@ -106,9 +107,14 @@ public class TestDFSIO implements Tool {
 " [-nrFiles N]" +
 " [-size Size[B|KB|MB|GB|TB]]" +
 " [-resFile resultFileName] [-bufferSize Bytes]" +
-" [-storagePolicy storagePolicyName]";
+" [-storagePolicy storagePolicyName]" +
+" [-erasureCodePolicy erasureCodePolicyName]";
 
   private Configuration config;
+  private static final String STORAGE_POLICY_NAME_KEY =
+  "test.io.block.storage.policy";
+  private static final String ERASURE_CODE_POLICY_NAME_KEY =
+  "test.io.erasure.code.policy";
 
   static{
 Configuration.addDefaultResource("hdfs-default.xml");
@@ -211,9 +217,9 @@ public class TestDFSIO implements Tool {
 bench = new TestDFSIO();
 bench.getConf().setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
 cluster = new MiniDFSCluster.Builder(bench.getConf())
-.numDataNodes(2)
-.format(true)
-.build();
+.numDataNodes(2)
+.format(true)
+.build();
 FileSystem fs = cluster.getFileSystem();
 bench.createControlFile(fs, DEFAULT_NR_BYTES, DEFAULT_NR_FILES);
 
@@ -356,7 +362,7 @@ public class TestDFSIO implements Tool {
 ReflectionUtils.newInstance(codec, getConf());
   }
 
-  blockStoragePolicy = getConf().get("test.io.block.storage.policy", null);
+  blockStoragePolicy = getConf().get(STORAGE_POLICY_NAME_KEY, null);
 }
 
 @Override // IOMapperBase
@@ -388,9 +394,10 @@ public class TestDFSIO implements Tool {
*/
   public static class WriteMapper extends IOStatMapper {
 
-public WriteMapper() { 
-  for(int i=0; i < bufferSize; i++)
-buffer[i] = (byte)('0' + i % 50);
+public WriteMapper() {
+  for (int i = 0; i < bufferSize; i++) {
+buffer[i] = (byte) ('0' + i % 50);
+  }
 }
 
 @Override // IOMapperBase
@@ -431,6 +438,9 @@ public class TestDFSIO implements Tool {
 fs.delete(getDataDir(config), true);
 fs.delete(writeDir, true);
 long tStart = System.currentTimeMillis();
+if (isECEnabled()) {
+  createAndEnableECOnPath(fs, getDataDir(config));
+}
 runIOTest(WriteMapper.class, writeDir);
 long execTime = System.currentTimeMillis() - tStart;
 return execTime;
@@ -734,6 +744,7 @@ public class TestDFSIO implements Tool {
 TestType testType = null;
 int bufferSize = DEFAULT_BUFFER_SIZE;
 long nrBytes = 1*MEGA;
+String erasureCodePolicyName = null;
 int nrFiles = 1;
 long skipSize = 0;
 String resFileName = DEFAULT_RES_FILE_NAME;
@@ -785,26 +796,31 @@ public class TestDFSIO implements Tool {
 resFileName = args[++i];
   } 

[04/50] [abbrv] hadoop git commit: YARN-5657. Fix TestDefaultContainerExecutor. (asuresh)

2016-09-26 Thread jianhe
YARN-5657. Fix TestDefaultContainerExecutor. (asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f67237cb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f67237cb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f67237cb

Branch: refs/heads/MAPREDUCE-6608
Commit: f67237cbe7bc48a1b9088e990800b37529f1db2a
Parents: 4174b97
Author: Arun Suresh 
Authored: Sat Sep 17 09:29:03 2016 -0700
Committer: Arun Suresh 
Committed: Sat Sep 17 09:32:05 2016 -0700

--
 .../server/nodemanager/TestDefaultContainerExecutor.java | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f67237cb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDefaultContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDefaultContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDefaultContainerExecutor.java
index e2ec693..3bb8008 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDefaultContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDefaultContainerExecutor.java
@@ -22,6 +22,7 @@ import static org.apache.hadoop.fs.CreateFlag.CREATE;
 import static org.apache.hadoop.fs.CreateFlag.OVERWRITE;
 import static org.junit.Assert.assertTrue;
 import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.anyBoolean;
 import static org.mockito.Matchers.isA;
 import static org.mockito.Mockito.doAnswer;
 import static org.mockito.Mockito.mock;
@@ -386,8 +387,8 @@ public class TestDefaultContainerExecutor {
   }
 }
 return null;
-  }
-}).when(mockUtil).copy(any(Path.class), any(Path.class));
+  }}).when(mockUtil).copy(any(Path.class), any(Path.class),
+anyBoolean(), anyBoolean());
 
 doAnswer(new Answer() {
   @Override
@@ -478,7 +479,8 @@ public class TestDefaultContainerExecutor {
 }
 
 // Verify that the calls happen the expected number of times
-verify(mockUtil, times(1)).copy(any(Path.class), any(Path.class));
+verify(mockUtil, times(1)).copy(any(Path.class), any(Path.class),
+anyBoolean(), anyBoolean());
 verify(mockLfs, times(2)).getFsStatus(any(Path.class));
   }
 





[10/50] [abbrv] hadoop git commit: YARN-5577. [Atsv2] Document object passing in infofilters with an example (Rohith Sharma K S via Varun Saxena)

2016-09-26 Thread jianhe
YARN-5577. [Atsv2] Document object passing in infofilters with an example 
(Rohith Sharma K S via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ea29e3bc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ea29e3bc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ea29e3bc

Branch: refs/heads/MAPREDUCE-6608
Commit: ea29e3bc27f15516f4346d1312eef703bcd3d032
Parents: 3552c2b
Author: Varun Saxena 
Authored: Mon Sep 19 14:33:06 2016 +0530
Committer: Varun Saxena 
Committed: Mon Sep 19 14:33:06 2016 +0530

--
 .../hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md| 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea29e3bc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
index b6a0da4..6b7bd08 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
@@ -712,6 +712,8 @@ none of the apps match the predicates, an empty list will 
be returned.
   "eq" means equals, "ne" means not equals and existence of key is not 
required for a match and "ene" means not equals but existence of key is
   required. We can combine any number of ANDs' and ORs' to create complex 
expressions.  Brackets can be used to club expressions together.
   _For example_ : infofilters can be "(((infokey1 eq value1) AND (infokey2 ne 
value1)) OR (infokey1 ene value3))".
+  Note : If value is an object then value can be given in the form of JSON 
format without any space.
+  _For example_ : infofilters can be (infokey1 eq 
{"key":"value","key":"value"...}).
   Please note that URL unsafe characters such as spaces will have to be 
suitably encoded.
 1. `conffilters` - If specified, matched applications must have exact matches 
to the given config name and must be either equal or not equal
   to the given config value. Both the config name and value must be strings. 
conffilters are represented in the same form as infofilters.
@@ -837,6 +839,8 @@ match the predicates, an empty list will be returned.
   "eq" means equals, "ne" means not equals and existence of key is not 
required for a match and "ene" means not equals but existence of key is
   required. We can combine any number of ANDs' and ORs' to create complex 
expressions.  Brackets can be used to club expressions together.
   _For example_ : infofilters can be "(((infokey1 eq value1) AND (infokey2 ne 
value1)) OR (infokey1 ene value3))".
+  Note : If value is an object then value can be given in the form of JSON 
format without any space.
+  _For example_ : infofilters can be (infokey1 eq 
{"key":"value","key":"value"...}).
   Please note that URL unsafe characters such as spaces will have to be 
suitably encoded.
 1. `conffilters` - If specified, matched applications must have exact matches 
to the given config name and must be either equal or not equal
   to the given config value. Both the config name and value must be strings. 
conffilters are represented in the same form as infofilters.
@@ -1035,6 +1039,8 @@ If none of the entities match the predicates, an empty 
list will be returned.
   "eq" means equals, "ne" means not equals and existence of key is not 
required for a match and "ene" means not equals but existence of key is
   required. We can combine any number of ANDs' and ORs' to create complex 
expressions.  Brackets can be used to club expressions together.
   _For example_ : infofilters can be "(((infokey1 eq value1) AND (infokey2 ne 
value1)) OR (infokey1 ene value3))".
+  Note : If value is an object then value can be given in the form of JSON 
format without any space.
+  _For example_ : infofilters can be (infokey1 eq 
{"key":"value","key":"value"...}).
   Please note that URL unsafe characters such as spaces will have to be 
suitably encoded.
 1. `conffilters` - If specified, matched entities must have exact matches to 
the given config name and must be either equal or not equal
   to the given config value. Both the config name and value must be strings. 
conffilters are represented in the same form as infofilters.
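Since an infofilters expression may contain spaces, braces, and quotes (for the JSON object values documented above), a client has to percent-encode it before placing it in the query string. A small sketch using the JDK's URLEncoder; the expression is the doc's own example, but the encoding step is an assumption about client-side usage, not part of the patch:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class InfoFilterEncode {
    public static void main(String[] args) {
        // Per the doc: a JSON object value is written without any spaces.
        String filter = "(infokey1 eq {\"key\":\"value\"})";
        // URLEncoder form-encodes: space -> '+', braces/quotes -> %XX.
        String encoded = URLEncoder.encode(filter, StandardCharsets.UTF_8);
        System.out.println(encoded);
        // %28infokey1+eq+%7B%22key%22%3A%22value%22%7D%29
    }
}
```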





[02/50] [abbrv] hadoop git commit: MAPREDUCE-6777. Typos in 4 log messages. Contributed by Mehran Hassani

2016-09-26 Thread jianhe
MAPREDUCE-6777. Typos in 4 log messages. Contributed by Mehran Hassani


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7d21c280
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7d21c280
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7d21c280

Branch: refs/heads/MAPREDUCE-6608
Commit: 7d21c280a82b2f02675bf0048f0e965d99a05ae7
Parents: ade7c2b
Author: Naganarasimha 
Authored: Sat Sep 17 10:19:59 2016 +0530
Committer: Naganarasimha 
Committed: Sat Sep 17 10:19:59 2016 +0530

--
 .../java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java | 2 +-
 .../apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java | 2 +-
 .../src/main/java/org/apache/hadoop/mapred/Task.java   | 2 +-
 .../src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d21c280/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
index 49a00c5..2c0ea2b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
@@ -260,7 +260,7 @@ public class TaskAttemptListenerImpl extends 
CompositeService
 
   @Override
   public void done(TaskAttemptID taskAttemptID) throws IOException {
-LOG.info("Done acknowledgement from " + taskAttemptID.toString());
+LOG.info("Done acknowledgment from " + taskAttemptID.toString());
 
 org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId attemptID =
 TypeConverter.toYarn(taskAttemptID);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d21c280/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
index 53d8a36..e61bbaa 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
@@ -400,7 +400,7 @@ public class JobHistoryEventHandler extends AbstractService
 }
 mi.shutDownTimer();
   } catch (IOException e) {
-LOG.info("Exception while cancelling delayed flush timer. "
+LOG.info("Exception while canceling delayed flush timer. "
 + "Likely caused by a failed flush " + e.getMessage());
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d21c280/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
index 3c58a67..d20e9bd 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
@@ -1312,7 +1312,7 @@ abstract public class Task implements Writable, 
Configurable {
 setPhase(TaskStatus.Phase.CLEANUP);
 getProgress().setStatus("cleanup");
 statusUpdate(umbilical);
-LOG.info("Runnning cleanup for the task");
+LOG.info("Running cleanup for the task");
 // do the cleanup
 

[08/50] [abbrv] hadoop git commit: HDFS-10489. Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones. Contributed by Xiao Chen.

2016-09-26 Thread jianhe
HDFS-10489. Deprecate dfs.encryption.key.provider.uri for HDFS encryption 
zones. Contributed by Xiao Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ea839bd4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ea839bd4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ea839bd4

Branch: refs/heads/MAPREDUCE-6608
Commit: ea839bd48e4478fc7b6d0a69e0eaeae2de5e0f0d
Parents: 96142ef
Author: Xiao Chen 
Authored: Sat Sep 17 22:25:39 2016 -0700
Committer: Xiao Chen 
Committed: Sat Sep 17 22:25:39 2016 -0700

--
 .../hadoop/crypto/key/KeyProviderFactory.java  |  3 ++-
 .../hadoop/fs/CommonConfigurationKeysPublic.java   |  8 
 .../src/main/resources/core-default.xml|  8 
 .../src/site/markdown/DeprecatedProperties.md  |  1 +
 .../hadoop-kms/src/site/markdown/index.md.vm   | 10 +-
 .../java/org/apache/hadoop/hdfs/DFSUtilClient.java | 13 +++--
 .../org/apache/hadoop/hdfs/HdfsConfiguration.java  |  3 +++
 .../org/apache/hadoop/hdfs/KeyProviderCache.java   |  6 +++---
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |  1 -
 .../org/apache/hadoop/test/TestHdfsHelper.java |  4 +++-
 .../hadoop/hdfs/nfs/nfs3/TestRpcProgramNfs3.java   |  4 ++--
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  2 --
 .../src/main/resources/hdfs-default.xml|  8 
 .../src/site/markdown/TransparentEncryption.md |  2 +-
 .../org/apache/hadoop/cli/TestCryptoAdminCLI.java  |  4 ++--
 .../org/apache/hadoop/hdfs/TestAclsEndToEnd.java   |  3 ++-
 .../java/org/apache/hadoop/hdfs/TestDFSUtil.java   | 12 
 .../apache/hadoop/hdfs/TestEncryptionZones.java| 17 ++---
 .../hadoop/hdfs/TestEncryptionZonesWithHA.java |  3 ++-
 .../apache/hadoop/hdfs/TestKeyProviderCache.java   | 10 +-
 .../apache/hadoop/hdfs/TestReservedRawPaths.java   |  3 ++-
 .../hdfs/TestSecureEncryptionZoneWithKMS.java  |  6 --
 .../server/namenode/TestNestedEncryptionZones.java |  4 +++-
 .../namenode/metrics/TestNameNodeMetrics.java  |  3 ++-
 24 files changed, 83 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea839bd4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderFactory.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderFactory.java
index ce99d79..b16960c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderFactory.java
@@ -29,6 +29,7 @@ import java.util.ServiceLoader;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 
 /**
  * A factory to create a list of KeyProvider based on the path given in a
@@ -39,7 +40,7 @@ import org.apache.hadoop.conf.Configuration;
 @InterfaceStability.Unstable
 public abstract class KeyProviderFactory {
   public static final String KEY_PROVIDER_PATH =
-  "hadoop.security.key.provider.path";
+  CommonConfigurationKeysPublic.HADOOP_SECURITY_KEY_PROVIDER_PATH;
 
   public abstract KeyProvider createProvider(URI providerName,
  Configuration conf

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea839bd4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index 0a3afb7..b5b107c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -628,6 +628,14 @@ public class CommonConfigurationKeysPublic {
   public static final String  HADOOP_SECURITY_IMPERSONATION_PROVIDER_CLASS =
 "hadoop.security.impersonation.provider.class";
 
+  /**
+   * @see
+   * 
+   * core-default.xml
+   */
+  public static final String HADOOP_SECURITY_KEY_PROVIDER_PATH =
+  "hadoop.security.key.provider.path";
+
   //  
   /**
* @see


svn commit: r1762278 - /hadoop/common/site/main/publish/images/hadoop-logo.jpg

2016-09-26 Thread cdouglas
Author: cdouglas
Date: Mon Sep 26 06:59:29 2016
New Revision: 1762278

URL: http://svn.apache.org/viewvc?rev=1762278&view=rev
Log:
HADOOP-13184. Add "Apache" to Hadoop project logo. Contributed by Abhishek Kumar

Modified:
hadoop/common/site/main/publish/images/hadoop-logo.jpg

Modified: hadoop/common/site/main/publish/images/hadoop-logo.jpg
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/images/hadoop-logo.jpg?rev=1762278&r1=1762277&r2=1762278&view=diff
==
Binary files - no diff available.






svn commit: r1762276 - /hadoop/logos/asf_hadoop/

2016-09-26 Thread cdouglas
Author: cdouglas
Date: Mon Sep 26 06:37:52 2016
New Revision: 1762276

URL: http://svn.apache.org/viewvc?rev=1762276&view=rev
Log:
HADOOP-13184. Add "Apache" to Hadoop project logo

Added:
hadoop/logos/asf_hadoop/
hadoop/logos/asf_hadoop/hadoop-logo-new.pdf
hadoop/logos/asf_hadoop/hadoop-logo-new.psd   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-new.tif   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-no-back-1000.png   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-no-back-500.png   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-no-back-5000.png   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-no-back-8000.png   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-white-back-1000.jpg   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-white-back-1.jpg   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-white-back-500.jpg   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-white-back-5000.jpg   (with props)
hadoop/logos/asf_hadoop/hadoop-logo-white-back-8000.jpg   (with props)

Added: hadoop/logos/asf_hadoop/hadoop-logo-new.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-new.pdf?rev=1762276&view=auto
==
Binary files hadoop/logos/asf_hadoop/hadoop-logo-new.pdf (added) and 
hadoop/logos/asf_hadoop/hadoop-logo-new.pdf Mon Sep 26 06:37:52 2016 differ

Added: hadoop/logos/asf_hadoop/hadoop-logo-new.psd
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-new.psd?rev=1762276&view=auto
==
Binary file - no diff available.

Propchange: hadoop/logos/asf_hadoop/hadoop-logo-new.psd
--
svn:mime-type = application/octet-stream

Added: hadoop/logos/asf_hadoop/hadoop-logo-new.tif
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-new.tif?rev=1762276&view=auto
==
Binary file - no diff available.

Propchange: hadoop/logos/asf_hadoop/hadoop-logo-new.tif
--
svn:mime-type = application/octet-stream

Added: hadoop/logos/asf_hadoop/hadoop-logo-no-back-1000.png
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-no-back-1000.png?rev=1762276&view=auto
==
Binary file - no diff available.

Propchange: hadoop/logos/asf_hadoop/hadoop-logo-no-back-1000.png
--
svn:mime-type = application/octet-stream

Added: hadoop/logos/asf_hadoop/hadoop-logo-no-back-500.png
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-no-back-500.png?rev=1762276&view=auto
==
Binary file - no diff available.

Propchange: hadoop/logos/asf_hadoop/hadoop-logo-no-back-500.png
--
svn:mime-type = application/octet-stream

Added: hadoop/logos/asf_hadoop/hadoop-logo-no-back-5000.png
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-no-back-5000.png?rev=1762276&view=auto
==
Binary file - no diff available.

Propchange: hadoop/logos/asf_hadoop/hadoop-logo-no-back-5000.png
--
svn:mime-type = application/octet-stream

Added: hadoop/logos/asf_hadoop/hadoop-logo-no-back-8000.png
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-no-back-8000.png?rev=1762276&view=auto
==
Binary file - no diff available.

Propchange: hadoop/logos/asf_hadoop/hadoop-logo-no-back-8000.png
--
svn:mime-type = application/octet-stream

Added: hadoop/logos/asf_hadoop/hadoop-logo-white-back-1000.jpg
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-white-back-1000.jpg?rev=1762276&view=auto
==
Binary file - no diff available.

Propchange: hadoop/logos/asf_hadoop/hadoop-logo-white-back-1000.jpg
--
svn:mime-type = application/octet-stream

Added: hadoop/logos/asf_hadoop/hadoop-logo-white-back-1.jpg
URL: 
http://svn.apache.org/viewvc/hadoop/logos/asf_hadoop/hadoop-logo-white-back-1.jpg?rev=1762276&view=auto
==
Binary file - no diff available.

Propchange: 

svn commit: r1762274 - /hadoop/logos/posters/

2016-09-26 Thread cdouglas
Author: cdouglas
Date: Mon Sep 26 06:31:32 2016
New Revision: 1762274

URL: http://svn.apache.org/viewvc?rev=1762274&view=rev
Log:
Remove build break poster

Removed:
hadoop/logos/posters/





hadoop git commit: YARN-5663. Small refactor in ZKRMStateStore. Contributed by Oleksii Dymytrov.

2016-09-26 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 4c2b20ca3 -> ece3ca0cb


YARN-5663. Small refactor in ZKRMStateStore. Contributed by Oleksii Dymytrov.

(cherry picked from commit 14a696f369f7e3802587f57c8fff3aa51b5ab576)
(cherry picked from commit 74f2df16a919711cbd10464d11851110c1728edf)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ece3ca0c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ece3ca0c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ece3ca0c

Branch: refs/heads/branch-2.8
Commit: ece3ca0cba3eb1faa77dea104b56e6d561f1b72c
Parents: 4c2b20c
Author: Akira Ajisaka 
Authored: Mon Sep 26 15:00:01 2016 +0900
Committer: Akira Ajisaka 
Committed: Mon Sep 26 15:01:39 2016 +0900

--
 .../server/resourcemanager/recovery/ZKRMStateStore.java | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ece3ca0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index 0f3ebe6..efd6f15 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -761,17 +761,14 @@ public class ZKRMStateStore extends RMStateStore {
 String nodeCreatePath =
 getNodePath(dtMasterKeysRootPath, DELEGATION_KEY_PREFIX
 + delegationKey.getKeyId());
-ByteArrayOutputStream os = new ByteArrayOutputStream();
-DataOutputStream fsOut = new DataOutputStream(os);
 if (LOG.isDebugEnabled()) {
   LOG.debug("Storing RMDelegationKey_" + delegationKey.getKeyId());
 }
-delegationKey.write(fsOut);
-try {
+ByteArrayOutputStream os = new ByteArrayOutputStream();
+try(DataOutputStream fsOut = new DataOutputStream(os)) {
+  delegationKey.write(fsOut);
   safeCreate(nodeCreatePath, os.toByteArray(), zkAcl,
   CreateMode.PERSISTENT);
-} finally {
-  os.close();
 }
   }
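
The hunk above swaps a manual try/finally close for try-with-resources: the DataOutputStream is opened in the try header and closed automatically, which also flushes it before os.toByteArray() is read. A minimal standalone sketch of the same pattern (hypothetical serializeKeyId helper, not the actual RM code) might look like:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class KeySerDemo {

    // Serialize a key id the way the refactored code does: the
    // DataOutputStream is closed (and flushed) automatically when the
    // try block exits; ByteArrayOutputStream needs no explicit close.
    static byte[] serializeKeyId(int keyId) throws IOException {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        try (DataOutputStream fsOut = new DataOutputStream(os)) {
            fsOut.writeInt(keyId);
        }
        return os.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = serializeKeyId(42);
        // DataOutputStream.writeInt always emits exactly 4 bytes, big-endian
        System.out.println(bytes.length);
    }
}
```

Closing the wrapper stream in the try header is what makes the old explicit `os.close()` in the finally block unnecessary, since closing the DataOutputStream closes the underlying stream as well.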
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-5663. Small refactor in ZKRMStateStore. Contributed by Oleksii Dymytrov.

2016-09-26 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5707f88d8 -> 14a696f36


YARN-5663. Small refactor in ZKRMStateStore. Contributed by Oleksii Dymytrov.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/14a696f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/14a696f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/14a696f3

Branch: refs/heads/trunk
Commit: 14a696f369f7e3802587f57c8fff3aa51b5ab576
Parents: 5707f88
Author: Akira Ajisaka 
Authored: Mon Sep 26 15:00:01 2016 +0900
Committer: Akira Ajisaka 
Committed: Mon Sep 26 15:00:01 2016 +0900

--
 .../server/resourcemanager/recovery/ZKRMStateStore.java | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/14a696f3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index 9e05f6d..c24b3e9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -748,17 +748,14 @@ public class ZKRMStateStore extends RMStateStore {
 String nodeCreatePath =
 getNodePath(dtMasterKeysRootPath, DELEGATION_KEY_PREFIX
 + delegationKey.getKeyId());
-ByteArrayOutputStream os = new ByteArrayOutputStream();
-DataOutputStream fsOut = new DataOutputStream(os);
 if (LOG.isDebugEnabled()) {
   LOG.debug("Storing RMDelegationKey_" + delegationKey.getKeyId());
 }
-delegationKey.write(fsOut);
-try {
+ByteArrayOutputStream os = new ByteArrayOutputStream();
+try(DataOutputStream fsOut = new DataOutputStream(os)) {
+  delegationKey.write(fsOut);
   safeCreate(nodeCreatePath, os.toByteArray(), zkAcl,
   CreateMode.PERSISTENT);
-} finally {
-  os.close();
 }
   }
 





hadoop git commit: YARN-5663. Small refactor in ZKRMStateStore. Contributed by Oleksii Dymytrov.

2016-09-26 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 eec0afa09 -> 74f2df16a


YARN-5663. Small refactor in ZKRMStateStore. Contributed by Oleksii Dymytrov.

(cherry picked from commit 14a696f369f7e3802587f57c8fff3aa51b5ab576)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/74f2df16
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/74f2df16
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/74f2df16

Branch: refs/heads/branch-2
Commit: 74f2df16a919711cbd10464d11851110c1728edf
Parents: eec0afa
Author: Akira Ajisaka 
Authored: Mon Sep 26 15:00:01 2016 +0900
Committer: Akira Ajisaka 
Committed: Mon Sep 26 15:01:13 2016 +0900

--
 .../server/resourcemanager/recovery/ZKRMStateStore.java | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/74f2df16/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index 9e05f6d..c24b3e9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -748,17 +748,14 @@ public class ZKRMStateStore extends RMStateStore {
 String nodeCreatePath =
 getNodePath(dtMasterKeysRootPath, DELEGATION_KEY_PREFIX
 + delegationKey.getKeyId());
-ByteArrayOutputStream os = new ByteArrayOutputStream();
-DataOutputStream fsOut = new DataOutputStream(os);
 if (LOG.isDebugEnabled()) {
   LOG.debug("Storing RMDelegationKey_" + delegationKey.getKeyId());
 }
-delegationKey.write(fsOut);
-try {
+ByteArrayOutputStream os = new ByteArrayOutputStream();
+try(DataOutputStream fsOut = new DataOutputStream(os)) {
+  delegationKey.write(fsOut);
   safeCreate(nodeCreatePath, os.toByteArray(), zkAcl,
   CreateMode.PERSISTENT);
-} finally {
-  os.close();
 }
   }
 

