hadoop git commit: YARN-8338. TimelineService V1.5 doesn't come up after HADOOP-15406. Contributed by Vinod Kumar Vavilapalli

2018-05-29 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 438ef4951 -> 31ab960f4


YARN-8338. TimelineService V1.5 doesn't come up after HADOOP-15406. Contributed 
by Vinod Kumar Vavilapalli


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/31ab960f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/31ab960f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/31ab960f

Branch: refs/heads/trunk
Commit: 31ab960f4f931df273481927b897388895d803ba
Parents: 438ef49
Author: Jason Lowe 
Authored: Tue May 29 11:00:30 2018 -0500
Committer: Jason Lowe 
Committed: Tue May 29 11:00:30 2018 -0500

--
 hadoop-project/pom.xml  | 5 +++++
 .../hadoop-yarn-server-applicationhistoryservice/pom.xml| 5 +++++
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/31ab960f/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 73c3f5b..59a9bd2 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1144,6 +1144,11 @@
         <version>1.8.5</version>
       </dependency>
       <dependency>
+        <groupId>org.objenesis</groupId>
+        <artifactId>objenesis</artifactId>
+        <version>1.0</version>
+      </dependency>
+      <dependency>
         <groupId>org.mock-server</groupId>
         <artifactId>mockserver-netty</artifactId>
         <version>3.9.2</version>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/31ab960f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
index f310518..0527095 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
@@ -155,6 +155,11 @@
       <artifactId>leveldbjni-all</artifactId>
     </dependency>
 
+    <dependency>
+      <groupId>org.objenesis</groupId>
+      <artifactId>objenesis</artifactId>
+    </dependency>
+
     <dependency>
       <groupId>org.apache.hadoop</groupId>
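
For readers not steeped in Maven: the fix uses the standard dependencyManagement pattern. The parent pom pins the objenesis version once, and the module pom then declares the dependency without a version, inheriting the pinned one. A minimal sketch of the two halves, with coordinates taken from the diff above and the surrounding pom structure abbreviated:

```xml
<!-- hadoop-project/pom.xml: pin the version once for all modules -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.objenesis</groupId>
      <artifactId>objenesis</artifactId>
      <version>1.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- a module pom (e.g. the applicationhistoryservice pom below): no version,
     so the managed 1.0 is used -->
<dependencies>
  <dependency>
    <groupId>org.objenesis</groupId>
    <artifactId>objenesis</artifactId>
  </dependency>
</dependencies>
```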


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-8338. TimelineService V1.5 doesn't come up after HADOOP-15406. Contributed by Vinod Kumar Vavilapalli

2018-05-29 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 6918d9e9c -> 500b0ee2c


YARN-8338. TimelineService V1.5 doesn't come up after HADOOP-15406. Contributed 
by Vinod Kumar Vavilapalli

(cherry picked from commit 31ab960f4f931df273481927b897388895d803ba)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/500b0ee2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/500b0ee2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/500b0ee2

Branch: refs/heads/branch-3.1
Commit: 500b0ee2cef5f645e9cd55516df48d3007e3986a
Parents: 6918d9e
Author: Jason Lowe 
Authored: Tue May 29 11:00:30 2018 -0500
Committer: Jason Lowe 
Committed: Tue May 29 11:03:19 2018 -0500

--
 hadoop-project/pom.xml  | 5 +++++
 .../hadoop-yarn-server-applicationhistoryservice/pom.xml| 5 +++++
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/500b0ee2/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 6caf26f..702a7d3 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -993,6 +993,11 @@
         <version>1.8.5</version>
       </dependency>
       <dependency>
+        <groupId>org.objenesis</groupId>
+        <artifactId>objenesis</artifactId>
+        <version>1.0</version>
+      </dependency>
+      <dependency>
         <groupId>org.mock-server</groupId>
         <artifactId>mockserver-netty</artifactId>
         <version>3.9.2</version>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/500b0ee2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
index 3cab4c6..13a373a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
@@ -155,6 +155,11 @@
       <artifactId>leveldbjni-all</artifactId>
     </dependency>
 
+    <dependency>
+      <groupId>org.objenesis</groupId>
+      <artifactId>objenesis</artifactId>
+    </dependency>
+
     <dependency>
       <groupId>org.apache.hadoop</groupId>





[5/5] hadoop git commit: HADOOP-15497. TestTrash should use proper test path to avoid failing on Windows. Contributed by Anbang Hu.

2018-05-29 Thread inigoiri
HADOOP-15497. TestTrash should use proper test path to avoid failing on 
Windows. Contributed by Anbang Hu.

(cherry picked from commit 3c75f8e4933221fa60a87e86a3db5e4727530b6f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c3dce262
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c3dce262
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c3dce262

Branch: refs/heads/branch-2.9
Commit: c3dce2620e111ce7b4552b570406bb9d18e7acc9
Parents: f8c03a8
Author: Inigo Goiri 
Authored: Tue May 29 09:11:08 2018 -0700
Committer: Inigo Goiri 
Committed: Tue May 29 09:13:02 2018 -0700

--
 .../src/test/java/org/apache/hadoop/fs/TestTrash.java | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c3dce262/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
index 7a5b25e..1a6d580 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
@@ -48,9 +48,11 @@ import org.junit.Test;
  */
 public class TestTrash extends TestCase {
 
-  private final static Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+  private final static File BASE_PATH = new File(GenericTestUtils.getTempPath(
   "testTrash"));
 
+  private final static Path TEST_DIR = new Path(BASE_PATH.getAbsolutePath());
+
   @Before
   public void setUp() throws IOException {
 // ensure each test initiates a FileSystem instance,
@@ -680,7 +682,7 @@ public class TestTrash extends TestCase {
   static class TestLFS extends LocalFileSystem {
 Path home;
 TestLFS() {
-  this(new Path(TEST_DIR, "user/test"));
+  this(TEST_DIR);
 }
 TestLFS(final Path home) {
   super(new RawLocalFileSystem() {
@@ -807,8 +809,8 @@ public class TestTrash extends TestCase {
*/
   public static void verifyTrashPermission(FileSystem fs, Configuration conf)
   throws IOException {
-Path caseRoot = new Path(
-GenericTestUtils.getTempPath("testTrashPermission"));
+Path caseRoot = new Path(BASE_PATH.getPath(),
+"testTrashPermission");
 try (FileSystem fileSystem = fs){
   Trash trash = new Trash(fileSystem, conf);
   FileSystemTestWrapper wrapper =
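
Context for the change above: on Windows, GenericTestUtils.getTempPath returns an absolute path carrying a drive letter (e.g. C:/...), and Hadoop's Path(String) constructor parses its argument as a URI, so the drive letter can be mistaken for a URI scheme. Anchoring the tests on a java.io.File base and joining relative names under it, as the patch does, avoids that ambiguity. A stdlib-only illustration of the underlying parsing problem; the schemeOf helper is mine for demonstration, not part of the patch:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class DriveLetterDemo {
    // Returns the URI scheme a string parses to, or null if it has none.
    // Illustrative helper only; Hadoop's Path does its own URI handling.
    static String schemeOf(String s) {
        try {
            return new URI(s).getScheme();
        } catch (URISyntaxException e) {
            return "<invalid>";
        }
    }

    public static void main(String[] args) {
        // A Windows-style temp path: the drive letter parses as a scheme.
        System.out.println(schemeOf("C:/temp/testTrash")); // prints "C"
        // A Unix-style temp path has no such ambiguity.
        System.out.println(schemeOf("/tmp/testTrash"));    // prints "null"
    }
}
```

This is why a test path that works fine on Linux can be misinterpreted as a `C:` filesystem on Windows.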





[1/5] hadoop git commit: HADOOP-15497. TestTrash should use proper test path to avoid failing on Windows. Contributed by Anbang Hu.

2018-05-29 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 0fc8b43dc -> 09fbbff69
  refs/heads/branch-2.9 f8c03a808 -> c3dce2620
  refs/heads/branch-3.0 595b44e2d -> 1f594f31d
  refs/heads/branch-3.1 500b0ee2c -> 1dd9670dd
  refs/heads/trunk 31ab960f4 -> 3c75f8e49


HADOOP-15497. TestTrash should use proper test path to avoid failing on 
Windows. Contributed by Anbang Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c75f8e4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c75f8e4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c75f8e4

Branch: refs/heads/trunk
Commit: 3c75f8e4933221fa60a87e86a3db5e4727530b6f
Parents: 31ab960
Author: Inigo Goiri 
Authored: Tue May 29 09:11:08 2018 -0700
Committer: Inigo Goiri 
Committed: Tue May 29 09:11:08 2018 -0700

--
 .../src/test/java/org/apache/hadoop/fs/TestTrash.java | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c75f8e4/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
index 12aed29..fa2d21f 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
@@ -49,9 +49,11 @@ import org.apache.hadoop.util.Time;
  */
 public class TestTrash {
 
-  private final static Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+  private final static File BASE_PATH = new File(GenericTestUtils.getTempPath(
   "testTrash"));
 
+  private final static Path TEST_DIR = new Path(BASE_PATH.getAbsolutePath());
+
   @Before
   public void setUp() throws IOException {
 // ensure each test initiates a FileSystem instance,
@@ -682,7 +684,7 @@ public class TestTrash {
   static class TestLFS extends LocalFileSystem {
 Path home;
 TestLFS() {
-  this(new Path(TEST_DIR, "user/test"));
+  this(TEST_DIR);
 }
 TestLFS(final Path home) {
   super(new RawLocalFileSystem() {
@@ -809,8 +811,8 @@ public class TestTrash {
*/
   public static void verifyTrashPermission(FileSystem fs, Configuration conf)
   throws IOException {
-Path caseRoot = new Path(
-GenericTestUtils.getTempPath("testTrashPermission"));
+Path caseRoot = new Path(BASE_PATH.getPath(),
+"testTrashPermission");
 try (FileSystem fileSystem = fs){
   Trash trash = new Trash(fileSystem, conf);
   FileSystemTestWrapper wrapper =





[3/5] hadoop git commit: HADOOP-15497. TestTrash should use proper test path to avoid failing on Windows. Contributed by Anbang Hu.

2018-05-29 Thread inigoiri
HADOOP-15497. TestTrash should use proper test path to avoid failing on 
Windows. Contributed by Anbang Hu.

(cherry picked from commit 3c75f8e4933221fa60a87e86a3db5e4727530b6f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1f594f31
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1f594f31
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1f594f31

Branch: refs/heads/branch-3.0
Commit: 1f594f31dd11609bfa673848509c5443aabfd958
Parents: 595b44e
Author: Inigo Goiri 
Authored: Tue May 29 09:11:08 2018 -0700
Committer: Inigo Goiri 
Committed: Tue May 29 09:12:08 2018 -0700

--
 .../src/test/java/org/apache/hadoop/fs/TestTrash.java | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f594f31/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
index 12aed29..fa2d21f 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
@@ -49,9 +49,11 @@ import org.apache.hadoop.util.Time;
  */
 public class TestTrash {
 
-  private final static Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+  private final static File BASE_PATH = new File(GenericTestUtils.getTempPath(
   "testTrash"));
 
+  private final static Path TEST_DIR = new Path(BASE_PATH.getAbsolutePath());
+
   @Before
   public void setUp() throws IOException {
 // ensure each test initiates a FileSystem instance,
@@ -682,7 +684,7 @@ public class TestTrash {
   static class TestLFS extends LocalFileSystem {
 Path home;
 TestLFS() {
-  this(new Path(TEST_DIR, "user/test"));
+  this(TEST_DIR);
 }
 TestLFS(final Path home) {
   super(new RawLocalFileSystem() {
@@ -809,8 +811,8 @@ public class TestTrash {
*/
   public static void verifyTrashPermission(FileSystem fs, Configuration conf)
   throws IOException {
-Path caseRoot = new Path(
-GenericTestUtils.getTempPath("testTrashPermission"));
+Path caseRoot = new Path(BASE_PATH.getPath(),
+"testTrashPermission");
 try (FileSystem fileSystem = fs){
   Trash trash = new Trash(fileSystem, conf);
   FileSystemTestWrapper wrapper =





[4/5] hadoop git commit: HADOOP-15497. TestTrash should use proper test path to avoid failing on Windows. Contributed by Anbang Hu.

2018-05-29 Thread inigoiri
HADOOP-15497. TestTrash should use proper test path to avoid failing on 
Windows. Contributed by Anbang Hu.

(cherry picked from commit 3c75f8e4933221fa60a87e86a3db5e4727530b6f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/09fbbff6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/09fbbff6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/09fbbff6

Branch: refs/heads/branch-2
Commit: 09fbbff695f511df7fa82a244483816d037c4898
Parents: 0fc8b43
Author: Inigo Goiri 
Authored: Tue May 29 09:11:08 2018 -0700
Committer: Inigo Goiri 
Committed: Tue May 29 09:12:47 2018 -0700

--
 .../src/test/java/org/apache/hadoop/fs/TestTrash.java | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/09fbbff6/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
index 7a5b25e..1a6d580 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
@@ -48,9 +48,11 @@ import org.junit.Test;
  */
 public class TestTrash extends TestCase {
 
-  private final static Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+  private final static File BASE_PATH = new File(GenericTestUtils.getTempPath(
   "testTrash"));
 
+  private final static Path TEST_DIR = new Path(BASE_PATH.getAbsolutePath());
+
   @Before
   public void setUp() throws IOException {
 // ensure each test initiates a FileSystem instance,
@@ -680,7 +682,7 @@ public class TestTrash extends TestCase {
   static class TestLFS extends LocalFileSystem {
 Path home;
 TestLFS() {
-  this(new Path(TEST_DIR, "user/test"));
+  this(TEST_DIR);
 }
 TestLFS(final Path home) {
   super(new RawLocalFileSystem() {
@@ -807,8 +809,8 @@ public class TestTrash extends TestCase {
*/
   public static void verifyTrashPermission(FileSystem fs, Configuration conf)
   throws IOException {
-Path caseRoot = new Path(
-GenericTestUtils.getTempPath("testTrashPermission"));
+Path caseRoot = new Path(BASE_PATH.getPath(),
+"testTrashPermission");
 try (FileSystem fileSystem = fs){
   Trash trash = new Trash(fileSystem, conf);
   FileSystemTestWrapper wrapper =





[2/5] hadoop git commit: HADOOP-15497. TestTrash should use proper test path to avoid failing on Windows. Contributed by Anbang Hu.

2018-05-29 Thread inigoiri
HADOOP-15497. TestTrash should use proper test path to avoid failing on 
Windows. Contributed by Anbang Hu.

(cherry picked from commit 3c75f8e4933221fa60a87e86a3db5e4727530b6f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1dd9670d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1dd9670d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1dd9670d

Branch: refs/heads/branch-3.1
Commit: 1dd9670ddd8767f3860c6662d860181aacacf744
Parents: 500b0ee
Author: Inigo Goiri 
Authored: Tue May 29 09:11:08 2018 -0700
Committer: Inigo Goiri 
Committed: Tue May 29 09:11:38 2018 -0700

--
 .../src/test/java/org/apache/hadoop/fs/TestTrash.java | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1dd9670d/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
index 12aed29..fa2d21f 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
@@ -49,9 +49,11 @@ import org.apache.hadoop.util.Time;
  */
 public class TestTrash {
 
-  private final static Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+  private final static File BASE_PATH = new File(GenericTestUtils.getTempPath(
   "testTrash"));
 
+  private final static Path TEST_DIR = new Path(BASE_PATH.getAbsolutePath());
+
   @Before
   public void setUp() throws IOException {
 // ensure each test initiates a FileSystem instance,
@@ -682,7 +684,7 @@ public class TestTrash {
   static class TestLFS extends LocalFileSystem {
 Path home;
 TestLFS() {
-  this(new Path(TEST_DIR, "user/test"));
+  this(TEST_DIR);
 }
 TestLFS(final Path home) {
   super(new RawLocalFileSystem() {
@@ -809,8 +811,8 @@ public class TestTrash {
*/
   public static void verifyTrashPermission(FileSystem fs, Configuration conf)
   throws IOException {
-Path caseRoot = new Path(
-GenericTestUtils.getTempPath("testTrashPermission"));
+Path caseRoot = new Path(BASE_PATH.getPath(),
+"testTrashPermission");
 try (FileSystem fileSystem = fs){
   Trash trash = new Trash(fileSystem, conf);
   FileSystemTestWrapper wrapper =





hadoop git commit: YARN-8338. TimelineService V1.5 doesn't come up after HADOOP-15406. Contributed by Vinod Kumar Vavilapalli

2018-05-29 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 1f594f31d -> d5708bbcd


YARN-8338. TimelineService V1.5 doesn't come up after HADOOP-15406. Contributed 
by Vinod Kumar Vavilapalli

(cherry picked from commit 31ab960f4f931df273481927b897388895d803ba)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d5708bbc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d5708bbc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d5708bbc

Branch: refs/heads/branch-3.0
Commit: d5708bbcdc86a18cb43f2243629d69dcdae702de
Parents: 1f594f3
Author: Jason Lowe 
Authored: Tue May 29 11:00:30 2018 -0500
Committer: Jason Lowe 
Committed: Tue May 29 11:15:07 2018 -0500

--
 hadoop-project/pom.xml  | 5 +++++
 .../hadoop-yarn-server-applicationhistoryservice/pom.xml| 5 +++++
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d5708bbc/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index b69bba3..e697f4d 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -979,6 +979,11 @@
         <version>1.8.5</version>
       </dependency>
       <dependency>
+        <groupId>org.objenesis</groupId>
+        <artifactId>objenesis</artifactId>
+        <version>1.0</version>
+      </dependency>
+      <dependency>
         <groupId>org.mock-server</groupId>
         <artifactId>mockserver-netty</artifactId>
         <version>3.9.2</version>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d5708bbc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
index ddba171..9bdac13 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
@@ -155,6 +155,11 @@
       <artifactId>leveldbjni-all</artifactId>
     </dependency>
 
+    <dependency>
+      <groupId>org.objenesis</groupId>
+      <artifactId>objenesis</artifactId>
+    </dependency>
+
     <dependency>
       <groupId>org.apache.hadoop</groupId>





[1/2] hadoop git commit: YARN-8339. Service AM should localize static/archive resource types to container working directory instead of 'resources'. (Suma Shivaprasad via wangda)

2018-05-29 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/trunk 3c75f8e49 -> 17aa40f66


YARN-8339. Service AM should localize static/archive resource types to 
container working directory instead of 'resources'. (Suma Shivaprasad via 
wangda)

Change-Id: I9f8e8f621650347f6c2f9e3420edee9eb2f356a4


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3061bfcd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3061bfcd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3061bfcd

Branch: refs/heads/trunk
Commit: 3061bfcde53210d2032df3814243498b27a997b7
Parents: 3c75f8e
Author: Wangda Tan 
Authored: Tue May 29 09:23:11 2018 -0700
Committer: Wangda Tan 
Committed: Tue May 29 09:23:11 2018 -0700

--
 .../org/apache/hadoop/yarn/service/provider/ProviderUtils.java | 3 +--
 .../apache/hadoop/yarn/service/provider/TestProviderUtils.java | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3061bfcd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
index 1ad5fd8..ac90992 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
@@ -298,8 +298,7 @@ public class ProviderUtils implements YarnServiceConstants {
 destFile = new Path(staticFile.getDestFile());
   }
 
-  String symlink = APP_RESOURCES_DIR + "/" + destFile.getName();
-  addLocalResource(launcher, symlink, localResource, destFile);
+  addLocalResource(launcher, destFile.getName(), localResource, destFile);
 }
   }
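
The behavioral change is easiest to see in the symlink names: before this patch each static resource was linked under a resources/ subdirectory of the container working directory, and afterwards it is linked directly into the working directory, mirroring the diff's drop of the APP_RESOURCES_DIR prefix. A small sketch of that naming logic; the helper names and class are mine, for illustration only:

```java
public class LinkNameDemo {
    // Strip the parent directories, mirroring Path#getName() in the diff.
    static String fileName(String destFile) {
        int i = destFile.lastIndexOf('/');
        return i < 0 ? destFile : destFile.substring(i + 1);
    }

    // Before YARN-8339: link placed under a "resources/" subdirectory.
    static String linkNameBefore(String destFile) {
        return "resources/" + fileName(destFile);
    }

    // After YARN-8339: link placed in the container working directory itself.
    static String linkNameAfter(String destFile) {
        return fileName(destFile);
    }

    public static void main(String[] args) {
        System.out.println(linkNameBefore("/app/dir/destFile1")); // resources/destFile1
        System.out.println(linkNameAfter("/app/dir/destFile1"));  // destFile1
    }
}
```

The test expectations updated below ("resources/destFile1" becoming "destFile1", and so on) follow directly from this change.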
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3061bfcd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
index 6e8bc43..5d794d2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
@@ -154,11 +154,11 @@ public class TestProviderUtils {
 
 ProviderUtils.handleStaticFilesForLocalization(launcher, sfs,
 compLaunchCtx);
-Mockito.verify(launcher).addLocalResource(Mockito.eq("resources/destFile1"),
+Mockito.verify(launcher).addLocalResource(Mockito.eq("destFile1"),
 any(LocalResource.class));
 Mockito.verify(launcher).addLocalResource(
-Mockito.eq("resources/destFile_2"), any(LocalResource.class));
+Mockito.eq("destFile_2"), any(LocalResource.class));
 Mockito.verify(launcher).addLocalResource(
-Mockito.eq("resources/sourceFile4"), any(LocalResource.class));
+Mockito.eq("sourceFile4"), any(LocalResource.class));
   }
 }





[2/2] hadoop git commit: YARN-8369. Javadoc build failed due to 'bad use of >'. (Takanobu Asanuma via wangda)

2018-05-29 Thread wangda
YARN-8369. Javadoc build failed due to 'bad use of >'. (Takanobu Asanuma via 
wangda)

Change-Id: I79a42154e8f86ab1c3cc939b3745024b8eebe5f4


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/17aa40f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/17aa40f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/17aa40f6

Branch: refs/heads/trunk
Commit: 17aa40f669f197d43387d67dc00040d14cd00948
Parents: 3061bfc
Author: Wangda Tan 
Authored: Tue May 29 09:27:36 2018 -0700
Committer: Wangda Tan 
Committed: Tue May 29 09:27:36 2018 -0700

--
 .../apache/hadoop/yarn/util/resource/ResourceCalculator.java | 4 ++--
 .../monitor/capacity/CapacitySchedulerPreemptionUtils.java   | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/17aa40f6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
index 51078cd..27394f7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
@@ -260,10 +260,10 @@ public abstract class ResourceCalculator {
 
   /**
* Check if resource has any major resource types (which are all NodeManagers
-   * included) has a >0 value.
+   * included) has a {@literal >} 0 value.
*
* @param resource resource
-   * @return returns true if any resource is >0
+   * @return returns true if any resource is {@literal >} 0
*/
   public abstract boolean isAnyMajorResourceAboveZero(Resource resource);
 }
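
For reference, the escaping technique this patch applies throughout: the javadoc tool's strict HTML checking (doclint) can reject raw "<" and ">" characters in doc comments with errors like "bad use of '>'", while {@literal ...} emits the enclosed text verbatim instead of having it parsed as HTML. A minimal compilable sketch; the class and method names are mine, not from the patch:

```java
public class LiteralDemo {
    /**
     * Checks positivity. Writing a bare "greater than" sign in Javadoc can
     * trigger a "bad use of '>'" doclint error during the javadoc build;
     * {@literal >} renders the character without HTML interpretation.
     *
     * @param value the value to test
     * @return true if value is {@literal >} 0
     */
    static boolean isPositive(int value) {
        return value > 0;
    }

    public static void main(String[] args) {
        System.out.println(isPositive(3));  // prints "true"
        System.out.println(isPositive(-1)); // prints "false"
    }
}
```

{@code ...} behaves similarly but additionally renders in a monospace font, which is why {@literal} is the lighter-weight choice for a lone comparison sign.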

http://git-wip-us.apache.org/repos/asf/hadoop/blob/17aa40f6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
index 5396d61..690eb02 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
@@ -136,12 +136,12 @@ public class CapacitySchedulerPreemptionUtils {
* @param conservativeDRF
*  should we do conservativeDRF preemption or not.
*  When true:
-   *stop preempt container when any major resource type <= 0 for to-
-   *preempt.
+   *stop preempt container when any major resource type
+   *{@literal <=} 0 for to-preempt.
*This is default preemption behavior of intra-queue preemption
*  When false:
-   *stop preempt container when: all major resource type <= 0 for
-   *to-preempt.
+   *stop preempt container when: all major resource type
+   *{@literal <=} 0 for to-preempt.
*This is default preemption behavior of inter-queue preemption
* @return should we preempt rmContainer. If we should, deduct from
* resourceToObtainByPartition





[2/2] hadoop git commit: YARN-8369. Javadoc build failed due to 'bad use of >'. (Takanobu Asanuma via wangda)

2018-05-29 Thread wangda
YARN-8369. Javadoc build failed due to 'bad use of >'. (Takanobu Asanuma via 
wangda)

Change-Id: I79a42154e8f86ab1c3cc939b3745024b8eebe5f4
(cherry picked from commit 17aa40f669f197d43387d67dc00040d14cd00948)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3eb1cb18
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3eb1cb18
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3eb1cb18

Branch: refs/heads/branch-3.1
Commit: 3eb1cb18c716dd7b131b4ceb4b7af3892b83187d
Parents: b262ea1
Author: Wangda Tan 
Authored: Tue May 29 09:27:36 2018 -0700
Committer: Wangda Tan 
Committed: Tue May 29 09:28:34 2018 -0700

--
 .../apache/hadoop/yarn/util/resource/ResourceCalculator.java | 4 ++--
 .../monitor/capacity/CapacitySchedulerPreemptionUtils.java   | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3eb1cb18/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
index 51078cd..27394f7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
@@ -260,10 +260,10 @@ public abstract class ResourceCalculator {
 
   /**
* Check if resource has any major resource types (which are all NodeManagers
-   * included) has a >0 value.
+   * included) has a {@literal >} 0 value.
*
* @param resource resource
-   * @return returns true if any resource is >0
+   * @return returns true if any resource is {@literal >} 0
*/
   public abstract boolean isAnyMajorResourceAboveZero(Resource resource);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3eb1cb18/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
index 5396d61..690eb02 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
@@ -136,12 +136,12 @@ public class CapacitySchedulerPreemptionUtils {
* @param conservativeDRF
*  should we do conservativeDRF preemption or not.
*  When true:
-   *stop preempt container when any major resource type <= 0 for to-
-   *preempt.
+   *stop preempt container when any major resource type
+   *{@literal <=} 0 for to-preempt.
*This is default preemption behavior of intra-queue preemption
*  When false:
-   *stop preempt container when: all major resource type <= 0 for
-   *to-preempt.
+   *stop preempt container when: all major resource type
+   *{@literal <=} 0 for to-preempt.
*This is default preemption behavior of inter-queue preemption
* @return should we preempt rmContainer. If we should, deduct from
* resourceToObtainByPartition
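The patch replaces bare ">" and "<=" in Javadoc with the {@literal} inline tag, which renders the character without doclint parsing it as HTML (the "bad use of >" error). A minimal sketch of the pattern; the class and method names below are illustrative stand-ins, not Hadoop's ResourceCalculator API:

```java
/**
 * Illustrative stand-in for the documented check: returns true if any
 * component of a resource vector is {@literal >} 0. Writing a raw ">"
 * here would fail doclint; {@literal >} renders the same text safely.
 */
public class LiteralTagDemo {

    /**
     * @param components resource components (e.g. memory, vcores)
     * @return true if any component is {@literal >} 0
     */
    public static boolean isAnyAboveZero(long[] components) {
        for (long c : components) {
            if (c > 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isAnyAboveZero(new long[] {0, 2})); // true
        System.out.println(isAnyAboveZero(new long[] {0, 0})); // false
    }
}
```

An alternative with the same effect is {@code >}, which additionally renders in code font.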


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[1/2] hadoop git commit: YARN-8339. Service AM should localize static/archive resource types to container working directory instead of 'resources'. (Suma Shivaprasad via wangda)

2018-05-29 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 1dd9670dd -> 3eb1cb18c


YARN-8339. Service AM should localize static/archive resource types to container working directory instead of 'resources'. (Suma Shivaprasad via wangda)

Change-Id: I9f8e8f621650347f6c2f9e3420edee9eb2f356a4
(cherry picked from commit 3061bfcde53210d2032df3814243498b27a997b7)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b262ea13
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b262ea13
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b262ea13

Branch: refs/heads/branch-3.1
Commit: b262ea13818d74f180f10077e9f47bef30d02a06
Parents: 1dd9670
Author: Wangda Tan 
Authored: Tue May 29 09:23:11 2018 -0700
Committer: Wangda Tan 
Committed: Tue May 29 09:28:27 2018 -0700

--
 .../org/apache/hadoop/yarn/service/provider/ProviderUtils.java | 3 +--
 .../apache/hadoop/yarn/service/provider/TestProviderUtils.java | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b262ea13/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
index 1ad5fd8..ac90992 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
@@ -298,8 +298,7 @@ public class ProviderUtils implements YarnServiceConstants {
 destFile = new Path(staticFile.getDestFile());
   }
 
-  String symlink = APP_RESOURCES_DIR + "/" + destFile.getName();
-  addLocalResource(launcher, symlink, localResource, destFile);
+  addLocalResource(launcher, destFile.getName(), localResource, destFile);
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b262ea13/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
index 6e8bc43..5d794d2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
@@ -154,11 +154,11 @@ public class TestProviderUtils {
 
 ProviderUtils.handleStaticFilesForLocalization(launcher, sfs,
 compLaunchCtx);
-Mockito.verify(launcher).addLocalResource(Mockito.eq("resources/destFile1"),
+Mockito.verify(launcher).addLocalResource(Mockito.eq("destFile1"),
 any(LocalResource.class));
 Mockito.verify(launcher).addLocalResource(
-Mockito.eq("resources/destFile_2"), any(LocalResource.class));
+Mockito.eq("destFile_2"), any(LocalResource.class));
 Mockito.verify(launcher).addLocalResource(
-Mockito.eq("resources/sourceFile4"), any(LocalResource.class));
+Mockito.eq("sourceFile4"), any(LocalResource.class));
   }
 }
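The change above drops the "resources/" prefix so static and archive resources are symlinked directly into the container working directory. A small sketch of the naming difference; the helper names are hypothetical, not the ProviderUtils API:

```java
import java.nio.file.Paths;

/**
 * Hypothetical helper mirroring the YARN-8339 change: the symlink name
 * used for localization goes from "resources/<basename>" to just
 * "<basename>", so the file lands in the container working directory.
 */
public class LocalizationNameDemo {

    /** Old behavior: localize under a "resources/" subdirectory. */
    public static String oldSymlink(String destFile) {
        return "resources/" + Paths.get(destFile).getFileName();
    }

    /** New behavior: basename only, relative to the working directory. */
    public static String newSymlink(String destFile) {
        return Paths.get(destFile).getFileName().toString();
    }

    public static void main(String[] args) {
        System.out.println(oldSymlink("/path/destFile1")); // resources/destFile1
        System.out.println(newSymlink("/path/destFile1")); // destFile1
    }
}
```

The test diff above reflects exactly this rename: expectations like "resources/destFile1" become "destFile1".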





hadoop git commit: HDDS-125. Cleanup HDDS CheckStyle issues. Contributed by Anu Engineer.

2018-05-29 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/trunk 17aa40f66 -> 9502b47bd


HDDS-125. Cleanup HDDS CheckStyle issues. Contributed by Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9502b47b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9502b47b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9502b47b

Branch: refs/heads/trunk
Commit: 9502b47bd2a3cf32edae635293169883c2914475
Parents: 17aa40f
Author: Anu Engineer 
Authored: Tue May 29 09:54:06 2018 -0700
Committer: Anu Engineer 
Committed: Tue May 29 09:54:06 2018 -0700

--
 .../hadoop/hdds/scm/block/BlockManagerImpl.java |  1 -
 .../hdds/scm/block/DeletedBlockLogImpl.java |  2 +-
 .../hdds/scm/container/ContainerMapping.java|  6 +-
 .../scm/container/ContainerStateManager.java| 24 +++
 .../hadoop/hdds/scm/container/Mapping.java  |  9 ++-
 .../hdds/scm/node/SCMNodeStorageStatMXBean.java |  4 +-
 .../hdds/scm/node/SCMNodeStorageStatMap.java| 19 +++---
 .../hdds/scm/node/StorageReportResult.java  |  8 +--
 .../hdds/scm/node/states/Node2ContainerMap.java |  2 +-
 .../hdds/scm/pipelines/PipelineSelector.java|  5 +-
 .../scm/server/StorageContainerManager.java |  3 +-
 .../TestStorageContainerManagerHttpServer.java  |  1 -
 .../hadoop/hdds/scm/block/package-info.java | 23 +++
 .../scm/container/TestContainerMapping.java | 12 ++--
 .../hdds/scm/container/closer/package-info.java | 22 +++
 .../hadoop/hdds/scm/container/package-info.java | 22 +++
 .../hdds/scm/container/states/package-info.java | 22 +++
 .../hadoop/hdds/scm/node/TestNodeManager.java   | 66 ++--
 .../scm/node/TestSCMNodeStorageStatMap.java | 32 +-
 .../hadoop/hdds/scm/node/package-info.java  | 22 +++
 .../ozone/container/common/TestEndPoint.java|  2 -
 .../ozone/container/common/package-info.java| 22 +++
 .../ozone/container/placement/package-info.java | 22 +++
 .../replication/TestContainerSupervisor.java|  7 ++-
 24 files changed, 263 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9502b47b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
--
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
index 5a98e85..d17d6c0 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
@@ -41,7 +41,6 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
-import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9502b47b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
--
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
index cabcb46..cedc506 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
@@ -190,7 +190,7 @@ public class DeletedBlockLogImpl implements DeletedBlockLog {
 try {
   for(Long txID : txIDs) {
 try {
-  byte [] deleteBlockBytes =
+  byte[] deleteBlockBytes =
   deletedStore.get(Longs.toByteArray(txID));
   if (deleteBlockBytes == null) {
 LOG.warn("Delete txID {} not found", txID);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9502b47b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
--
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
index e569874..2d88621 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
@@ -152,7 +152,8 @@ public class ContainerMapping implements Mapping {
 Conta

[44/50] [abbrv] hadoop git commit: YARN-8369. Javadoc build failed due to 'bad use of >'. (Takanobu Asanuma via wangda)

2018-05-29 Thread botong
YARN-8369. Javadoc build failed due to 'bad use of >'. (Takanobu Asanuma via wangda)

Change-Id: I79a42154e8f86ab1c3cc939b3745024b8eebe5f4


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/17aa40f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/17aa40f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/17aa40f6

Branch: refs/heads/YARN-7402
Commit: 17aa40f669f197d43387d67dc00040d14cd00948
Parents: 3061bfc
Author: Wangda Tan 
Authored: Tue May 29 09:27:36 2018 -0700
Committer: Wangda Tan 
Committed: Tue May 29 09:27:36 2018 -0700

--
 .../apache/hadoop/yarn/util/resource/ResourceCalculator.java | 4 ++--
 .../monitor/capacity/CapacitySchedulerPreemptionUtils.java   | 8 
 2 files changed, 6 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/17aa40f6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
index 51078cd..27394f7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
@@ -260,10 +260,10 @@ public abstract class ResourceCalculator {
 
   /**
* Check if resource has any major resource types (which are all NodeManagers
-   * included) has a >0 value.
+   * included) has a {@literal >} 0 value.
*
* @param resource resource
-   * @return returns true if any resource is >0
+   * @return returns true if any resource is {@literal >} 0
*/
   public abstract boolean isAnyMajorResourceAboveZero(Resource resource);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/17aa40f6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
index 5396d61..690eb02 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
@@ -136,12 +136,12 @@ public class CapacitySchedulerPreemptionUtils {
* @param conservativeDRF
*  should we do conservativeDRF preemption or not.
*  When true:
-   *stop preempt container when any major resource type <= 0 for to-
-   *preempt.
+   *stop preempt container when any major resource type
+   *{@literal <=} 0 for to-preempt.
*This is default preemption behavior of intra-queue preemption
*  When false:
-   *stop preempt container when: all major resource type <= 0 for
-   *to-preempt.
+   *stop preempt container when: all major resource type
+   *{@literal <=} 0 for to-preempt.
*This is default preemption behavior of inter-queue preemption
* @return should we preempt rmContainer. If we should, deduct from
* resourceToObtainByPartition





[07/50] [abbrv] hadoop git commit: YARN-8348. Incorrect and missing AfterClass in HBase-tests to fix NPE failures. Contributed by Giovanni Matteo Fumarola.

2018-05-29 Thread botong
YARN-8348. Incorrect and missing AfterClass in HBase-tests to fix NPE failures. Contributed by Giovanni Matteo Fumarola.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d7261561
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d7261561
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d7261561

Branch: refs/heads/YARN-7402
Commit: d72615611cfa6bd82756270d4b10136ec1e56741
Parents: e99e5bf
Author: Inigo Goiri 
Authored: Wed May 23 14:43:59 2018 -0700
Committer: Inigo Goiri 
Committed: Wed May 23 14:43:59 2018 -0700

--
 .../storage/TestHBaseTimelineStorageApps.java| 4 +++-
 .../storage/TestHBaseTimelineStorageDomain.java  | 8 
 .../storage/TestHBaseTimelineStorageEntities.java| 4 +++-
 .../storage/TestHBaseTimelineStorageSchema.java  | 8 
 .../storage/flow/TestHBaseStorageFlowActivity.java   | 4 +++-
 .../storage/flow/TestHBaseStorageFlowRun.java| 4 +++-
 .../storage/flow/TestHBaseStorageFlowRunCompaction.java  | 4 +++-
 7 files changed, 31 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d7261561/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
index bc33427..0dee442 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
@@ -1936,6 +1936,8 @@ public class TestHBaseTimelineStorageApps {
 
   @AfterClass
   public static void tearDownAfterClass() throws Exception {
-util.shutdownMiniCluster();
+if (util != null) {
+  util.shutdownMiniCluster();
+}
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d7261561/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageDomain.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageDomain.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageDomain.java
index 2932e0c..1f59088 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageDomain.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageDomain.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelp
 import org.apache.hadoop.yarn.server.timelineservice.storage.domain.DomainColumn;
 import org.apache.hadoop.yarn.server.timelineservice.storage.domain.DomainRowKey;
 import org.apache.hadoop.yarn.server.timelineservice.storage.domain.DomainTableRW;
+import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
@@ -123,4 +124,11 @@ public class TestHBaseTimelineStorageDomain {
 assertEquals("user1,user2 group1,group2", readers);
 assertEquals("writer1,writer2", writers);
   }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+if (util != null) {
+  util.shutdownMiniCluster();
+}
+  }
 }
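The fix above guards the class-level teardown so a failed setup (which leaves the shared util field null) no longer throws an NPE that masks the original failure. A sketch of the pattern with stand-in names, not the HBase test-utility API:

```java
/**
 * Sketch of the null-guarded @AfterClass teardown pattern from
 * YARN-8348. "cluster" stands in for a shared test fixture such as
 * HBaseTestingUtility; it stays null if class setup never completed.
 */
public class GuardedTeardownDemo {

    /** Shared fixture; may be null if the @BeforeClass equivalent failed. */
    static Runnable cluster;

    /** Returns true if there was a fixture to shut down. */
    static boolean tearDown() {
        // The guard prevents the NullPointerException the patch fixes:
        // without it, a setup failure triggers a second, confusing NPE here.
        if (cluster != null) {
            cluster.run(); // stand-in for util.shutdownMiniCluster()
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(tearDown()); // false: setup never ran, no NPE
        cluster = () -> { };            // pretend setup succeeded
        System.out.println(tearDown()); // true: shut down cleanly
    }
}
```

The same guard is applied in each of the seven test classes listed in the diffstat above.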

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d7261561/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
---

[36/50] [abbrv] hadoop git commit: YARN-4781. Support intra-queue preemption for fairness ordering policy. Contributed by Eric Payne.

2018-05-29 Thread botong
YARN-4781. Support intra-queue preemption for fairness ordering policy. Contributed by Eric Payne.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7c343669
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7c343669
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7c343669

Branch: refs/heads/YARN-7402
Commit: 7c343669baf660df3b70d58987d6e68aec54d6fa
Parents: 61df174
Author: Sunil G 
Authored: Mon May 28 16:32:53 2018 +0530
Committer: Sunil G 
Committed: Mon May 28 16:32:53 2018 +0530

--
 .../FifoIntraQueuePreemptionPlugin.java |  37 ++-
 .../capacity/IntraQueueCandidatesSelector.java  |  40 +++
 .../monitor/capacity/TempAppPerPartition.java   |   9 +
 .../AbstractComparatorOrderingPolicy.java   |   2 -
 ...alCapacityPreemptionPolicyMockFramework.java |  12 +-
 ...yPreemptionPolicyIntraQueueFairOrdering.java | 276 +++
 6 files changed, 366 insertions(+), 10 deletions(-)
--
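The core of this change is selecting the preemption comparator from the queue's ordering policy: a fairness-based comparator when the leaf queue uses FairOrderingPolicy, a priority comparator otherwise, each wrapped in Collections.reverseOrder. A minimal sketch of that selection pattern; the policy classes and integer keys below are illustrative stand-ins, not the scheduler's types:

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.TreeSet;

/**
 * Sketch of the comparator-selection pattern from YARN-4781: the
 * ordering used to rank preemption candidates depends on the queue's
 * ordering policy. Types and keys here are hypothetical stand-ins.
 */
public class ComparatorSelectionDemo {

    interface OrderingPolicy { }
    static class PriorityPolicy implements OrderingPolicy { }
    static class FairPolicy implements OrderingPolicy { }

    /** Pick the reversed comparator based on the queue's policy. */
    static Comparator<Integer> pickReverseComparator(OrderingPolicy policy) {
        Comparator<Integer> base = (policy instanceof FairPolicy)
            ? Comparator.<Integer>comparingInt(i -> i % 10) // stand-in for usage-based "fair" order
            : Comparator.<Integer>naturalOrder();           // stand-in for priority order
        return Collections.reverseOrder(base);
    }

    public static void main(String[] args) {
        TreeSet<Integer> apps =
            new TreeSet<>(pickReverseComparator(new PriorityPolicy()));
        Collections.addAll(apps, 3, 1, 2);
        System.out.println(apps.first()); // 3: largest first under the reversed order
    }
}
```

In the actual patch the two branches are TAFairOrderingComparator and TAPriorityComparator, and the chosen comparator seeds the TreeSet of candidate applications.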


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7c343669/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPlugin.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPlugin.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPlugin.java
index 40f333f..12c178c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPlugin.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPlugin.java
@@ -34,6 +34,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector.TAFairOrderingComparator;
 import org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector.TAPriorityComparator;
 import org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.IntraQueuePreemptionOrderPolicy;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
@@ -41,6 +42,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.FairOrderingPolicy;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.OrderingPolicy;
 import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 
@@ -263,8 +266,17 @@ public class FifoIntraQueuePreemptionPlugin
   Resource queueReassignableResource,
   PriorityQueue orderedByPriority) {
 
-Comparator reverseComp = Collections
-.reverseOrder(new TAPriorityComparator());
+Comparator reverseComp;
+OrderingPolicy queueOrderingPolicy =
+tq.leafQueue.getOrderingPolicy();
+if (queueOrderingPolicy instanceof FairOrderingPolicy
+&& (context.getIntraQueuePreemptionOrderPolicy()
+== IntraQueuePreemptionOrderPolicy.USERLIMIT_FIRST)) {
+  reverseComp = Collections.reverseOrder(
+  new TAFairOrderingComparator(this.rc, clusterResource));
+} else {
+  reverseComp = Collections.reverseOrder(new TAPriorityComparator());
+}
 TreeSet orderedApps = new TreeSet<>(reverseComp);
 
 String partition = tq.partition;
@@ -355,7 +367,16 @@ public class FifoIntraQueuePreemptionPlugin
   TempQueuePerPartition tq, Collection apps,
   Resource clusterResource,
   Map perUserAMUsed) {
-TAPriorityComparator taComparator = new TAPriorityComparator();
+Comparator taComparator;
+OrderingPolicy orderingPolicy =
+tq.leafQueue.getOrderingPolicy();
+if (orderingPolicy instanceof FairOrderingPolicy
+&& (context.getIntraQueuePreemptionO

[14/50] [abbrv] hadoop git commit: HDDS-45. Removal of old OzoneRestClient. Contributed by Lokesh Jain.

2018-05-29 Thread botong
http://git-wip-us.apache.org/repos/asf/hadoop/blob/774daa8d/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneRestWithMiniCluster.java
--
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneRestWithMiniCluster.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneRestWithMiniCluster.java
index 5b67657..a9b8175 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneRestWithMiniCluster.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneRestWithMiniCluster.java
@@ -23,23 +23,31 @@ import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
 import static org.apache.hadoop.ozone.OzoneConsts.CHUNK_SIZE;
 import static org.junit.Assert.*;
 
+import org.apache.commons.io.IOUtils;
 import org.apache.commons.lang.RandomStringUtils;
-import org.apache.hadoop.ozone.OzoneConsts;
-import org.apache.hadoop.ozone.web.client.OzoneRestClient;
+import org.apache.hadoop.hdds.client.OzoneQuota;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.ozone.client.VolumeArgs;
+import org.apache.hadoop.ozone.client.io.OzoneInputStream;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.rpc.RpcClient;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.ExpectedException;
 
-import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.ozone.web.client.OzoneBucket;
-import org.apache.hadoop.ozone.web.client.OzoneVolume;
-import org.apache.hadoop.ozone.web.request.OzoneQuota;
 import org.junit.rules.Timeout;
 
+import java.io.IOException;
+import java.io.InputStream;
+
 /**
  * End-to-end testing of Ozone REST operations.
  */
@@ -52,7 +60,9 @@ public class TestOzoneRestWithMiniCluster {
 
   private static MiniOzoneCluster cluster;
   private static OzoneConfiguration conf;
-  private static OzoneRestClient ozoneClient;
+  private static ClientProtocol client;
+  private static ReplicationFactor replicationFactor = ReplicationFactor.ONE;
+  private static ReplicationType replicationType = ReplicationType.STAND_ALONE;
 
   @Rule
   public ExpectedException exception = ExpectedException.none();
@@ -62,180 +72,125 @@ public class TestOzoneRestWithMiniCluster {
 conf = new OzoneConfiguration();
 cluster = MiniOzoneCluster.newBuilder(conf).build();
 cluster.waitForClusterToBeReady();
-int port = cluster.getHddsDatanodes().get(0)
-.getDatanodeDetails().getOzoneRestPort();
-ozoneClient = new OzoneRestClient(
-String.format("http://localhost:%d", port));
-ozoneClient.setUserAuth(OzoneConsts.OZONE_SIMPLE_HDFS_USER);
+client = new RpcClient(conf);
   }
 
   @AfterClass
-  public static void shutdown() throws InterruptedException {
+  public static void shutdown() throws InterruptedException, IOException {
 if (cluster != null) {
   cluster.shutdown();
 }
-IOUtils.cleanupWithLogger(null, ozoneClient);
+client.close();
   }
 
   @Test
   public void testCreateAndGetVolume() throws Exception {
-String volumeName = nextId("volume");
-OzoneVolume volume = ozoneClient.createVolume(volumeName, "bilbo", "100TB");
-assertNotNull(volume);
-assertEquals(volumeName, volume.getVolumeName());
-assertEquals(ozoneClient.getUserAuth(), volume.getCreatedby());
-assertEquals("bilbo", volume.getOwnerName());
-assertNotNull(volume.getQuota());
-assertEquals(OzoneQuota.parseQuota("100TB").sizeInBytes(),
-volume.getQuota().sizeInBytes());
-volume = ozoneClient.getVolume(volumeName);
-assertNotNull(volume);
-assertEquals(volumeName, volume.getVolumeName());
-assertEquals(ozoneClient.getUserAuth(), volume.getCreatedby());
-assertEquals("bilbo", volume.getOwnerName());
-assertNotNull(volume.getQuota());
-assertEquals(OzoneQuota.parseQuota("100TB").sizeInBytes(),
-volume.getQuota().sizeInBytes());
+createAndGetVolume();
   }
 
   @Test
   public void testCreateAndGetBucket() throws Exception {
-String volumeName = nextId("volume");
-String bucketName = nextId("bucket");
-OzoneVolume volume = ozoneClient.createVolume(volumeName, "bilbo", "100TB");
-assertNotNull(volume);
-assertEquals(volumeName, volume.getVolumeName());
-assertEquals(ozoneClient.getUserAuth(), volume.getCreatedby());
-assertEquals("bilbo", volume.getOwnerName());
-assertNotNull(vo

[08/50] [abbrv] hadoop git commit: YARN-8327. Fix TestAggregatedLogFormat#testReadAcontainerLogs1 on Windows. Contributed by Giovanni Matteo Fumarola.

2018-05-29 Thread botong
YARN-8327. Fix TestAggregatedLogFormat#testReadAcontainerLogs1 on Windows. Contributed by Giovanni Matteo Fumarola.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f09dc730
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f09dc730
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f09dc730

Branch: refs/heads/YARN-7402
Commit: f09dc73001fd5f3319765fa997f4b0ca9e8f2aff
Parents: d726156
Author: Inigo Goiri 
Authored: Wed May 23 15:59:30 2018 -0700
Committer: Inigo Goiri 
Committed: Wed May 23 15:59:30 2018 -0700

--
 .../logaggregation/TestAggregatedLogFormat.java  | 19 ---
 1 file changed, 12 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f09dc730/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
index efbaa4c..f85445e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
@@ -254,13 +254,18 @@ public class TestAggregatedLogFormat {
 // Since we could not open the fileInputStream for stderr, this file is not
 // aggregated.
 String s = writer.toString();
-int expectedLength =
-"LogType:stdout".length()
-+ (logUploadedTime ? ("\nLog Upload Time:" + Times.format(System
-  .currentTimeMillis())).length() : 0)
-+ ("\nLogLength:" + numChars).length()
-+ "\nLog Contents:\n".length() + numChars + "\n".length()
-+ "\nEnd of LogType:stdout\n".length();
+
+int expectedLength = "LogType:stdout".length()
++ (logUploadedTime
+? (System.lineSeparator() + "Log Upload Time:"
++ Times.format(System.currentTimeMillis())).length()
+: 0)
++ (System.lineSeparator() + "LogLength:" + numChars).length()
++ (System.lineSeparator() + "Log Contents:" + System.lineSeparator())
+.length()
++ numChars + ("\n").length() + ("End of LogType:stdout"
++ System.lineSeparator() + System.lineSeparator()).length();
+
 Assert.assertTrue("LogType not matched", s.contains("LogType:stdout"));
 Assert.assertTrue("log file:stderr should not be aggregated.", 
!s.contains("LogType:stderr"));
 Assert.assertTrue("log file:logs should not be aggregated.", 
!s.contains("LogType:logs"));
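Aside: the essence of this fix is computing the expected length with the same separator the writer actually emits. A minimal standalone sketch of that idea (not part of the patch; the class name and sample values are invented for illustration):

```java
// Demonstrates why a length computed with a hard-coded "\n" diverges from the
// writer's output on Windows, while System.lineSeparator() stays consistent.
public class ExpectedLengthDemo {
    public static void main(String[] args) {
        String sep = System.lineSeparator();   // "\n" on Unix, "\r\n" on Windows
        int numChars = 5;
        // What a separator-aware writer would produce for a 5-char stdout log.
        String actual = "LogType:stdout" + sep
                + "LogLength:" + numChars + sep
                + "Log Contents:" + sep
                + "abcde" + "\n"
                + "End of LogType:stdout" + sep + sep;
        // Expected length built piecewise with the same separators, mirroring
        // the structure of the patched computation above.
        int expected = "LogType:stdout".length()
                + (sep + "LogLength:" + numChars).length()
                + (sep + "Log Contents:" + sep).length()
                + numChars + "\n".length()
                + ("End of LogType:stdout" + sep + sep).length();
        System.out.println(expected == actual.length());  // true on every platform
    }
}
```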


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[37/50] [abbrv] hadoop git commit: HDFS-13627. TestErasureCodingExerciseAPIs fails on Windows. Contributed by Anbang Hu.

2018-05-29 Thread botong
HDFS-13627. TestErasureCodingExerciseAPIs fails on Windows. Contributed by 
Anbang Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/91d7c74e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/91d7c74e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/91d7c74e

Branch: refs/heads/YARN-7402
Commit: 91d7c74e6aa4850922f68bab490b585443e4fccb
Parents: 7c34366
Author: Inigo Goiri 
Authored: Mon May 28 10:26:47 2018 -0700
Committer: Inigo Goiri 
Committed: Mon May 28 10:26:47 2018 -0700

--
 .../org/apache/hadoop/hdfs/TestErasureCodingExerciseAPIs.java   | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/91d7c74e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingExerciseAPIs.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingExerciseAPIs.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingExerciseAPIs.java
index 4335527..c63ba34 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingExerciseAPIs.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingExerciseAPIs.java
@@ -40,6 +40,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.DataOutputStream;
+import java.io.File;
 import java.io.IOException;
 import java.nio.file.Paths;
 import java.security.NoSuchAlgorithmException;
@@ -91,8 +92,10 @@ public class TestErasureCodingExerciseAPIs {
 // Set up java key store
 String testRootDir = Paths.get(new FileSystemTestHelper().getTestRootDir())
 .toString();
+Path targetFile = new Path(new File(testRootDir).getAbsolutePath(),
+"test.jks");
 String keyProviderURI = JavaKeyStoreProvider.SCHEME_NAME + "://file"
-+ new Path(testRootDir, "test.jks").toUri();
++ targetFile.toUri();
 conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_KEY_PROVIDER_PATH,
 keyProviderURI);
 conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY,
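Aside: the core of the Windows fix is absolutizing the keystore path before composing the key-provider URI. A rough plain-JDK sketch of the idea (no Hadoop classes; the "jceks" scheme string is hard-coded here as an assumption standing in for JavaKeyStoreProvider.SCHEME_NAME):

```java
// A relative or drive-letter path breaks a scheme://file URI; resolving to an
// absolute, forward-slashed path first keeps the URI valid on both platforms.
public class KeyProviderUriDemo {
    public static void main(String[] args) {
        String scheme = "jceks://file";
        String absolutePath = new java.io.File("target/test.jks").getAbsolutePath()
                .replace(java.io.File.separatorChar, '/');
        // On Unix getAbsolutePath() already starts with "/"; on Windows it
        // starts with a drive letter, so a leading slash is prepended.
        String uri = scheme + (absolutePath.startsWith("/") ? "" : "/") + absolutePath;
        System.out.println(uri.startsWith("jceks://file/"));
    }
}
```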





[40/50] [abbrv] hadoop git commit: HADOOP-15455. Incorrect debug message in KMSACL#hasAccess. Contributed by Yuen-Kuei Hsueh.

2018-05-29 Thread botong
HADOOP-15455. Incorrect debug message in KMSACL#hasAccess. Contributed by 
Yuen-Kuei Hsueh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/438ef495
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/438ef495
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/438ef495

Branch: refs/heads/YARN-7402
Commit: 438ef4951a38171f193eaf2631da31d0f4bc3c62
Parents: 8fdc993
Author: Wei-Chiu Chuang 
Authored: Mon May 28 17:32:32 2018 -0700
Committer: Wei-Chiu Chuang 
Committed: Mon May 28 17:32:32 2018 -0700

--
 .../java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/438ef495/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
index b02f34e..17faec2 100644
--- a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
+++ b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
@@ -247,9 +247,9 @@ public class KMSACLs implements Runnable, KeyACLs {
 if (blacklist == null) {
   LOG.debug("No blacklist for {}", type.toString());
 } else if (access) {
-  LOG.debug("user is in {}" , blacklist.getAclString());
-} else {
   LOG.debug("user is not in {}" , blacklist.getAclString());
+} else {
+  LOG.debug("user is in {}" , blacklist.getAclString());
 }
   }
 }
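Aside: the bug was simply that the two debug messages sat on swapped branches. A standalone recreation of the corrected logic (hypothetical class and method names, not the real KMSACLs code):

```java
// access == true means the check passed, i.e. the user was NOT on the
// blacklist, so "user is not in" must be the message on the access branch.
public class BlacklistDebugDemo {
    static String debugMessage(boolean hasBlacklist, boolean access) {
        if (!hasBlacklist) {
            return "No blacklist";
        } else if (access) {
            return "user is not in blacklist";
        } else {
            return "user is in blacklist";
        }
    }

    public static void main(String[] args) {
        System.out.println(debugMessage(false, true));
        System.out.println(debugMessage(true, true));
        System.out.println(debugMessage(true, false));
    }
}
```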





[09/50] [abbrv] hadoop git commit: YARN-4599. Set OOM control for memory cgroups. (Miklos Szegedi via Haibo Chen)

2018-05-29 Thread botong
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d9964799/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupElasticMemoryController.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupElasticMemoryController.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupElasticMemoryController.java
new file mode 100644
index 000..118d172
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupElasticMemoryController.java
@@ -0,0 +1,319 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+import org.junit.Test;
+
+import java.io.File;
+import java.nio.charset.Charset;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Matchers.any;
+import static org.mockito.Mockito.doNothing;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+/**
+ * Test for elastic non-strict memory controller based on cgroups.
+ */
+public class TestCGroupElasticMemoryController {
+  private YarnConfiguration conf = new YarnConfiguration();
+  private File script = new File("target/" +
+  TestCGroupElasticMemoryController.class.getName());
+
+  /**
+   * Test that at least one memory type is requested.
+   * @throws YarnException on exception
+   */
+  @Test(expected = YarnException.class)
+  public void testConstructorOff()
+  throws YarnException {
+CGroupElasticMemoryController controller =
+new CGroupElasticMemoryController(
+conf,
+null,
+null,
+false,
+false,
+1
+);
+  }
+
+  /**
+   * Test that the OOM logic is pluggable.
+   * @throws YarnException on exception
+   */
+  @Test
+  public void testConstructorHandler()
+  throws YarnException {
+conf.setClass(YarnConfiguration.NM_ELASTIC_MEMORY_CONTROL_OOM_HANDLER,
+DummyRunnableWithContext.class, Runnable.class);
+CGroupsHandler handler = mock(CGroupsHandler.class);
+when(handler.getPathForCGroup(any(), any())).thenReturn("");
+CGroupElasticMemoryController controller =
+new CGroupElasticMemoryController(
+conf,
+null,
+handler,
+true,
+false,
+1
+);
+  }
+
+  /**
+   * Test that the handler is notified about multiple OOM events.
+   * @throws Exception on exception
+   */
+  @Test
+  public void testMultipleOOMEvents() throws Exception {
+conf.set(YarnConfiguration.NM_ELASTIC_MEMORY_CONTROL_OOM_LISTENER_PATH,
+script.getAbsolutePath());
+try {
+  FileUtils.writeStringToFile(script,
+  "#!/bin/bash\nprintf oomevent;printf oomevent;\n",
+  Charset.defaultCharset(), false);
+  assertTrue("Could not set executable",
+  script.setExecutable(true));
+
+  CGroupsHandler cgroups = mock(CGroupsHandler.class);
+  when(cgroups.getPathForCGroup(any(), any())).thenReturn("");
+  when(cgroups.getCGroupParam(any(), any(), any()))
+  .thenReturn("under_oom 0");
+
+  Runnable handler = mock(Runnable.class);
+  doNothing().when(handler).run();
+
+  CGroupElast

[17/50] [abbrv] hadoop git commit: HDFS-13611. Unsafe use of Text as a ConcurrentHashMap key in PBHelperClient.

2018-05-29 Thread botong
HDFS-13611. Unsafe use of Text as a ConcurrentHashMap key in PBHelperClient.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c9b63deb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c9b63deb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c9b63deb

Branch: refs/heads/YARN-7402
Commit: c9b63deb533274ca8ef4939f6cd13f728a067f7b
Parents: 1388de1
Author: Andrew Wang 
Authored: Thu May 24 09:56:23 2018 -0700
Committer: Andrew Wang 
Committed: Thu May 24 09:56:23 2018 -0700

--
 .../java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c9b63deb/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
index 579ac43..490ccb4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
@@ -247,7 +247,7 @@ public class PBHelperClient {
 ByteString value = fixedByteStringCache.get(key);
 if (value == null) {
   value = ByteString.copyFromUtf8(key.toString());
-  fixedByteStringCache.put(key, value);
+  fixedByteStringCache.put(new Text(key.copyBytes()), value);
 }
 return value;
   }
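Aside: the hazard the patch closes is caching a value under a mutable key. If the caller later mutates the key object, its hash code changes and the entry becomes unreachable. A self-contained sketch of the failure mode and the defensive-copy fix (the MutableKey class below is a stand-in for Text, invented for illustration):

```java
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

// A deliberately mutable map key, analogous to Hadoop's reusable Text.
class MutableKey {
    byte[] bytes;
    MutableKey(byte[] b) { bytes = b; }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && Arrays.equals(bytes, ((MutableKey) o).bytes);
    }
    MutableKey copy() { return new MutableKey(bytes.clone()); }
}

public class MutableKeyDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<MutableKey, String> cache = new ConcurrentHashMap<>();
        MutableKey key = new MutableKey("alice".getBytes());

        cache.put(key, "v1");                 // BAD: stores the caller's object
        key.bytes = "bob".getBytes();         // caller reuses/mutates the key
        System.out.println(cache.get(new MutableKey("alice".getBytes()))); // entry lost

        cache.clear();
        key = new MutableKey("alice".getBytes());
        cache.put(key.copy(), "v1");          // GOOD: defensive copy, like new Text(key.copyBytes())
        key.bytes = "bob".getBytes();         // mutation no longer affects the stored key
        System.out.println(cache.get(new MutableKey("alice".getBytes())));
    }
}
```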





[46/50] [abbrv] hadoop git commit: YARN-6648. [GPG] Add SubClusterCleaner in Global Policy Generator. (botong)

2018-05-29 Thread botong
YARN-6648. [GPG] Add SubClusterCleaner in Global Policy Generator. (botong)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/46a4a945
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/46a4a945
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/46a4a945

Branch: refs/heads/YARN-7402
Commit: 46a4a945732afdefec9828d1c43b77d32609bb8a
Parents: bca8e9b
Author: Botong Huang 
Authored: Thu Feb 1 14:43:48 2018 -0800
Committer: Botong Huang 
Committed: Tue May 29 10:48:40 2018 -0700

--
 .../dev-support/findbugs-exclude.xml|   5 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  18 +++
 .../src/main/resources/yarn-default.xml |  24 
 .../store/impl/MemoryFederationStateStore.java  |  13 ++
 .../utils/FederationStateStoreFacade.java   |  41 ++-
 .../GlobalPolicyGenerator.java  |  92 ++-
 .../subclustercleaner/SubClusterCleaner.java| 109 +
 .../subclustercleaner/package-info.java |  19 +++
 .../TestSubClusterCleaner.java  | 118 +++
 9 files changed, 409 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/46a4a945/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index 5841361..bf2e376 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -380,6 +380,11 @@
 
 
   
+  
+
+
+
+  
  
   
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/46a4a945/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index f7f82f8..7c78e0d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -3326,6 +3326,24 @@ public class YarnConfiguration extends Configuration {
   public static final boolean DEFAULT_ROUTER_WEBAPP_PARTIAL_RESULTS_ENABLED =
   false;
 
+  private static final String FEDERATION_GPG_PREFIX =
+  FEDERATION_PREFIX + "gpg.";
+
+  // The number of threads to use for the GPG scheduled executor service
+  public static final String GPG_SCHEDULED_EXECUTOR_THREADS =
+  FEDERATION_GPG_PREFIX + "scheduled.executor.threads";
+  public static final int DEFAULT_GPG_SCHEDULED_EXECUTOR_THREADS = 10;
+
+  // The interval at which the subcluster cleaner runs, -1 means disabled
+  public static final String GPG_SUBCLUSTER_CLEANER_INTERVAL_MS =
+  FEDERATION_GPG_PREFIX + "subcluster.cleaner.interval-ms";
+  public static final long DEFAULT_GPG_SUBCLUSTER_CLEANER_INTERVAL_MS = -1;
+
+  // The expiration time for a subcluster heartbeat, default is 30 minutes
+  public static final String GPG_SUBCLUSTER_EXPIRATION_MS =
+  FEDERATION_GPG_PREFIX + "subcluster.heartbeat.expiration-ms";
+  public static final long DEFAULT_GPG_SUBCLUSTER_EXPIRATION_MS = 180;
+
   
   // Other Configs
   
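Aside: the two knobs above feed the new SubClusterCleaner. A hypothetical, much-simplified sketch of the cleaning pass they configure (class and method names invented for illustration; the real cleaner goes through the federation state store):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A scheduled task runs every cleaner-interval ms and deregisters subclusters
// whose last heartbeat is older than the expiration threshold.
public class SubClusterCleanerSketch {
    static final long EXPIRATION_MS = 30 * 60 * 1000;  // 30 minutes, per the default

    static int clean(Map<String, Long> lastHeartbeat, long now) {
        int removed = 0;
        for (Map.Entry<String, Long> e : lastHeartbeat.entrySet()) {
            if (now - e.getValue() > EXPIRATION_MS) {
                lastHeartbeat.remove(e.getKey());   // mark subcluster lost
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        Map<String, Long> hb = new ConcurrentHashMap<>();
        long now = System.currentTimeMillis();
        hb.put("sc1", now);                       // fresh heartbeat
        hb.put("sc2", now - 2 * EXPIRATION_MS);   // stale heartbeat
        System.out.println(clean(hb, now));       // one subcluster expired
        System.out.println(hb.containsKey("sc1")); // fresh one survives
    }
}
```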

http://git-wip-us.apache.org/repos/asf/hadoop/blob/46a4a945/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index b0ffc48..8a450d3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3524,6 +3524,30 @@
 
   
 
+  <property>
+    <description>
+      The number of threads to use for the GPG scheduled executor service.
+    </description>
+    <name>yarn.federation.gpg.scheduled.executor.threads</name>
+    <value>10</value>
+  </property>
+
+  <property>
+    <description>
+      The interval at which the subcluster cleaner runs, -1 means disabled.
+    </description>
+    <name>yarn.federation.gpg.subcluster.cleaner.interval-ms</name>
+    <value>-1</value>
+  </property>
+
+  <property>
+    <description>
+      The expiration time for a subcluster heartbeat, default is 30 minutes.
+    </description>
+yarn.federation.gpg.subcluster.heartbeat.

[16/50] [abbrv] hadoop git commit: YARN-6919. Add default volume mount list. Contributed by Eric Badger

2018-05-29 Thread botong
YARN-6919. Add default volume mount list. Contributed by Eric Badger


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1388de18
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1388de18
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1388de18

Branch: refs/heads/YARN-7402
Commit: 1388de18ad51434569589a8f5b0b05c38fe02ab3
Parents: 774daa8
Author: Shane Kumpf 
Authored: Thu May 24 09:30:39 2018 -0600
Committer: Shane Kumpf 
Committed: Thu May 24 09:30:39 2018 -0600

--
 .../hadoop/yarn/conf/YarnConfiguration.java |  10 ++
 .../src/main/resources/yarn-default.xml |  14 ++
 .../runtime/DockerLinuxContainerRuntime.java|  38 +
 .../runtime/TestDockerContainerRuntime.java | 138 +++
 4 files changed, 200 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1388de18/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 004a59f..f7f82f8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2002,6 +2002,16 @@ public class YarnConfiguration extends Configuration {
*/
   public static final int DEFAULT_NM_DOCKER_STOP_GRACE_PERIOD = 10;
 
+  /** The default list of read-only mounts to be bind-mounted into all
+   *  Docker containers that use DockerContainerRuntime. */
+  public static final String NM_DOCKER_DEFAULT_RO_MOUNTS =
+  DOCKER_CONTAINER_RUNTIME_PREFIX + "default-ro-mounts";
+
+  /** The default list of read-write mounts to be bind-mounted into all
+   *  Docker containers that use DockerContainerRuntime. */
+  public static final String NM_DOCKER_DEFAULT_RW_MOUNTS =
+  DOCKER_CONTAINER_RUNTIME_PREFIX + "default-rw-mounts";
+
   /** The mode in which the Java Container Sandbox should run detailed by
*  the JavaSandboxLinuxContainerRuntime. */
   public static final String YARN_CONTAINER_SANDBOX =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1388de18/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index c82474c..b0ffc48 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -1811,6 +1811,20 @@
   
 
   
+  <property>
+    <description>The default list of read-only mounts to be bind-mounted
+      into all Docker containers that use DockerContainerRuntime.</description>
+    <name>yarn.nodemanager.runtime.linux.docker.default-ro-mounts</name>
+    <value></value>
+  </property>
+
+  <property>
+    <description>The default list of read-write mounts to be bind-mounted
+      into all Docker containers that use DockerContainerRuntime.</description>
+    <name>yarn.nodemanager.runtime.linux.docker.default-rw-mounts</name>
+    <value></value>
+  </property>
+
+  
 The mode in which the Java Container Sandbox should run 
detailed by
   the JavaSandboxLinuxContainerRuntime.
 yarn.nodemanager.runtime.linux.sandbox-mode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1388de18/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
index e131e9d..5e2233b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linu

[11/50] [abbrv] hadoop git commit: HDFS-13598. Reduce unnecessary byte-to-string transform operation in INodesInPath#toString. Contributed by Gabor Bota.

2018-05-29 Thread botong
HDFS-13598. Reduce unnecessary byte-to-string transform operation in 
INodesInPath#toString. Contributed by Gabor Bota.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7a87add4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7a87add4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7a87add4

Branch: refs/heads/YARN-7402
Commit: 7a87add4ea4c317aa9377d1fc8e43fb5e7418a46
Parents: d996479
Author: Yiqun Lin 
Authored: Thu May 24 10:57:35 2018 +0800
Committer: Yiqun Lin 
Committed: Thu May 24 10:57:35 2018 +0800

--
 .../java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7a87add4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
index 8235bf0..50ead61 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
@@ -484,7 +484,7 @@ public class INodesInPath {
 }
 
 final StringBuilder b = new StringBuilder(getClass().getSimpleName())
-.append(": path = ").append(DFSUtil.byteArray2PathString(path))
+.append(": path = ").append(getPath())
 .append("\n  inodes = ");
 if (inodes == null) {
   b.append("null");





[02/50] [abbrv] hadoop git commit: HADOOP-15486. Make NetworkTopology#netLock fair. Contributed by Nanda kumar.

2018-05-29 Thread botong
HADOOP-15486. Make NetworkTopology#netLock fair. Contributed by Nanda kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51ce02bb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51ce02bb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51ce02bb

Branch: refs/heads/YARN-7402
Commit: 51ce02bb54d6047a8191624a86d427b0c9445cb1
Parents: aa23d49
Author: Arpit Agarwal 
Authored: Wed May 23 10:30:12 2018 -0700
Committer: Arpit Agarwal 
Committed: Wed May 23 10:30:12 2018 -0700

--
 .../src/main/java/org/apache/hadoop/net/NetworkTopology.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/51ce02bb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index 256f07b..1f077a7 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -105,7 +105,7 @@ public class NetworkTopology {
   private boolean clusterEverBeenMultiRack = false;
 
   /** the lock used to manage access */
-  protected ReadWriteLock netlock = new ReentrantReadWriteLock();
+  protected ReadWriteLock netlock = new ReentrantReadWriteLock(true);
 
   // keeping the constructor because other components like MR still uses this.
   public NetworkTopology() {
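Aside: a minimal illustration of what the one-line change selects. Passing true picks the fair ordering policy, so a writer waiting to update the topology cannot be starved indefinitely by a steady stream of barging readers (the trade-off is somewhat lower throughput than the default non-fair mode):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairLockDemo {
    public static void main(String[] args) {
        // Default (non-fair): readers may barge ahead of a waiting writer.
        ReentrantReadWriteLock unfair = new ReentrantReadWriteLock();
        // Fair: threads acquire roughly in arrival order, bounding writer wait.
        ReentrantReadWriteLock fair = new ReentrantReadWriteLock(true);
        System.out.println(unfair.isFair());
        System.out.println(fair.isFair());
    }
}
```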





[48/50] [abbrv] hadoop git commit: YARN-7707. [GPG] Policy generator framework. Contributed by Young Chen

2018-05-29 Thread botong
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5da8ca6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo2.json
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo2.json b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo2.json
new file mode 100644
index 000..2ff879e
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo2.json
@@ -0,0 +1,196 @@
+ {
+  "type": "capacityScheduler",
+  "capacity": 100.0,
+  "usedCapacity": 0.0,
+  "maxCapacity": 100.0,
+  "queueName": "root",
+  "queues": {
+"queue": [
+  {
+"type": "capacitySchedulerLeafQueueInfo",
+"capacity": 100.0,
+"usedCapacity": 0.0,
+"maxCapacity": 100.0,
+"absoluteCapacity": 100.0,
+"absoluteMaxCapacity": 100.0,
+"absoluteUsedCapacity": 0.0,
+"numApplications": 484,
+"queueName": "default",
+"state": "RUNNING",
+"resourcesUsed": {
+  "memory": 0,
+  "vCores": 0
+},
+"hideReservationQueues": false,
+"nodeLabels": [
+  "*"
+],
+"numActiveApplications": 484,
+"numPendingApplications": 0,
+"numContainers": 0,
+"maxApplications": 1,
+"maxApplicationsPerUser": 1,
+"userLimit": 100,
+"users": {
+  "user": [
+{
+  "username": "Default",
+  "resourcesUsed": {
+"memory": 0,
+"vCores": 0
+  },
+  "numPendingApplications": 0,
+  "numActiveApplications": 468,
+  "AMResourceUsed": {
+"memory": 30191616,
+"vCores": 468
+  },
+  "userResourceLimit": {
+"memory": 31490048,
+"vCores": 7612
+  }
+}
+  ]
+},
+"userLimitFactor": 1.0,
+"AMResourceLimit": {
+  "memory": 31490048,
+  "vCores": 7612
+},
+"usedAMResource": {
+  "memory": 30388224,
+  "vCores": 532
+},
+"userAMResourceLimit": {
+  "memory": 31490048,
+  "vCores": 7612
+},
+"preemptionDisabled": true
+  },
+  {
+"type": "capacitySchedulerLeafQueueInfo",
+"capacity": 100.0,
+"usedCapacity": 0.0,
+"maxCapacity": 100.0,
+"absoluteCapacity": 100.0,
+"absoluteMaxCapacity": 100.0,
+"absoluteUsedCapacity": 0.0,
+"numApplications": 484,
+"queueName": "default2",
+"state": "RUNNING",
+"resourcesUsed": {
+  "memory": 0,
+  "vCores": 0
+},
+"hideReservationQueues": false,
+"nodeLabels": [
+  "*"
+],
+"numActiveApplications": 484,
+"numPendingApplications": 0,
+"numContainers": 0,
+"maxApplications": 1,
+"maxApplicationsPerUser": 1,
+"userLimit": 100,
+"users": {
+  "user": [
+{
+  "username": "Default",
+  "resourcesUsed": {
+"memory": 0,
+"vCores": 0
+  },
+  "numPendingApplications": 0,
+  "numActiveApplications": 468,
+  "AMResourceUsed": {
+"memory": 30191616,
+"vCores": 468
+  },
+  "userResourceLimit": {
+"memory": 31490048,
+"vCores": 7612
+  }
+}
+  ]
+},
+"userLimitFactor": 1.0,
+"AMResourceLimit": {
+  "memory": 31490048,
+  "vCores": 7612
+},
+"usedAMResource": {
+  "memory": 30388224,
+  "vCores": 532
+},
+"userAMResourceLimit": {
+  "memory": 31490048,
+  "vCores": 7612
+},
+"preemptionDisabled": true
+  }
+]
+  },
+  "health": {
+"lastrun": 1517951638085,
+"operationsInfo": {
+  "entry": {
+"key": "last-alloc

[21/50] [abbrv] hadoop git commit: YARN-8191. Fair scheduler: queue deletion without RM restart. (Gergo Repas via Haibo Chen)

2018-05-29 Thread botong
YARN-8191. Fair scheduler: queue deletion without RM restart. (Gergo Repas via 
Haibo Chen)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/86bc6425
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/86bc6425
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/86bc6425

Branch: refs/heads/YARN-7402
Commit: 86bc6425d425913899f1d951498bd040e453b3d0
Parents: d9852eb
Author: Haibo Chen 
Authored: Thu May 24 17:07:21 2018 -0700
Committer: Haibo Chen 
Committed: Thu May 24 17:12:34 2018 -0700

--
 .../fair/AllocationFileLoaderService.java   |  16 +-
 .../scheduler/fair/FSLeafQueue.java |  31 ++
 .../resourcemanager/scheduler/fair/FSQueue.java |   9 +
 .../scheduler/fair/FairScheduler.java   |  29 +-
 .../scheduler/fair/QueueManager.java| 155 +++--
 .../fair/TestAllocationFileLoaderService.java   | 100 +++---
 .../scheduler/fair/TestQueueManager.java| 337 +++
 7 files changed, 596 insertions(+), 81 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/86bc6425/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
index d8d9051..7a40b6a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
@@ -87,7 +87,7 @@ public class AllocationFileLoaderService extends 
AbstractService {
   private Path allocFile;
   private FileSystem fs;
 
-  private Listener reloadListener;
+  private final Listener reloadListener;
 
   @VisibleForTesting
   long reloadIntervalMs = ALLOC_RELOAD_INTERVAL_MS;
@@ -95,15 +95,16 @@ public class AllocationFileLoaderService extends 
AbstractService {
   private Thread reloadThread;
   private volatile boolean running = true;
 
-  public AllocationFileLoaderService() {
-this(SystemClock.getInstance());
+  public AllocationFileLoaderService(Listener reloadListener) {
+this(reloadListener, SystemClock.getInstance());
   }
 
   private List defaultPermissions;
 
-  public AllocationFileLoaderService(Clock clock) {
+  public AllocationFileLoaderService(Listener reloadListener, Clock clock) {
 super(AllocationFileLoaderService.class.getName());
 this.clock = clock;
+this.reloadListener = reloadListener;
   }
 
   @Override
@@ -114,6 +115,7 @@ public class AllocationFileLoaderService extends 
AbstractService {
   reloadThread = new Thread(() -> {
 while (running) {
   try {
+reloadListener.onCheck();
 long time = clock.getTime();
 long lastModified =
 fs.getFileStatus(allocFile).getModificationTime();
@@ -207,10 +209,6 @@ public class AllocationFileLoaderService extends 
AbstractService {
 return allocPath;
   }
 
-  public synchronized void setReloadListener(Listener reloadListener) {
-this.reloadListener = reloadListener;
-  }
-
   /**
* Updates the allocation list from the allocation config file. This file is
* expected to be in the XML format specified in the design doc.
@@ -351,5 +349,7 @@ public class AllocationFileLoaderService extends 
AbstractService {
 
   public interface Listener {
 void onReload(AllocationConfiguration info) throws IOException;
+
+void onCheck();
   }
 }
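The refactor above replaces the mutable `setReloadListener` setter with constructor injection, so `reloadListener` can become `final`, and adds an `onCheck()` hook that fires on every poll iteration of the reload thread. A minimal sketch of the same pattern follows; the class and member names are simplified stand-ins, not the actual YARN classes.

```java
// Simplified stand-in for the constructor-injected listener pattern used by
// AllocationFileLoaderService; names here are illustrative only.
public class ReloadService {

  public interface Listener {
    void onReload(String config);   // fired only when the file changed
    void onCheck();                 // fired on every poll iteration
  }

  private final Listener listener;  // final: injected once, never reassigned
  private long lastModified = -1;

  public ReloadService(Listener listener) {
    this.listener = listener;
  }

  /**
   * One iteration of the reload loop: always signal the check, and
   * reload only when the modification timestamp moved forward.
   */
  public void pollOnce(long modificationTime, String config) {
    listener.onCheck();
    if (modificationTime > lastModified) {
      lastModified = modificationTime;
      listener.onReload(config);
    }
  }
}
```

Constructor injection removes the startup race the setter allowed: the reload thread can never observe a null listener, which is the shape of fix the diff above applies.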

http://git-wip-us.apache.org/repos/asf/hadoop/blob/86bc6425/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLea

[45/50] [abbrv] hadoop git commit: HDDS-125. Cleanup HDDS CheckStyle issues. Contributed by Anu Engineer.

2018-05-29 Thread botong
HDDS-125. Cleanup HDDS CheckStyle issues.
Contributed by Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9502b47b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9502b47b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9502b47b

Branch: refs/heads/YARN-7402
Commit: 9502b47bd2a3cf32edae635293169883c2914475
Parents: 17aa40f
Author: Anu Engineer 
Authored: Tue May 29 09:54:06 2018 -0700
Committer: Anu Engineer 
Committed: Tue May 29 09:54:06 2018 -0700

--
 .../hadoop/hdds/scm/block/BlockManagerImpl.java |  1 -
 .../hdds/scm/block/DeletedBlockLogImpl.java |  2 +-
 .../hdds/scm/container/ContainerMapping.java|  6 +-
 .../scm/container/ContainerStateManager.java| 24 +++
 .../hadoop/hdds/scm/container/Mapping.java  |  9 ++-
 .../hdds/scm/node/SCMNodeStorageStatMXBean.java |  4 +-
 .../hdds/scm/node/SCMNodeStorageStatMap.java| 19 +++---
 .../hdds/scm/node/StorageReportResult.java  |  8 +--
 .../hdds/scm/node/states/Node2ContainerMap.java |  2 +-
 .../hdds/scm/pipelines/PipelineSelector.java|  5 +-
 .../scm/server/StorageContainerManager.java |  3 +-
 .../TestStorageContainerManagerHttpServer.java  |  1 -
 .../hadoop/hdds/scm/block/package-info.java | 23 +++
 .../scm/container/TestContainerMapping.java | 12 ++--
 .../hdds/scm/container/closer/package-info.java | 22 +++
 .../hadoop/hdds/scm/container/package-info.java | 22 +++
 .../hdds/scm/container/states/package-info.java | 22 +++
 .../hadoop/hdds/scm/node/TestNodeManager.java   | 66 ++--
 .../scm/node/TestSCMNodeStorageStatMap.java | 32 +-
 .../hadoop/hdds/scm/node/package-info.java  | 22 +++
 .../ozone/container/common/TestEndPoint.java|  2 -
 .../ozone/container/common/package-info.java| 22 +++
 .../ozone/container/placement/package-info.java | 22 +++
 .../replication/TestContainerSupervisor.java|  7 ++-
 24 files changed, 263 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9502b47b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
index 5a98e85..d17d6c0 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
@@ -41,7 +41,6 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
-import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9502b47b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
index cabcb46..cedc506 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
@@ -190,7 +190,7 @@ public class DeletedBlockLogImpl implements DeletedBlockLog 
{
 try {
   for(Long txID : txIDs) {
 try {
-  byte [] deleteBlockBytes =
+  byte[] deleteBlockBytes =
   deletedStore.get(Longs.toByteArray(txID));
   if (deleteBlockBytes == null) {
 LOG.warn("Delete txID {} not found", txID);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9502b47b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
index e569874..2d88621 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
@@ -152,7 +152,8 @@ public class ContainerMapping implements Mapping {
 ContainerInfo containerInfo;
 lock.lock();
 try {
-  byte[] containerB

[15/50] [abbrv] hadoop git commit: HDDS-45. Removal of old OzoneRestClient. Contributed by Lokesh Jain.

2018-05-29 Thread botong
HDDS-45. Removal of old OzoneRestClient. Contributed by Lokesh Jain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/774daa8d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/774daa8d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/774daa8d

Branch: refs/heads/YARN-7402
Commit: 774daa8d532f91fe8e342a8da2cfa65a8629
Parents: c05b5d4
Author: Mukul Kumar Singh 
Authored: Thu May 24 15:53:42 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Thu May 24 15:53:42 2018 +0530

--
 .../apache/hadoop/hdds/scm/XceiverClient.java   |  22 +-
 .../hadoop/ozone/web/client/OzoneBucket.java| 646 ---
 .../hadoop/ozone/web/client/OzoneKey.java   |  44 -
 .../ozone/web/client/OzoneRestClient.java   | 804 ---
 .../hadoop/ozone/web/client/OzoneVolume.java| 584 --
 .../hadoop/ozone/web/client/package-info.java   |  34 -
 .../hadoop/ozone/MiniOzoneClusterImpl.java  |   3 +-
 .../apache/hadoop/ozone/RatisTestHelper.java|  14 +-
 .../ozone/web/TestOzoneRestWithMiniCluster.java | 207 ++---
 .../hadoop/ozone/web/client/TestBuckets.java| 193 +++--
 .../ozone/web/client/TestBucketsRatis.java  |  15 +-
 .../hadoop/ozone/web/client/TestKeys.java   | 286 ---
 .../hadoop/ozone/web/client/TestKeysRatis.java  |  29 +-
 .../hadoop/ozone/web/client/TestVolume.java | 285 +++
 .../ozone/web/client/TestVolumeRatis.java   |  29 +-
 15 files changed, 548 insertions(+), 2647 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/774daa8d/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClient.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClient.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClient.java
index 6d33cd4..42e02f9 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClient.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClient.java
@@ -54,6 +54,7 @@ public class XceiverClient extends XceiverClientSpi {
   private Bootstrap b;
   private EventLoopGroup group;
   private final Semaphore semaphore;
+  private boolean closed = false;
 
   /**
* Constructs a client that can communicate with the Container framework on
@@ -74,6 +75,10 @@ public class XceiverClient extends XceiverClientSpi {
 
   @Override
   public void connect() throws Exception {
+if (closed) {
+  throw new IOException("This channel is not connected.");
+}
+
 if (channel != null && channel.isActive()) {
   throw new IOException("This client is already connected to a host.");
 }
@@ -97,6 +102,18 @@ public class XceiverClient extends XceiverClientSpi {
 channel = b.connect(leader.getHostName(), port).sync().channel();
   }
 
+  public void reconnect() throws IOException {
+try {
+  connect();
+  if (channel == null || !channel.isActive()) {
+throw new IOException("This channel is not connected.");
+  }
+} catch (Exception e) {
+  LOG.error("Error while connecting: ", e);
+  throw new IOException(e);
+}
+  }
+
   /**
* Returns if the exceiver client connects to a server.
*
@@ -109,6 +126,7 @@ public class XceiverClient extends XceiverClientSpi {
 
   @Override
   public void close() {
+closed = true;
 if (group != null) {
   group.shutdownGracefully().awaitUninterruptibly();
 }
@@ -124,7 +142,7 @@ public class XceiverClient extends XceiverClientSpi {
   ContainerProtos.ContainerCommandRequestProto request) throws IOException 
{
 try {
   if ((channel == null) || (!channel.isActive())) {
-throw new IOException("This channel is not connected.");
+reconnect();
   }
   XceiverClientHandler handler =
   channel.pipeline().get(XceiverClientHandler.class);
@@ -160,7 +178,7 @@ public class XceiverClient extends XceiverClientSpi {
   sendCommandAsync(ContainerProtos.ContainerCommandRequestProto request)
   throws IOException, ExecutionException, InterruptedException {
 if ((channel == null) || (!channel.isActive())) {
-  throw new IOException("This channel is not connected.");
+  reconnect();
 }
 XceiverClientHandler handler =
 channel.pipeline().get(XceiverClientHandler.class);
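
The change above teaches `sendCommand`/`sendCommandAsync` to attempt a `reconnect()` when the channel is down instead of failing outright, while the new `closed` flag keeps a deliberately closed client from being silently revived. A self-contained sketch of that guard logic (the real class drives a Netty channel; a boolean stands in for channel state here):

```java
import java.io.IOException;

// Illustrative reconnect-on-send guard, modeled on XceiverClient's
// closed/active handling; the boolean "active" stands in for a Netty channel.
public class ReconnectingClient {
  private boolean closed = false;
  private boolean active = false;

  public void connect() throws IOException {
    if (closed) {
      throw new IOException("This channel is not connected.");
    }
    active = true;
  }

  public void close() {
    closed = true;
    active = false;
  }

  /** Sending transparently reconnects a dropped (but not closed) channel. */
  public String send(String request) throws IOException {
    if (!active) {
      connect();  // throws if close() was called, reconnects otherwise
    }
    return "ack:" + request;
  }

  public void dropConnection() {  // simulate a network drop
    active = false;
  }
}
```

The key design point mirrored from the diff: `close()` must flip a separate flag, because "channel inactive" alone can no longer distinguish a transient drop from an intentional shutdown once sends auto-reconnect.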

http://git-wip-us.apache.org/repos/asf/hadoop/blob/774daa8d/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneBucket.java
--
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneBucket.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneBuck

[26/50] [abbrv] hadoop git commit: HDDS-96. Add an option in ozone script to generate a site file with minimally required ozone configs. Contributed by Dinesh Chitlangia.

2018-05-29 Thread botong
HDDS-96. Add an option in ozone script to generate a site file with minimally 
required ozone configs.
Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8733012a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8733012a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8733012a

Branch: refs/heads/YARN-7402
Commit: 8733012ae35f2762d704f94975a762885d116795
Parents: 1e0d4b1
Author: Anu Engineer 
Authored: Fri May 25 13:06:14 2018 -0700
Committer: Anu Engineer 
Committed: Fri May 25 13:06:14 2018 -0700

--
 .../hadoop/hdds/conf/OzoneConfiguration.java|   6 +-
 hadoop-ozone/common/src/main/bin/ozone  |   4 +
 ...TestGenerateOzoneRequiredConfigurations.java | 100 +++
 .../GenerateOzoneRequiredConfigurations.java| 174 +++
 .../hadoop/ozone/genconf/package-info.java  |  24 +++
 5 files changed, 305 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8733012a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
index f07718c..36d953c 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
@@ -137,7 +137,7 @@ public class OzoneConfiguration extends Configuration {
 
 @Override
 public String toString() {
-  return this.getName() + " " + this.getValue() + this.getTag();
+  return this.getName() + " " + this.getValue() + " " + this.getTag();
 }
 
 @Override
@@ -152,11 +152,11 @@ public class OzoneConfiguration extends Configuration {
 }
   }
 
-  public static void activate(){
+  public static void activate() {
 // adds the default resources
 Configuration.addDefaultResource("hdfs-default.xml");
 Configuration.addDefaultResource("hdfs-site.xml");
 Configuration.addDefaultResource("ozone-default.xml");
 Configuration.addDefaultResource("ozone-site.xml");
   }
-}
\ No newline at end of file
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8733012a/hadoop-ozone/common/src/main/bin/ozone
--
diff --git a/hadoop-ozone/common/src/main/bin/ozone 
b/hadoop-ozone/common/src/main/bin/ozone
index 00261c7..6843bdd 100755
--- a/hadoop-ozone/common/src/main/bin/ozone
+++ b/hadoop-ozone/common/src/main/bin/ozone
@@ -47,6 +47,7 @@ function hadoop_usage
   hadoop_add_subcommand "scm" daemon "run the Storage Container Manager 
service"
   hadoop_add_subcommand "scmcli" client "run the CLI of the Storage Container 
Manager "
   hadoop_add_subcommand "version" client "print the version"
+  hadoop_add_subcommand "genconf" client "generate minimally required ozone 
configs and output to ozone-site.xml in specified path"
 
   hadoop_generate_usage "${HADOOP_SHELL_EXECNAME}" false
 }
@@ -118,6 +119,9 @@ function ozonecmd_case
 version)
   HADOOP_CLASSNAME=org.apache.hadoop.util.VersionInfo
 ;;
+genconf)
+  
HADOOP_CLASSNAME=org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations
+;;
 *)
   HADOOP_CLASSNAME="${subcmd}"
   if ! hadoop_validate_classname "${HADOOP_CLASSNAME}"; then
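
The new `genconf` subcommand dispatches to a main class that writes only the minimally required keys into an `ozone-site.xml`. As a rough JDK-only sketch of that idea (the real tool derives the required key set from `ozone-default.xml` and uses Hadoop's `Configuration`; the keys and class name below are placeholders):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Toy version of GenerateOzoneRequiredConfigurations: emit an XML site
// file containing only a minimal key set. Keys here are placeholders.
public class MinimalSiteFileWriter {

  public static String generate(Map<String, String> requiredConfigs) {
    Properties props = new Properties();
    props.putAll(requiredConfigs);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try {
      // Java's XML properties format stands in for Hadoop's
      // Configuration.writeXml() here.
      props.storeToXML(out, "minimally required ozone configs", "UTF-8");
      return out.toString("UTF-8");
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static Map<String, String> placeholderRequiredConfigs() {
    Map<String, String> m = new LinkedHashMap<>();
    m.put("ozone.enabled", "true");          // placeholder key
    m.put("ozone.scm.names", "localhost");   // placeholder value
    return m;
  }
}
```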

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8733012a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
new file mode 100644
index 000..82582a6
--- /dev/null
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, so

[25/50] [abbrv] hadoop git commit: HDFS-13618. Fix TestDataNodeFaultInjector test failures on Windows. Contributed by Xiao Liang.

2018-05-29 Thread botong
HDFS-13618. Fix TestDataNodeFaultInjector test failures on Windows. Contributed 
by Xiao Liang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1e0d4b1c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1e0d4b1c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1e0d4b1c

Branch: refs/heads/YARN-7402
Commit: 1e0d4b1c283fb98a95c60a1723f594befb3c18a9
Parents: 02322de
Author: Inigo Goiri 
Authored: Fri May 25 09:10:32 2018 -0700
Committer: Inigo Goiri 
Committed: Fri May 25 09:14:28 2018 -0700

--
 .../hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1e0d4b1c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java
index 1507844..4afacd9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java
@@ -118,7 +118,7 @@ public class TestDataNodeFaultInjector {
   final MetricsDataNodeFaultInjector mdnFaultInjector) throws Exception {
 
 final Path baseDir = new Path(
-PathUtils.getTestDir(getClass()).getAbsolutePath(),
+PathUtils.getTestDir(getClass()).getPath(),
 GenericTestUtils.getMethodName());
 final DataNodeFaultInjector oldDnInjector = DataNodeFaultInjector.get();
 DataNodeFaultInjector.set(mdnFaultInjector);


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[19/50] [abbrv] hadoop git commit: YARN-8316. Improved diagnostic message for ATS unavailability for YARN Service. Contributed by Billie Rinaldi

2018-05-29 Thread botong
YARN-8316.  Improved diagnostic message for ATS unavailability for YARN Service.
Contributed by Billie Rinaldi


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7ff5a402
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7ff5a402
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7ff5a402

Branch: refs/heads/YARN-7402
Commit: 7ff5a40218241ad2380595175a493794129a7402
Parents: 2d19e7d
Author: Eric Yang 
Authored: Thu May 24 16:26:02 2018 -0400
Committer: Eric Yang 
Committed: Thu May 24 16:26:02 2018 -0400

--
 .../org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java   | 2 +-
 .../org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java   | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7ff5a402/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
index 072e606..1ceb462 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
@@ -400,7 +400,7 @@ public class YarnClientImpl extends YarnClient {
 + e.getMessage());
 return null;
   }
-  throw e;
+  throw new IOException(e);
 } catch (NoClassDefFoundError e) {
   NoClassDefFoundError wrappedError = new NoClassDefFoundError(
   e.getMessage() + ". It appears that the timeline client "

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7ff5a402/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
index b84b49c..70ff47b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
@@ -1159,7 +1159,7 @@ public class TestYarnClient extends 
ParameterizedSchedulerTestBase {
   TimelineClient createTimelineClient() throws IOException, YarnException {
 timelineClient = mock(TimelineClient.class);
 when(timelineClient.getDelegationToken(any(String.class)))
-  .thenThrow(new IOException("Best effort test exception"));
+  .thenThrow(new RuntimeException("Best effort test exception"));
 return timelineClient;
   }
 });
@@ -1175,7 +1175,7 @@ public class TestYarnClient extends 
ParameterizedSchedulerTestBase {
   client.serviceInit(conf);
   client.getTimelineDelegationToken();
   Assert.fail("Get delegation token should have thrown an exception");
-} catch (Exception e) {
+} catch (IOException e) {
   // Success
 }
   }





[33/50] [abbrv] hadoop git commit: MAPREDUCE-7097. MapReduce JHS should honor yarn.webapp.filter-entity-list-by-user. Contributed by Sunil Govindan.

2018-05-29 Thread botong
MAPREDUCE-7097. MapReduce JHS should honor 
yarn.webapp.filter-entity-list-by-user. Contributed by Sunil Govindan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/88cbe57c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/88cbe57c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/88cbe57c

Branch: refs/heads/YARN-7402
Commit: 88cbe57c069a1d2dd3bfb32e3ad742566470a10b
Parents: d14e26b
Author: Rohith Sharma K S 
Authored: Mon May 28 12:45:07 2018 +0530
Committer: Rohith Sharma K S 
Committed: Mon May 28 14:05:49 2018 +0530

--
 .../mapreduce/v2/hs/webapp/HsJobBlock.java  | 18 ++-
 .../mapreduce/v2/hs/webapp/TestHsJobBlock.java  | 20 ++--
 .../apache/hadoop/yarn/webapp/Controller.java   |  4 
 .../org/apache/hadoop/yarn/webapp/View.java | 24 +---
 4 files changed, 55 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/88cbe57c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobBlock.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobBlock.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobBlock.java
index 18040f0..9b845cd 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobBlock.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobBlock.java
@@ -27,6 +27,8 @@ import static org.apache.hadoop.yarn.webapp.view.JQueryUI._TH;
 import java.util.Date;
 import java.util.List;
 
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.JobACL;
 import org.apache.hadoop.mapreduce.TaskID;
 import org.apache.hadoop.mapreduce.v2.api.records.AMInfo;
 import org.apache.hadoop.mapreduce.v2.api.records.JobId;
@@ -39,8 +41,10 @@ import org.apache.hadoop.mapreduce.v2.hs.webapp.dao.JobInfo;
 import org.apache.hadoop.mapreduce.v2.jobhistory.JHAdminConfig;
 import org.apache.hadoop.mapreduce.v2.util.MRApps;
 import org.apache.hadoop.mapreduce.v2.util.MRApps.TaskAttemptStateUI;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil;
 import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.util.Times;
 import org.apache.hadoop.yarn.webapp.ResponseInfo;
 import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet;
@@ -56,9 +60,14 @@ import com.google.inject.Inject;
  */
 public class HsJobBlock extends HtmlBlock {
   final AppContext appContext;
+  private UserGroupInformation ugi;
+  private boolean isFilterAppListByUserEnabled;
 
-  @Inject HsJobBlock(AppContext appctx) {
+  @Inject HsJobBlock(Configuration conf, AppContext appctx, ViewContext ctx) {
+super(ctx);
 appContext = appctx;
+isFilterAppListByUserEnabled = conf
+.getBoolean(YarnConfiguration.FILTER_ENTITY_LIST_BY_USER, false);
   }
 
   /*
@@ -78,6 +87,13 @@ public class HsJobBlock extends HtmlBlock {
   html.p().__("Sorry, ", jid, " not found.").__();
   return;
 }
+ugi = getCallerUGI();
+if (isFilterAppListByUserEnabled && ugi != null
+&& !j.checkAccess(ugi, JobACL.VIEW_JOB)) {
+  html.p().__("Sorry, ", jid, " could not be viewed for '",
+  ugi.getUserName(), "'.").__();
+  return;
+}
 if(j instanceof UnparsedJob) {
   final int taskCount = j.getTotalMaps() + j.getTotalReduces();
   UnparsedJob oversizedJob = (UnparsedJob) j;
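
The block above gates the job page on `yarn.webapp.filter-entity-list-by-user` plus a `JobACL.VIEW_JOB` check against the caller's UGI. The decision reduces to a small predicate, sketched here without the Hadoop types (names are simplified stand-ins):

```java
// Distilled form of HsJobBlock's view gate: render only when filtering is
// off, the caller is unknown, or the ACL check passes. Types simplified.
public class JobViewGate {

  public interface AccessChecker {
    // stand-in for Job.checkAccess(ugi, JobACL.VIEW_JOB)
    boolean checkAccess(String user);
  }

  private final boolean filterByUserEnabled;

  public JobViewGate(boolean filterByUserEnabled) {
    this.filterByUserEnabled = filterByUserEnabled;
  }

  /** user may be null when the caller's identity is unavailable. */
  public boolean mayView(String user, AccessChecker job) {
    if (!filterByUserEnabled || user == null) {
      return true;  // filtering disabled or no caller UGI: show the page
    }
    return job.checkAccess(user);
  }
}
```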

http://git-wip-us.apache.org/repos/asf/hadoop/blob/88cbe57c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsJobBlock.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsJobBlock.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsJobBlock.java
index 7fa238e..48e3d3b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsJobBlock.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apa

[41/50] [abbrv] hadoop git commit: YARN-8338. TimelineService V1.5 doesn't come up after HADOOP-15406. Contributed by Vinod Kumar Vavilapalli

2018-05-29 Thread botong
YARN-8338. TimelineService V1.5 doesn't come up after HADOOP-15406. Contributed 
by Vinod Kumar Vavilapalli


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/31ab960f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/31ab960f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/31ab960f

Branch: refs/heads/YARN-7402
Commit: 31ab960f4f931df273481927b897388895d803ba
Parents: 438ef49
Author: Jason Lowe 
Authored: Tue May 29 11:00:30 2018 -0500
Committer: Jason Lowe 
Committed: Tue May 29 11:00:30 2018 -0500

--
 hadoop-project/pom.xml  | 5 +
 .../hadoop-yarn-server-applicationhistoryservice/pom.xml| 5 +
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/31ab960f/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 73c3f5b..59a9bd2 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1144,6 +1144,11 @@
 1.8.5
   
   
+org.objenesis
+objenesis
+1.0
+  
+  
 org.mock-server
 mockserver-netty
 3.9.2

http://git-wip-us.apache.org/repos/asf/hadoop/blob/31ab960f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
index f310518..0527095 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
@@ -155,6 +155,11 @@
   leveldbjni-all
 
 
+
+  org.objenesis
+  objenesis
+
+
 
 
   org.apache.hadoop





[23/50] [abbrv] hadoop git commit: YARN-8292: Fix the dominant resource preemption cannot happen when some of the resource vector becomes negative. Contributed by Wangda Tan.

2018-05-29 Thread botong
YARN-8292: Fix the dominant resource preemption cannot happen when some of the 
resource vector becomes negative. Contributed by Wangda Tan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8d5509c6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8d5509c6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8d5509c6

Branch: refs/heads/YARN-7402
Commit: 8d5509c68156faaa6641f4e747fc9ff80adccf88
Parents: bddfe79
Author: Eric E Payne 
Authored: Fri May 25 16:06:09 2018 +
Committer: Eric E Payne 
Committed: Fri May 25 16:06:09 2018 +

--
 .../resource/DefaultResourceCalculator.java |  15 ++-
 .../resource/DominantResourceCalculator.java|  39 ---
 .../yarn/util/resource/ResourceCalculator.java  |  13 ++-
 .../hadoop/yarn/util/resource/Resources.java|   5 -
 .../AbstractPreemptableResourceCalculator.java  |  58 ---
 .../CapacitySchedulerPreemptionUtils.java   |  61 +--
 .../capacity/FifoCandidatesSelector.java|   8 +-
 .../FifoIntraQueuePreemptionPlugin.java |   4 +-
 .../capacity/IntraQueueCandidatesSelector.java  |   2 +-
 .../capacity/PreemptableResourceCalculator.java |   6 +-
 .../monitor/capacity/TempQueuePerPartition.java |   8 +-
 ...alCapacityPreemptionPolicyMockFramework.java |  30 ++
 .../TestPreemptionForQueueWithPriorities.java   | 103 ---
 ...pacityPreemptionPolicyInterQueueWithDRF.java |  60 ++-
 14 files changed, 312 insertions(+), 100 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d5509c6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
index 6375c4a..ab6d7f5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
@@ -136,13 +136,18 @@ public class DefaultResourceCalculator extends 
ResourceCalculator {
   }
 
   @Override
-  public boolean isAnyMajorResourceZero(Resource resource) {
-return resource.getMemorySize() == 0f;
-  }
-
-  @Override
   public Resource normalizeDown(Resource r, Resource stepFactor) {
 return Resources.createResource(
 roundDown((r.getMemorySize()), stepFactor.getMemorySize()));
   }
+
+  @Override
+  public boolean isAnyMajorResourceZeroOrNegative(Resource resource) {
+return resource.getMemorySize() <= 0;
+  }
+
+  @Override
+  public boolean isAnyMajorResourceAboveZero(Resource resource) {
+return resource.getMemorySize() > 0;
+  }
 }
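
YARN-8292 replaces the old `isAnyMajorResourceZero` with the pair `isAnyMajorResourceZeroOrNegative` / `isAnyMajorResourceAboveZero`, so preemption keeps working after a component of the resource vector goes negative. Over a plain vector the two predicates look like this (a sketch only; the real calculators walk `ResourceInformation` entries):

```java
// Componentwise predicates over a resource vector, mirroring the
// calculator changes above; long[] stands in for Resource.
public final class ResourceVectorCheck {

  private ResourceVectorCheck() { }

  /** True when any component is exhausted or has gone negative. */
  public static boolean isAnyZeroOrNegative(long[] resource) {
    for (long component : resource) {
      if (component <= 0) {
        return true;
      }
    }
    return false;
  }

  /** True when at least one component still has headroom. */
  public static boolean isAnyAboveZero(long[] resource) {
    for (long component : resource) {
      if (component > 0) {
        return true;
      }
    }
    return false;
  }
}
```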

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d5509c6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
index 6fed23b..2e85ebc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
@@ -577,19 +577,6 @@ public class DominantResourceCalculator extends 
ResourceCalculator {
   }
 
   @Override
-  public boolean isAnyMajorResourceZero(Resource resource) {
-int maxLength = ResourceUtils.getNumberOfKnownResourceTypes();
-for (int i = 0; i < maxLength; i++) {
-  ResourceInformation resourceInformation = resource
-  .getResourceInformation(i);
-  if (resourceInformation.getValue() == 0L) {
-return true;
-  }
-}
-return false;
-  }
-
-  @Override
   public Resource normalizeDown(Resource r, Resource stepFactor) {
 Resource ret = Resource.newInstance(r);
 int maxLength = ResourceUtils.getNumberOfKnownResourceTypes();
@@ -613,4 +600,30 @@ public class DominantResourceCalculator extends 
ResourceCalculator {
 }
 return ret;
   }
+
+  @Override
+  public boolean isAnyMajorResourceZeroOrNegative(Resource resource) 

[50/50] [abbrv] hadoop git commit: YARN-3660. [GPG] Federation Global Policy Generator (service hook only). (Contributed by Botong Huang via curino)

2018-05-29 Thread botong
YARN-3660. [GPG] Federation Global Policy Generator (service hook only). 
(Contributed by Botong Huang via curino)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bca8e9bf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bca8e9bf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bca8e9bf

Branch: refs/heads/YARN-7402
Commit: bca8e9bf9d6c0d99e15d45dfb714ca5677ac4e0a
Parents: 9502b47
Author: Carlo Curino 
Authored: Thu Jan 18 17:21:06 2018 -0800
Committer: Botong Huang 
Committed: Tue May 29 10:48:40 2018 -0700

--
 hadoop-project/pom.xml  |   6 +
 hadoop-yarn-project/hadoop-yarn/bin/yarn|   5 +
 hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd|  55 +---
 .../hadoop-yarn/conf/yarn-env.sh|  12 ++
 .../pom.xml |  98 +
 .../globalpolicygenerator/GPGContext.java   |  31 +
 .../globalpolicygenerator/GPGContextImpl.java   |  41 ++
 .../GlobalPolicyGenerator.java  | 136 +++
 .../globalpolicygenerator/package-info.java |  19 +++
 .../TestGlobalPolicyGenerator.java  |  38 ++
 .../hadoop-yarn/hadoop-yarn-server/pom.xml  |   1 +
 hadoop-yarn-project/pom.xml |   4 +
 12 files changed, 424 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bca8e9bf/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 59a9bd2..2db538e 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -446,6 +446,12 @@
 
   
 org.apache.hadoop
+hadoop-yarn-server-globalpolicygenerator
+${project.version}
+  
+
+  
+org.apache.hadoop
 hadoop-yarn-services-core
 ${hadoop.version}
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bca8e9bf/hadoop-yarn-project/hadoop-yarn/bin/yarn
--
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn 
b/hadoop-yarn-project/hadoop-yarn/bin/yarn
index 69afe6f..8061859 100755
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn
@@ -39,6 +39,7 @@ function hadoop_usage
   hadoop_add_subcommand "container" client "prints container(s) report"
   hadoop_add_subcommand "daemonlog" admin "get/set the log level for each 
daemon"
   hadoop_add_subcommand "envvars" client "display computed Hadoop environment 
variables"
+  hadoop_add_subcommand "globalpolicygenerator" daemon "run the Global Policy 
Generator"
   hadoop_add_subcommand "jar " client "run a jar file"
   hadoop_add_subcommand "logs" client "dump container logs"
   hadoop_add_subcommand "node" admin "prints node report(s)"
@@ -103,6 +104,10 @@ ${HADOOP_COMMON_HOME}/${HADOOP_COMMON_LIB_JARS_DIR}"
   echo "HADOOP_TOOLS_LIB_JARS_DIR='${HADOOP_TOOLS_LIB_JARS_DIR}'"
   exit 0
 ;;
+globalpolicygenerator)
+  HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
+  
HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.globalpolicygenerator.GlobalPolicyGenerator'
+;;
 jar)
   HADOOP_CLASSNAME=org.apache.hadoop.util.RunJar
 ;;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bca8e9bf/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
--
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd 
b/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
index e1ac112..bebfd71 100644
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
@@ -134,6 +134,10 @@ if "%1" == "--loglevel" (
 set 
CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-server\yarn-server-router\target\classes
   )
 
+  if exist 
%HADOOP_YARN_HOME%\yarn-server\yarn-server-globalpolicygenerator\target\classes 
(
+set 
CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-server\yarn-server-globalpolicygenerator\target\classes
+  )
+
   if exist %HADOOP_YARN_HOME%\build\test\classes (
 set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\build\test\classes
   )
@@ -155,7 +159,7 @@ if "%1" == "--loglevel" (
 
   set yarncommands=resourcemanager nodemanager proxyserver rmadmin version jar 
^
  application applicationattempt container node queue logs daemonlog 
historyserver ^
- timelineserver timelinereader router classpath
+ timelineserver timelinereader router globalpolicygenerator classpath
   for %%i in ( %yarncommands% ) do (
 if %yarn-command% == %%i set yarncommand=true
   )
@@ -259,7 +263,13 @@ goto :eof
 :router
   set CLASSPATH=%CLASSPATH%;%YARN_CONF_DIR%\router-config\log4j.properties
   set CLASS=org.apache.hadoop.yarn.server.router.Router
- 

[31/50] [abbrv] hadoop git commit: HDDS-78. Add per volume level storage stats in SCM. Contributed by Shashikant Banerjee.

2018-05-29 Thread botong
HDDS-78. Add per volume level storage stats in SCM.

Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0cf6e87f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0cf6e87f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0cf6e87f

Branch: refs/heads/YARN-7402
Commit: 0cf6e87f9212af10eae39cdcb1fe60e6d8191772
Parents: f24c842
Author: Anu Engineer 
Authored: Sat May 26 11:06:22 2018 -0700
Committer: Anu Engineer 
Committed: Sat May 26 11:11:14 2018 -0700

--
 .../placement/metrics/SCMNodeStat.java  |  21 --
 .../hdds/scm/node/SCMNodeStorageStatMXBean.java |   8 +
 .../hdds/scm/node/SCMNodeStorageStatMap.java| 230 +--
 .../hdds/scm/node/StorageReportResult.java  |  87 +++
 .../scm/node/TestSCMNodeStorageStatMap.java | 141 +---
 5 files changed, 356 insertions(+), 131 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0cf6e87f/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/metrics/SCMNodeStat.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/metrics/SCMNodeStat.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/metrics/SCMNodeStat.java
index 4fe72fc..3c871d3 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/metrics/SCMNodeStat.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/metrics/SCMNodeStat.java
@@ -136,25 +136,4 @@ public class SCMNodeStat implements NodeStat {
   public int hashCode() {
 return Long.hashCode(capacity.get() ^ scmUsed.get() ^ remaining.get());
   }
-
-
-  /**
-   * Truncate to 4 digits since uncontrolled precision is some times
-   * counter intuitive to what users expect.
-   * @param value - double.
-   * @return double.
-   */
-  private double truncateDecimals(double value) {
-final int multiplier = 1;
-return (double) ((long) (value * multiplier)) / multiplier;
-  }
-
-  /**
-   * get the scmUsed ratio
-   */
-  public  double getScmUsedratio() {
-double scmUsedRatio =
-truncateDecimals(getScmUsed().get() / (double) getCapacity().get());
-return scmUsedRatio;
-  }
 }
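Worth noting about the removed helper above: `truncateDecimals()` used `multiplier = 1`, which truncates to zero decimal places rather than the four its javadoc promised. A hedged sketch of what the intended 4-digit truncation would look like (hypothetical class name):

```java
public class TruncateDemo {
    // With multiplier = 10_000 the cast-to-long drops everything past
    // the fourth decimal place, which is what the original comment
    // ("truncate to 4 digits") appears to have intended.
    public static double truncateTo4(double value) {
        final int multiplier = 10_000;
        return (double) ((long) (value * multiplier)) / multiplier;
    }

    public static void main(String[] args) {
        System.out.println(truncateTo4(0.123456789)); // 0.1234
    }
}
```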

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0cf6e87f/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMXBean.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMXBean.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMXBean.java
index f17a970..d81ff0f 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMXBean.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMXBean.java
@@ -19,7 +19,9 @@
 package org.apache.hadoop.hdds.scm.node;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.ozone.container.common.impl.StorageLocationReport;
 
+import java.util.Set;
 import java.util.UUID;
 
 /**
@@ -66,4 +68,10 @@ public interface SCMNodeStorageStatMXBean {
* @return long
*/
   long getTotalFreeSpace();
+
+  /**
+   * Returns the set of disks for a given Datanode.
+   * @return set of storage volumes
+   */
+  Set getStorageVolumes(UUID datanodeId);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0cf6e87f/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMap.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMap.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMap.java
index 25cb357..f8ad2af 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMap.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMap.java
@@ -22,18 +22,18 @@ package org.apache.hadoop.hdds.scm.node;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
-import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.SCMStorageReport;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.metrics2.util.

[43/50] [abbrv] hadoop git commit: YARN-8339. Service AM should localize static/archive resource types to container working directory instead of 'resources'. (Suma Shivaprasad via wangda)

2018-05-29 Thread botong
YARN-8339. Service AM should localize static/archive resource types to 
container working directory instead of 'resources'. (Suma Shivaprasad via 
wangda)

Change-Id: I9f8e8f621650347f6c2f9e3420edee9eb2f356a4


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3061bfcd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3061bfcd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3061bfcd

Branch: refs/heads/YARN-7402
Commit: 3061bfcde53210d2032df3814243498b27a997b7
Parents: 3c75f8e
Author: Wangda Tan 
Authored: Tue May 29 09:23:11 2018 -0700
Committer: Wangda Tan 
Committed: Tue May 29 09:23:11 2018 -0700

--
 .../org/apache/hadoop/yarn/service/provider/ProviderUtils.java | 3 +--
 .../apache/hadoop/yarn/service/provider/TestProviderUtils.java | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3061bfcd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
index 1ad5fd8..ac90992 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java
@@ -298,8 +298,7 @@ public class ProviderUtils implements YarnServiceConstants {
 destFile = new Path(staticFile.getDestFile());
   }
 
-  String symlink = APP_RESOURCES_DIR + "/" + destFile.getName();
-  addLocalResource(launcher, symlink, localResource, destFile);
+  addLocalResource(launcher, destFile.getName(), localResource, destFile);
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3061bfcd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
index 6e8bc43..5d794d2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java
@@ -154,11 +154,11 @@ public class TestProviderUtils {
 
 ProviderUtils.handleStaticFilesForLocalization(launcher, sfs,
 compLaunchCtx);
-
Mockito.verify(launcher).addLocalResource(Mockito.eq("resources/destFile1"),
+Mockito.verify(launcher).addLocalResource(Mockito.eq("destFile1"),
 any(LocalResource.class));
 Mockito.verify(launcher).addLocalResource(
-Mockito.eq("resources/destFile_2"), any(LocalResource.class));
+Mockito.eq("destFile_2"), any(LocalResource.class));
 Mockito.verify(launcher).addLocalResource(
-Mockito.eq("resources/sourceFile4"), any(LocalResource.class));
+Mockito.eq("sourceFile4"), any(LocalResource.class));
   }
 }
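The effect of the change above — localizing a static file under its base name in the container working directory instead of under a `resources/` subdirectory — can be sketched as follows. The helper is hypothetical; the real code calls `Path.getName()` on a Hadoop `Path`.

```java
public class DestFileNameDemo {
    // Before YARN-8339 the symlink was "resources/" + base name;
    // after, it is just the base name of the destination file.
    static String symlinkFor(String destFile) {
        int slash = destFile.lastIndexOf('/');
        return slash < 0 ? destFile : destFile.substring(slash + 1);
    }

    public static void main(String[] args) {
        System.out.println(symlinkFor("/hdfs/apps/conf/destFile1")); // destFile1
    }
}
```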


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[04/50] [abbrv] hadoop git commit: YARN-8336. Fix potential connection leak in SchedConfCLI and YarnWebServiceUtils. Contributed by Giovanni Matteo Fumarola.

2018-05-29 Thread botong
YARN-8336. Fix potential connection leak in SchedConfCLI and 
YarnWebServiceUtils. Contributed by Giovanni Matteo Fumarola.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e30938af
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e30938af
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e30938af

Branch: refs/heads/YARN-7402
Commit: e30938af1270e079587e7bc06b755f9e93e660a5
Parents: c13dea8
Author: Inigo Goiri 
Authored: Wed May 23 11:55:31 2018 -0700
Committer: Inigo Goiri 
Committed: Wed May 23 11:55:31 2018 -0700

--
 .../hadoop/yarn/client/cli/SchedConfCLI.java| 42 
 .../yarn/webapp/util/YarnWebServiceUtils.java   | 17 +---
 2 files changed, 38 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e30938af/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java
index 11bfdd7..a5f3b80 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java
@@ -132,25 +132,35 @@ public class SchedConfCLI extends Configured implements 
Tool {
 }
 
 Client webServiceClient = Client.create();
-WebResource webResource = webServiceClient.resource(WebAppUtils.
-getRMWebAppURLWithScheme(getConf()));
-ClientResponse response = webResource.path("ws").path("v1").path("cluster")
-.path("scheduler-conf").accept(MediaType.APPLICATION_JSON)
-.entity(YarnWebServiceUtils.toJson(updateInfo,
-SchedConfUpdateInfo.class), MediaType.APPLICATION_JSON)
-.put(ClientResponse.class);
-if (response != null) {
-  if (response.getStatus() == Status.OK.getStatusCode()) {
-System.out.println("Configuration changed successfully.");
-return 0;
+WebResource webResource = webServiceClient
+.resource(WebAppUtils.getRMWebAppURLWithScheme(getConf()));
+ClientResponse response = null;
+
+try {
+  response =
+  webResource.path("ws").path("v1").path("cluster")
+  .path("scheduler-conf").accept(MediaType.APPLICATION_JSON)
+  .entity(YarnWebServiceUtils.toJson(updateInfo,
+  SchedConfUpdateInfo.class), MediaType.APPLICATION_JSON)
+  .put(ClientResponse.class);
+  if (response != null) {
+if (response.getStatus() == Status.OK.getStatusCode()) {
+  System.out.println("Configuration changed successfully.");
+  return 0;
+} else {
+  System.err.println("Configuration change unsuccessful: "
+  + response.getEntity(String.class));
+}
   } else {
-System.err.println("Configuration change unsuccessful: "
-+ response.getEntity(String.class));
+System.err.println("Configuration change unsuccessful: null response");
   }
-} else {
-  System.err.println("Configuration change unsuccessful: null response");
+  return -1;
+} finally {
+  if (response != null) {
+response.close();
+  }
+  webServiceClient.destroy();
 }
-return -1;
   }
 
   @VisibleForTesting

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e30938af/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/YarnWebServiceUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/YarnWebServiceUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/YarnWebServiceUtils.java
index 1cf1e97..e7bca2c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/YarnWebServiceUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/YarnWebServiceUtils.java
@@ -58,11 +58,18 @@ public final class YarnWebServiceUtils {
 
 WebResource webResource = webServiceClient.resource(webAppAddress);
 
-ClientResponse response = webResource.path("ws").path("v1")
-.path("cluster").path("nodes")
-.path(nodeId).accept(MediaType.APPLICATION_JSON)
-.get(ClientResponse.class);
-return response.getE
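The close-in-finally pattern this fix applies can be sketched JDK-only. This is a hedged illustration of the shape of the fix, not the real code: `FakeResponse` and `FakeClient` are hypothetical stand-ins for the Jersey `ClientResponse` and `Client` types.

```java
public class CleanupPattern {
    static class FakeResponse { boolean closed; void close() { closed = true; } }
    static class FakeClient  { boolean destroyed; void destroy() { destroyed = true; } }

    // Both the response and the client are released in the finally block,
    // so they are cleaned up even when the request path throws or returns
    // early -- the leak the original early-return code allowed.
    public static int putConfig(FakeClient client, boolean ok) {
        FakeResponse response = null;
        try {
            response = new FakeResponse(); // stands in for webResource...put(...)
            return ok ? 0 : -1;
        } finally {
            if (response != null) {
                response.close();
            }
            client.destroy();
        }
    }

    public static void main(String[] args) {
        FakeClient c = new FakeClient();
        System.out.println(putConfig(c, true) == 0 && c.destroyed); // true
    }
}
```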

[06/50] [abbrv] hadoop git commit: YARN-8344. Missing nm.stop() in TestNodeManagerResync to fix testKillContainersOnResync. Contributed by Giovanni Matteo Fumarola.

2018-05-29 Thread botong
YARN-8344. Missing nm.stop() in TestNodeManagerResync to fix 
testKillContainersOnResync. Contributed by Giovanni Matteo Fumarola.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e99e5bf1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e99e5bf1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e99e5bf1

Branch: refs/heads/YARN-7402
Commit: e99e5bf104e9664bc1b43a2639d87355d47a77e2
Parents: cddbbe5
Author: Inigo Goiri 
Authored: Wed May 23 14:15:26 2018 -0700
Committer: Inigo Goiri 
Committed: Wed May 23 14:15:26 2018 -0700

--
 .../nodemanager/TestNodeManagerResync.java  | 87 +++-
 1 file changed, 48 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e99e5bf1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
index 97e9922..cf33775 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerResync.java
@@ -150,7 +150,6 @@ public class TestNodeManagerResync {
 testContainerPreservationOnResyncImpl(nm, true);
   }
 
-  @SuppressWarnings("unchecked")
   protected void testContainerPreservationOnResyncImpl(TestNodeManager1 nm,
   boolean isWorkPreservingRestartEnabled)
   throws IOException, YarnException, InterruptedException {
@@ -186,32 +185,35 @@ public class TestNodeManagerResync {
 }
   }
 
-  @SuppressWarnings("unchecked")
+  @SuppressWarnings("resource")
   @Test(timeout=1)
   public void testNMshutdownWhenResyncThrowException() throws IOException,
   InterruptedException, YarnException {
 NodeManager nm = new TestNodeManager3();
 YarnConfiguration conf = createNMConfig();
-nm.init(conf);
-nm.start();
-Assert.assertEquals(1, ((TestNodeManager3) nm).getNMRegistrationCount());
-nm.getNMDispatcher().getEventHandler()
-.handle(new NodeManagerEvent(NodeManagerEventType.RESYNC));
-
-synchronized (isNMShutdownCalled) {
-  while (isNMShutdownCalled.get() == false) {
-try {
-  isNMShutdownCalled.wait();
-} catch (InterruptedException e) {
+try {
+  nm.init(conf);
+  nm.start();
+  Assert.assertEquals(1, ((TestNodeManager3) nm).getNMRegistrationCount());
+  nm.getNMDispatcher().getEventHandler()
+  .handle(new NodeManagerEvent(NodeManagerEventType.RESYNC));
+
+  synchronized (isNMShutdownCalled) {
+while (!isNMShutdownCalled.get()) {
+  try {
+isNMShutdownCalled.wait();
+  } catch (InterruptedException e) {
+  }
 }
   }
-}
 
-Assert.assertTrue("NM shutdown not called.",isNMShutdownCalled.get());
-nm.stop();
+  Assert.assertTrue("NM shutdown not called.", isNMShutdownCalled.get());
+} finally {
+  nm.stop();
+}
   }
 
-  @SuppressWarnings("unchecked")
+  @SuppressWarnings("resource")
   @Test(timeout=6)
   public void testContainerResourceIncreaseIsSynchronizedWithRMResync()
   throws IOException, InterruptedException, YarnException {
@@ -219,28 +221,32 @@ public class TestNodeManagerResync {
 YarnConfiguration conf = createNMConfig();
 conf.setBoolean(
 YarnConfiguration.RM_WORK_PRESERVING_RECOVERY_ENABLED, true);
-nm.init(conf);
-nm.start();
-// Start a container and make sure it is in RUNNING state
-((TestNodeManager4)nm).startContainer();
-// Simulate a container resource increase in a separate thread
-((TestNodeManager4)nm).updateContainerResource();
-// Simulate RM restart by sending a RESYNC event
-LOG.info("Sending out RESYNC event");
-nm.getNMDispatcher().getEventHandler().handle(
-new NodeManagerEvent(NodeManagerEventType.RESYNC));
 try {
-  syncBarrier.await();
-} catch (BrokenBarrierException e) {
-  e.printStackTrace();
+  nm.init(conf);
+  nm.start();
+  // Start a container and make sure it is in RUNNING state
+  ((TestNodeManager4) nm).startContainer();
+  // Simulate a container resource increase in a separate thread
+   
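The shutdown handshake the test blocks on can be sketched with a plain wait/notify flag. A hedged, self-contained approximation (the real test flips the flag from the NodeManager's stop path, not a worker thread):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WaitNotifyDemo {
    static final AtomicBoolean shutdownCalled = new AtomicBoolean(false);

    // The caller checks the flag inside the synchronized block before
    // waiting, so a notify that fires early cannot be lost.
    public static boolean fireAndAwait() {
        Thread worker = new Thread(() -> {
            synchronized (shutdownCalled) {
                shutdownCalled.set(true);
                shutdownCalled.notifyAll();
            }
        });
        worker.start();
        synchronized (shutdownCalled) {
            while (!shutdownCalled.get()) {
                try {
                    shutdownCalled.wait();
                } catch (InterruptedException ignored) {
                    // mirrors the test's empty catch: keep waiting
                }
            }
        }
        return shutdownCalled.get();
    }

    public static void main(String[] args) {
        System.out.println(fireAndAwait()); // true once the worker notifies
    }
}
```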

[29/50] [abbrv] hadoop git commit: HDFS-13620. Randomize the test directory path for TestHDFSFileSystemContract. Contributed by Anbang Hu.

2018-05-29 Thread botong
HDFS-13620. Randomize the test directory path for TestHDFSFileSystemContract. 
Contributed by Anbang Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8605a385
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8605a385
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8605a385

Branch: refs/heads/YARN-7402
Commit: 8605a38514b4f7a2a549c7ecf8e1421e61bb4d67
Parents: 2a9652e
Author: Inigo Goiri 
Authored: Fri May 25 19:43:33 2018 -0700
Committer: Inigo Goiri 
Committed: Fri May 25 19:43:33 2018 -0700

--
 .../org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8605a385/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
index 50d1e75..6da46de 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSFileSystemContract.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.hdfs;
 
+import java.io.File;
 import java.io.IOException;
 
 import org.apache.hadoop.conf.Configuration;
@@ -25,6 +26,7 @@ import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.FileSystemContractBaseTest;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -39,7 +41,9 @@ public class TestHDFSFileSystemContract extends 
FileSystemContractBaseTest {
 Configuration conf = new HdfsConfiguration();
 conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,
 FileSystemContractBaseTest.TEST_UMASK);
-cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+File basedir = GenericTestUtils.getRandomizedTestDir();
+cluster = new MiniDFSCluster.Builder(conf, basedir).numDataNodes(2)
+.build();
 fs = cluster.getFileSystem();
 defaultWorkingDirectory = "/user/" + 
UserGroupInformation.getCurrentUser().getShortUserName();
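The idea behind `GenericTestUtils.getRandomizedTestDir()` can be approximated with the JDK alone: each test run gets a unique base directory, so parallel runs (and stale directories left by Windows file locking) cannot collide. Hypothetical helper; Hadoop's actual implementation places the directory under the test build tree.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RandomTestDir {
    // Each call yields a fresh, uniquely named directory.
    public static Path newTestDir(String prefix) {
        try {
            return Files.createTempDirectory(prefix);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path a = newTestDir("minidfs-");
        Path b = newTestDir("minidfs-");
        System.out.println(!a.equals(b)); // distinct dirs each call
    }
}
```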






[27/50] [abbrv] hadoop git commit: HDFS-13619. TestAuditLoggerWithCommands fails on Windows. Contributed by Anbang Hu.

2018-05-29 Thread botong
HDFS-13619. TestAuditLoggerWithCommands fails on Windows. Contributed by Anbang 
Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/13d25289
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/13d25289
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/13d25289

Branch: refs/heads/YARN-7402
Commit: 13d25289076b39daf481fb1ee15939dbfe4a6b23
Parents: 8733012
Author: Inigo Goiri 
Authored: Fri May 25 13:32:34 2018 -0700
Committer: Inigo Goiri 
Committed: Fri May 25 13:32:34 2018 -0700

--
 .../hdfs/server/namenode/TestAuditLoggerWithCommands.java   | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/13d25289/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLoggerWithCommands.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLoggerWithCommands.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLoggerWithCommands.java
index 41ee03f..222a1de 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLoggerWithCommands.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLoggerWithCommands.java
@@ -1264,8 +1264,9 @@ public class TestAuditLoggerWithCommands {
   }
 
   private int verifyAuditLogs(String pattern) {
-int length = auditlog.getOutput().split("\n").length;
-String lastAudit = auditlog.getOutput().split("\n")[length - 1];
+int length = auditlog.getOutput().split(System.lineSeparator()).length;
+String lastAudit = auditlog.getOutput()
+.split(System.lineSeparator())[length - 1];
 assertTrue("Unexpected log!", lastAudit.matches(pattern));
 return length;
   }
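Why splitting on `"\n"` fails on Windows: when the captured output ends with `"\r\n"`, splitting on `"\n"` leaves a trailing `'\r'` on the final element, so the regex match against the last audit entry fails. A minimal sketch of the fixed behavior (hypothetical helper name):

```java
public class LineSplitDemo {
    // Splitting on the platform separator yields clean elements on
    // both Unix ("\n") and Windows ("\r\n"); splitting on "\n" alone
    // leaves '\r' attached on Windows.
    public static String lastLine(String output) {
        String[] parts = output.split(System.lineSeparator());
        return parts[parts.length - 1];
    }

    public static void main(String[] args) {
        String sep = System.lineSeparator();
        String log = "first" + sep + "second" + sep + "last" + sep;
        System.out.println(lastLine(log)); // last
    }
}
```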





[49/50] [abbrv] hadoop git commit: YARN-7707. [GPG] Policy generator framework. Contributed by Young Chen

2018-05-29 Thread botong
YARN-7707. [GPG] Policy generator framework. Contributed by Young Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f5da8ca6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f5da8ca6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f5da8ca6

Branch: refs/heads/YARN-7402
Commit: f5da8ca6f04b7db40fccfd00cc4ff8ca1b2da74b
Parents: 46a4a94
Author: Botong Huang 
Authored: Fri Mar 23 17:07:10 2018 -0700
Committer: Botong Huang 
Committed: Tue May 29 10:48:40 2018 -0700

--
 .../hadoop/yarn/conf/YarnConfiguration.java |  36 +-
 .../src/main/resources/yarn-default.xml |  40 +++
 .../utils/FederationStateStoreFacade.java   |  13 +
 .../pom.xml |  18 +
 .../globalpolicygenerator/GPGContext.java   |   4 +
 .../globalpolicygenerator/GPGContextImpl.java   |  10 +
 .../globalpolicygenerator/GPGPolicyFacade.java  | 220 
 .../server/globalpolicygenerator/GPGUtils.java  |  80 +
 .../GlobalPolicyGenerator.java  |  17 +
 .../policygenerator/GlobalPolicy.java   |  76 +
 .../policygenerator/NoOpGlobalPolicy.java   |  36 ++
 .../policygenerator/PolicyGenerator.java| 261 ++
 .../UniformWeightedLocalityGlobalPolicy.java|  71 
 .../policygenerator/package-info.java   |  24 ++
 .../TestGPGPolicyFacade.java| 202 +++
 .../policygenerator/TestPolicyGenerator.java| 338 +++
 .../src/test/resources/schedulerInfo1.json  | 134 
 .../src/test/resources/schedulerInfo2.json  | 196 +++
 18 files changed, 1775 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5da8ca6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 7c78e0d..b224818 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -3326,7 +3326,7 @@ public class YarnConfiguration extends Configuration {
   public static final boolean DEFAULT_ROUTER_WEBAPP_PARTIAL_RESULTS_ENABLED =
   false;
 
-  private static final String FEDERATION_GPG_PREFIX =
+  public static final String FEDERATION_GPG_PREFIX =
   FEDERATION_PREFIX + "gpg.";
 
   // The number of threads to use for the GPG scheduled executor service
@@ -3344,6 +3344,40 @@ public class YarnConfiguration extends Configuration {
   FEDERATION_GPG_PREFIX + "subcluster.heartbeat.expiration-ms";
   public static final long DEFAULT_GPG_SUBCLUSTER_EXPIRATION_MS = 180;
 
+  public static final String FEDERATION_GPG_POLICY_PREFIX =
+  FEDERATION_GPG_PREFIX + "policy.generator.";
+
+  /** The interval at which the policy generator runs, default is one hour. */
+  public static final String GPG_POLICY_GENERATOR_INTERVAL_MS =
+  FEDERATION_GPG_POLICY_PREFIX + "interval-ms";
+  public static final long DEFAULT_GPG_POLICY_GENERATOR_INTERVAL_MS = -1;
+
+  /**
+   * The configured policy generator class, runs NoOpGlobalPolicy by
+   * default.
+   */
+  public static final String GPG_GLOBAL_POLICY_CLASS =
+  FEDERATION_GPG_POLICY_PREFIX + "class";
+  public static final String DEFAULT_GPG_GLOBAL_POLICY_CLASS =
+  "org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator."
+  + "NoOpGlobalPolicy";
+
+  /**
+   * Whether or not the policy generator is running in read only (won't modify
+   * policies), default is false.
+   */
+  public static final String GPG_POLICY_GENERATOR_READONLY =
+  FEDERATION_GPG_POLICY_PREFIX + "readonly";
+  public static final boolean DEFAULT_GPG_POLICY_GENERATOR_READONLY =
+  false;
+
+  /**
+   * Which sub-clusters the policy generator should blacklist.
+   */
+  public static final String GPG_POLICY_GENERATOR_BLACKLIST =
+  FEDERATION_GPG_POLICY_PREFIX + "blacklist";
+
+
   
   // Other Configs
   

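For context, the new GPG policy keys above compose from nested prefixes. A minimal plain-Java sketch (not Hadoop code itself; the class name `GpgKeys` is made up for illustration, and it assumes `FEDERATION_PREFIX` resolves to `"yarn.federation."` as elsewhere in YarnConfiguration) of how the full property names resolve:

```java
public class GpgKeys {
    static final String FEDERATION_PREFIX = "yarn.federation.";
    static final String FEDERATION_GPG_PREFIX = FEDERATION_PREFIX + "gpg.";
    static final String FEDERATION_GPG_POLICY_PREFIX =
        FEDERATION_GPG_PREFIX + "policy.generator.";

    // Full key as a user would spell it in yarn-site.xml.
    static String key(String suffix) {
        return FEDERATION_GPG_POLICY_PREFIX + suffix;
    }

    public static void main(String[] args) {
        System.out.println(key("interval-ms"));
        System.out.println(key("class"));
        System.out.println(key("readonly"));
        System.out.println(key("blacklist"));
    }
}
```

So, for example, `GPG_POLICY_GENERATOR_INTERVAL_MS` is the property `yarn.federation.gpg.policy.generator.interval-ms`.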
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5da8ca6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-c

[47/50] [abbrv] hadoop git commit: YARN-7402. [GPG] Fix potential connection leak in GPGUtils. Contributed by Giovanni Matteo Fumarola.

2018-05-29 Thread botong
YARN-7402. [GPG] Fix potential connection leak in GPGUtils. Contributed by 
Giovanni Matteo Fumarola.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5bf22dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5bf22dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5bf22dc

Branch: refs/heads/YARN-7402
Commit: c5bf22dc13b5bbe57b45fe81dd2d912af3b87602
Parents: f5da8ca
Author: Botong Huang 
Authored: Wed May 23 12:45:32 2018 -0700
Committer: Botong Huang 
Committed: Tue May 29 10:48:40 2018 -0700

--
 .../server/globalpolicygenerator/GPGUtils.java  | 31 +---
 1 file changed, 20 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5bf22dc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
index 429bec4..31cee1c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
@@ -18,21 +18,22 @@
 
 package org.apache.hadoop.yarn.server.globalpolicygenerator;
 
+import static javax.servlet.http.HttpServletResponse.SC_OK;
+
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Set;
 
-import javax.servlet.http.HttpServletResponse;
 import javax.ws.rs.core.MediaType;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
 
 import com.sun.jersey.api.client.Client;
 import com.sun.jersey.api.client.ClientResponse;
 import com.sun.jersey.api.client.WebResource;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
 
 /**
  * GPGUtils contains utility functions for the GPG.
@@ -53,15 +54,23 @@ public final class GPGUtils {
 T obj = null;
 
 WebResource webResource = client.resource(webAddr);
-ClientResponse response = webResource.path("ws/v1/cluster").path(path)
-.accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
-if (response.getStatus() == HttpServletResponse.SC_OK) {
-  obj = response.getEntity(returnType);
-} else {
-  throw new YarnRuntimeException("Bad response from remote web service: "
-  + response.getStatus());
+ClientResponse response = null;
+try {
+  response = webResource.path("ws/v1/cluster").path(path)
+  .accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
+  if (response.getStatus() == SC_OK) {
+obj = response.getEntity(returnType);
+  } else {
+throw new YarnRuntimeException(
+"Bad response from remote web service: " + response.getStatus());
+  }
+  return obj;
+} finally {
+  if (response != null) {
+response.close();
+  }
+  client.destroy();
 }
-return obj;
   }
 
   /**


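The fix above moves the response handling into a try/finally so the Jersey `ClientResponse` and `Client` are released on both the success path and the exception path. A self-contained sketch of the same close-in-finally pattern (assumption: `Response` here is a generic stand-in for the Jersey types, not the real API):

```java
import java.io.Closeable;

public class CloseInFinally {
    // Stand-in for a Jersey ClientResponse: carries a status and tracks close().
    static class Response implements Closeable {
        final int status;
        boolean closed;
        Response(int status) { this.status = status; }
        @Override
        public void close() { closed = true; }
    }

    static int invoke(Response response) {
        try {
            if (response.status != 200) {
                throw new RuntimeException(
                    "Bad response from remote web service: " + response.status);
            }
            return response.status;
        } finally {
            // Mirrors the patch: cleanup runs even when the status check throws,
            // so a bad response no longer leaks the connection.
            response.close();
        }
    }
}
```

Without the finally block, the pre-patch code returned or threw before ever closing the response, which is exactly the leak the commit title describes.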
-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[38/50] [abbrv] hadoop git commit: HDFS-13591. TestDFSShell#testSetrepLow fails on Windows. Contributed by Anbang Hu.

2018-05-29 Thread botong
HDFS-13591. TestDFSShell#testSetrepLow fails on Windows. Contributed by Anbang 
Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9dbf4f01
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9dbf4f01
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9dbf4f01

Branch: refs/heads/YARN-7402
Commit: 9dbf4f01665d5480a70395a24519cbab5d4db0c5
Parents: 91d7c74
Author: Inigo Goiri 
Authored: Mon May 28 16:34:02 2018 -0700
Committer: Inigo Goiri 
Committed: Mon May 28 16:34:02 2018 -0700

--
 .../test/java/org/apache/hadoop/hdfs/TestDFSShell.java| 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9dbf4f01/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
index e82863a..c352dc9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
@@ -2829,11 +2829,11 @@ public class TestDFSShell {
 System.setErr(origErr);
   }
 
-  assertEquals("Error message is not the expected error message",
-  "setrep: Requested replication factor of 1 is less than "
-  + "the required minimum of 2 for /tmp/TestDFSShell-"
-  + "testSetrepLow/testFileForSetrepLow\n",
-  bao.toString());
+  assertTrue("Error message is not the expected error message"
+  + bao.toString(), bao.toString().startsWith(
+  "setrep: Requested replication factor of 1 is less than "
+  + "the required minimum of 2 for /tmp/TestDFSShell-"
+  + "testSetrepLow/testFileForSetrepLow"));
 } finally {
   shell.close();
   cluster.shutdown();


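The change above swaps an exact `assertEquals` (which hard-coded a trailing `"\n"`) for a prefix check. A plausible reason this matters on Windows: the captured shell output can end with a platform line separator such as `"\r\n"`, so an exact match on the Unix newline fails. A sketch of the prefix check in isolation:

```java
public class PrefixAssert {
    // The expected message from the test, without any trailing line terminator.
    static final String EXPECTED_PREFIX =
        "setrep: Requested replication factor of 1 is less than "
        + "the required minimum of 2 for /tmp/TestDFSShell-"
        + "testSetrepLow/testFileForSetrepLow";

    static boolean matches(String actual) {
        // startsWith tolerates whatever line ending the platform appended.
        return actual.startsWith(EXPECTED_PREFIX);
    }
}
```

Both `EXPECTED_PREFIX + "\n"` and `EXPECTED_PREFIX + "\r\n"` satisfy the check, while the old exact comparison only accepted the former.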



[32/50] [abbrv] hadoop git commit: HADOOP-15477. Make unjar in RunJar overrideable

2018-05-29 Thread botong
HADOOP-15477. Make unjar in RunJar overrideable

Signed-off-by: Akira Ajisaka 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d14e26b3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d14e26b3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d14e26b3

Branch: refs/heads/YARN-7402
Commit: d14e26b31fe46fb47a8e99a212c70016fd15a4d9
Parents: 0cf6e87
Author: Johan Gustavsson 
Authored: Mon May 28 17:29:59 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon May 28 17:29:59 2018 +0900

--
 .../java/org/apache/hadoop/util/RunJar.java | 17 ++---
 .../java/org/apache/hadoop/util/TestRunJar.java | 37 ++--
 .../org/apache/hadoop/streaming/StreamJob.java  |  4 ++-
 3 files changed, 51 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d14e26b3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index 9dd770c..f1b643c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -76,7 +76,11 @@ public class RunJar {
*/
   public static final String HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES =
   "HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES";
-
+  /**
+   * Environment key for disabling unjar in client code.
+   */
+  public static final String HADOOP_CLIENT_SKIP_UNJAR =
+  "HADOOP_CLIENT_SKIP_UNJAR";
   /**
* Buffer size for copy the content of compressed file to new file.
*/
@@ -93,7 +97,7 @@ public class RunJar {
* @throws IOException if an I/O error has occurred or toDir
* cannot be created and does not already exist
*/
-  public static void unJar(File jarFile, File toDir) throws IOException {
+  public void unJar(File jarFile, File toDir) throws IOException {
 unJar(jarFile, toDir, MATCH_ANY);
   }
 
@@ -292,8 +296,9 @@ public class RunJar {
   }
 }, SHUTDOWN_HOOK_PRIORITY);
 
-
-unJar(file, workDir);
+if (!skipUnjar()) {
+  unJar(file, workDir);
+}
 
 ClassLoader loader = createClassLoader(file, workDir);
 
@@ -364,6 +369,10 @@ public class RunJar {
 return Boolean.parseBoolean(System.getenv(HADOOP_USE_CLIENT_CLASSLOADER));
   }
 
+  boolean skipUnjar() {
+return Boolean.parseBoolean(System.getenv(HADOOP_CLIENT_SKIP_UNJAR));
+  }
+
   String getHadoopClasspath() {
 return System.getenv(HADOOP_CLASSPATH);
   }

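The `skipUnjar()` hook above gates the unpack step on the `HADOOP_CLIENT_SKIP_UNJAR` environment variable. A small sketch of that boolean-env-var pattern (plain Java; the raw value is passed in as a parameter here so the behavior is visible without touching the real environment):

```java
public class EnvToggle {
    // RunJar reads System.getenv("HADOOP_CLIENT_SKIP_UNJAR"); the value is a
    // parameter here purely for illustration.
    static boolean skip(String value) {
        // Boolean.parseBoolean(null) is false, so an unset variable keeps the
        // default behavior: the jar is still unpacked.
        return Boolean.parseBoolean(value);
    }
}
```

Note that `Boolean.parseBoolean` is case-insensitive ("TRUE" enables the skip) but treats anything other than "true" — including "yes" or "1" — as false.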
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d14e26b3/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
index 19485d6..ea07b97 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
@@ -17,10 +17,14 @@
  */
 package org.apache.hadoop.util;
 
+import static org.apache.hadoop.util.RunJar.MATCH_ANY;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.mockito.Matchers.any;
 import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
 import java.io.File;
@@ -99,7 +103,7 @@ public class TestRunJar {
 
 // Unjar everything
 RunJar.unJar(new File(TEST_ROOT_DIR, TEST_JAR_NAME),
- unjarDir);
+ unjarDir, MATCH_ANY);
 assertTrue("foobar unpacked",
new File(unjarDir, TestRunJar.FOOBAR_TXT).exists());
 assertTrue("foobaz unpacked",
@@ -177,7 +181,7 @@ public class TestRunJar {
 
 // Unjar everything
 RunJar.unJar(new File(TEST_ROOT_DIR, TEST_JAR_NAME),
-unjarDir);
+unjarDir, MATCH_ANY);
 
 String failureMessage = "Last modify time was lost during unJar";
assertEquals(failureMessage, MOCKED_NOW, new File(unjarDir, TestRunJar.FOOBAR_TXT).lastModified());
@@ -221,5 +225,34 @@ public class TestRunJar {
 // run RunJar
 runJar.run(args);
 // it should not throw an exception
+verify(runJar, times(1)).unJar(any(File.class), any(File.class));
+  }
+
+  @Test
+  public void testClientCl

[34/50] [abbrv] hadoop git commit: HDFS-13628. Update Archival Storage doc for Provided Storage

2018-05-29 Thread botong
HDFS-13628. Update Archival Storage doc for Provided Storage

Signed-off-by: Akira Ajisaka 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/04757e58
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/04757e58
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/04757e58

Branch: refs/heads/YARN-7402
Commit: 04757e5864bd4904fd5a59d143fff480814700e4
Parents: 88cbe57
Author: Takanobu Asanuma 
Authored: Mon May 28 19:04:36 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon May 28 19:06:34 2018 +0900

--
 .../hadoop-hdfs/src/site/markdown/ArchivalStorage.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/04757e58/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
index ab7975a..3c49cb1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
@@ -35,7 +35,7 @@ A new storage type *ARCHIVE*, which has high storage density 
(petabyte of storag
 
 Another new storage type *RAM\_DISK* is added for supporting writing single 
replica files in memory.
 
-### Storage Policies: Hot, Warm, Cold, All\_SSD, One\_SSD and Lazy\_Persist
+### Storage Policies: Hot, Warm, Cold, All\_SSD, One\_SSD, Lazy\_Persist and Provided
 
 A new concept of storage policies is introduced in order to allow files to be 
stored in different storage types according to the storage policy.
 
@@ -47,6 +47,7 @@ We have the following storage policies:
 * **All\_SSD** - for storing all replicas in SSD.
 * **One\_SSD** - for storing one of the replicas in SSD. The remaining 
replicas are stored in DISK.
 * **Lazy\_Persist** - for writing blocks with single replica in memory. The 
replica is first written in RAM\_DISK and then it is lazily persisted in DISK.
+* **Provided** - for storing data outside HDFS. See also [HDFS Provided Storage](./HdfsProvidedStorage.html).
 
 More formally, a storage policy consists of the following fields:
 
@@ -68,6 +69,7 @@ The following is a typical storage policy table.
 | 7 | Hot (default) | DISK: *n* | \<none\> | ARCHIVE |
 | 5 | Warm | DISK: 1, ARCHIVE: *n*-1 | ARCHIVE, DISK | ARCHIVE, DISK |
 | 2 | Cold | ARCHIVE: *n* | \<none\> | \<none\> |
+| 1 | Provided | PROVIDED: 1, DISK: *n*-1 | PROVIDED, DISK | PROVIDED, DISK |
 
 Note 1: The Lazy\_Persist policy is useful only for single replica blocks. For 
blocks with more than one replicas, all the replicas will be written to DISK 
since writing only one of the replicas to RAM\_DISK does not improve the 
overall performance.
 





[22/50] [abbrv] hadoop git commit: HADOOP-15494. TestRawLocalFileSystemContract fails on Windows. Contributed by Anbang Hu.

2018-05-29 Thread botong
HADOOP-15494. TestRawLocalFileSystemContract fails on Windows.
Contributed by Anbang Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bddfe796
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bddfe796
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bddfe796

Branch: refs/heads/YARN-7402
Commit: bddfe796f2f992fc1dcc8a1dd44d64ff2b3c9cf4
Parents: 86bc642
Author: Steve Loughran 
Authored: Fri May 25 11:12:47 2018 +0100
Committer: Steve Loughran 
Committed: Fri May 25 11:12:47 2018 +0100

--
 .../java/org/apache/hadoop/fs/TestRawLocalFileSystemContract.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bddfe796/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestRawLocalFileSystemContract.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestRawLocalFileSystemContract.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestRawLocalFileSystemContract.java
index ebf9ea7..908e330 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestRawLocalFileSystemContract.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestRawLocalFileSystemContract.java
@@ -42,7 +42,7 @@ public class TestRawLocalFileSystemContract extends FileSystemContractBaseTest {
   private static final Logger LOG =
   LoggerFactory.getLogger(TestRawLocalFileSystemContract.class);
   private final static Path TEST_BASE_DIR =
-  new Path(GenericTestUtils.getTempPath(""));
+  new Path(GenericTestUtils.getRandomizedTestDir().getAbsolutePath());
 
   @Before
   public void setUp() throws Exception {





[10/50] [abbrv] hadoop git commit: YARN-4599. Set OOM control for memory cgroups. (Miklos Szegedi via Haibo Chen)

2018-05-29 Thread botong
YARN-4599. Set OOM control for memory cgroups. (Miklos Szegedi via Haibo Chen)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d9964799
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d9964799
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d9964799

Branch: refs/heads/YARN-7402
Commit: d9964799544eefcf424fcc178d987525f5356cdf
Parents: f09dc73
Author: Haibo Chen 
Authored: Wed May 23 11:29:55 2018 -0700
Committer: Haibo Chen 
Committed: Wed May 23 16:35:37 2018 -0700

--
 .gitignore  |   1 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  26 +-
 .../src/main/resources/yarn-default.xml |  67 ++-
 .../src/CMakeLists.txt  |  19 +
 .../CGroupElasticMemoryController.java  | 476 +++
 .../linux/resources/CGroupsHandler.java |   6 +
 .../linux/resources/CGroupsHandlerImpl.java |   6 +-
 .../CGroupsMemoryResourceHandlerImpl.java   |  15 -
 .../linux/resources/DefaultOOMHandler.java  | 254 ++
 .../monitor/ContainersMonitorImpl.java  |  50 ++
 .../executor/ContainerSignalContext.java|  41 ++
 .../native/oom-listener/impl/oom_listener.c | 171 +++
 .../native/oom-listener/impl/oom_listener.h | 102 
 .../oom-listener/impl/oom_listener_main.c   | 104 
 .../oom-listener/test/oom_listener_test_main.cc | 292 
 .../resources/DummyRunnableWithContext.java |  31 ++
 .../TestCGroupElasticMemoryController.java  | 319 +
 .../TestCGroupsMemoryResourceHandlerImpl.java   |   6 +-
 .../linux/resources/TestDefaultOOMHandler.java  | 307 
 .../monitor/TestContainersMonitor.java  |   1 +
 .../TestContainersMonitorResourceChange.java|   3 +-
 .../site/markdown/NodeManagerCGroupsMemory.md   | 133 ++
 22 files changed, 2391 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d9964799/.gitignore
--
diff --git a/.gitignore b/.gitignore
index 934c009..428950b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,6 +17,7 @@
 target
 build
 dependency-reduced-pom.xml
+make-build-debug
 
 # Filesystem contract test options and credentials
 auth-keys.xml

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d9964799/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 8e56cb8..6d08831 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1440,6 +1440,25 @@ public class YarnConfiguration extends Configuration {
 NM_PREFIX + "vmem-pmem-ratio";
   public static final float DEFAULT_NM_VMEM_PMEM_RATIO = 2.1f;
 
+  /** Specifies whether to do memory check on overall usage. */
+  public static final String NM_ELASTIC_MEMORY_CONTROL_ENABLED = NM_PREFIX
+  + "elastic-memory-control.enabled";
+  public static final boolean DEFAULT_NM_ELASTIC_MEMORY_CONTROL_ENABLED = false;
+
+  /** Specifies the OOM handler code. */
+  public static final String NM_ELASTIC_MEMORY_CONTROL_OOM_HANDLER = NM_PREFIX
+  + "elastic-memory-control.oom-handler";
+
+  /** The path to the OOM listener.*/
+  public static final String NM_ELASTIC_MEMORY_CONTROL_OOM_LISTENER_PATH =
+  NM_PREFIX + "elastic-memory-control.oom-listener.path";
+
+  /** Maximum time in seconds to resolve an OOM situation. */
+  public static final String NM_ELASTIC_MEMORY_CONTROL_OOM_TIMEOUT_SEC =
+  NM_PREFIX + "elastic-memory-control.timeout-sec";
+  public static final Integer
+  DEFAULT_NM_ELASTIC_MEMORY_CONTROL_OOM_TIMEOUT_SEC = 5;
+
   /** Number of Virtual CPU Cores which can be allocated for containers.*/
   public static final String NM_VCORES = NM_PREFIX + "resource.cpu-vcores";
   public static final int DEFAULT_NM_VCORES = 8;
@@ -2006,13 +2025,6 @@ public class YarnConfiguration extends Configuration {
   /** The path to the Linux container executor.*/
   public static final String NM_LINUX_CONTAINER_EXECUTOR_PATH =
 NM_PREFIX + "linux-container-executor.path";
-  
-  /** 
-   * The UNIX group that the linux-container-executor should run as.
-   * This is intended to be set as part of container-executor.cfg. 
-   */
-  public static final String NM_LINUX_CONTAINER_

[01/50] [abbrv] hadoop git commit: HADOOP-15457. Add Security-Related HTTP Response Header in WEBUIs. (kanwaljeets via rkanter) [Forced Update!]

2018-05-29 Thread botong
Repository: hadoop
Updated Branches:
  refs/heads/YARN-7402 db183f2ea -> c5bf22dc1 (forced update)


HADOOP-15457. Add Security-Related HTTP Response Header in WEBUIs. (kanwaljeets 
via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa23d49f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa23d49f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa23d49f

Branch: refs/heads/YARN-7402
Commit: aa23d49fc8b9c2537529dbdc13512000e2ab295a
Parents: bc6d9d4
Author: Robert Kanter 
Authored: Wed May 23 10:23:17 2018 -0700
Committer: Robert Kanter 
Committed: Wed May 23 10:24:09 2018 -0700

--
 .../org/apache/hadoop/http/HttpServer2.java | 79 +++-
 .../org/apache/hadoop/http/TestHttpServer.java  | 61 +++
 2 files changed, 121 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa23d49f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
index 47ca841..c273c78 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
@@ -34,6 +34,8 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
 
 import javax.servlet.Filter;
 import javax.servlet.FilterChain;
@@ -172,10 +174,16 @@ public final class HttpServer2 implements FilterContainer {
   private final SignerSecretProvider secretProvider;
   private XFrameOption xFrameOption;
   private boolean xFrameOptionIsEnabled;
-  private static final String X_FRAME_VALUE = "xFrameOption";
-  private static final String X_FRAME_ENABLED = "X_FRAME_ENABLED";
-
-
+  public static final String HTTP_HEADER_PREFIX = "hadoop.http.header.";
+  private static final String HTTP_HEADER_REGEX =
+  "hadoop\\.http\\.header\\.([a-zA-Z\\-_]+)";
+  static final String X_XSS_PROTECTION  =
+  "X-XSS-Protection:1; mode=block";
+  static final String X_CONTENT_TYPE_OPTIONS =
+  "X-Content-Type-Options:nosniff";
+  private static final String X_FRAME_OPTIONS = "X-FRAME-OPTIONS";
+  private static final Pattern PATTERN_HTTP_HEADER_REGEX =
+  Pattern.compile(HTTP_HEADER_REGEX);
   /**
* Class to construct instances of HTTP server with specific options.
*/
@@ -574,10 +582,7 @@ public final class HttpServer2 implements FilterContainer {
 addDefaultApps(contexts, appDir, conf);
 webServer.setHandler(handlers);
 
-Map<String, String> xFrameParams = new HashMap<>();
-xFrameParams.put(X_FRAME_ENABLED,
-String.valueOf(this.xFrameOptionIsEnabled));
-xFrameParams.put(X_FRAME_VALUE,  this.xFrameOption.toString());
+Map<String, String> xFrameParams = setHeaders(conf);
 addGlobalFilter("safety", QuotingInputFilter.class.getName(), 
xFrameParams);
 final FilterInitializer[] initializers = getFilterInitializers(conf);
 if (initializers != null) {
@@ -1475,9 +1480,11 @@ public final class HttpServer2 implements FilterContainer {
   public static class QuotingInputFilter implements Filter {
 
 private FilterConfig config;
+private Map<String, String> headerMap;
 
 public static class RequestQuoter extends HttpServletRequestWrapper {
   private final HttpServletRequest rawRequest;
+
   public RequestQuoter(HttpServletRequest rawRequest) {
 super(rawRequest);
 this.rawRequest = rawRequest;
@@ -1566,6 +1573,7 @@ public final class HttpServer2 implements FilterContainer {
 @Override
 public void init(FilterConfig config) throws ServletException {
   this.config = config;
+  initHttpHeaderMap();
 }
 
 @Override
@@ -1593,11 +1601,7 @@ public final class HttpServer2 implements FilterContainer {
   } else if (mime.startsWith("application/xml")) {
 httpResponse.setContentType("text/xml; charset=utf-8");
   }
-
-  if(Boolean.valueOf(this.config.getInitParameter(X_FRAME_ENABLED))) {
-httpResponse.addHeader("X-FRAME-OPTIONS",
-this.config.getInitParameter(X_FRAME_VALUE));
-  }
+  headerMap.forEach((k, v) -> httpResponse.addHeader(k, v));
   chain.doFilter(quoted, httpResponse);
 }
 
@@ -1613,14 +1617,25 @@ public final class HttpServer2 implements FilterContainer {
   return (mime == null) ? null : mime;
 }
 
+private void initHttpHeaderMap() {
+  Enumeration<String> params = this.config.getInitParameterNames();
+

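The `HTTP_HEADER_REGEX` introduced above maps configuration keys of the form `hadoop.http.header.<Header-Name>` to response headers. A standalone sketch of how that capture group extracts the header name (assumption: `HeaderKey` is an illustration, not the HttpServer2 code itself):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeaderKey {
    // Same pattern as the patch: one capture group for the header name,
    // which may contain letters, hyphens, and underscores.
    private static final Pattern PATTERN =
        Pattern.compile("hadoop\\.http\\.header\\.([a-zA-Z\\-_]+)");

    // Returns the HTTP header name embedded in a config key, or null if the
    // key is not a hadoop.http.header.* key.
    static String headerName(String configKey) {
        Matcher m = PATTERN.matcher(configKey);
        return m.matches() ? m.group(1) : null;
    }
}
```

So a property such as `hadoop.http.header.X-Content-Type-Options` in the configuration yields the response header `X-Content-Type-Options`, while unrelated `hadoop.http.*` keys are ignored.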
[42/50] [abbrv] hadoop git commit: HADOOP-15497. TestTrash should use proper test path to avoid failing on Windows. Contributed by Anbang Hu.

2018-05-29 Thread botong
HADOOP-15497. TestTrash should use proper test path to avoid failing on 
Windows. Contributed by Anbang Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c75f8e4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c75f8e4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c75f8e4

Branch: refs/heads/YARN-7402
Commit: 3c75f8e4933221fa60a87e86a3db5e4727530b6f
Parents: 31ab960
Author: Inigo Goiri 
Authored: Tue May 29 09:11:08 2018 -0700
Committer: Inigo Goiri 
Committed: Tue May 29 09:11:08 2018 -0700

--
 .../src/test/java/org/apache/hadoop/fs/TestTrash.java | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c75f8e4/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
index 12aed29..fa2d21f 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
@@ -49,9 +49,11 @@ import org.apache.hadoop.util.Time;
  */
 public class TestTrash {
 
-  private final static Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+  private final static File BASE_PATH = new File(GenericTestUtils.getTempPath(
   "testTrash"));
 
+  private final static Path TEST_DIR = new Path(BASE_PATH.getAbsolutePath());
+
   @Before
   public void setUp() throws IOException {
 // ensure each test initiates a FileSystem instance,
@@ -682,7 +684,7 @@ public class TestTrash {
   static class TestLFS extends LocalFileSystem {
 Path home;
 TestLFS() {
-  this(new Path(TEST_DIR, "user/test"));
+  this(TEST_DIR);
 }
 TestLFS(final Path home) {
   super(new RawLocalFileSystem() {
@@ -809,8 +811,8 @@ public class TestTrash {
*/
   public static void verifyTrashPermission(FileSystem fs, Configuration conf)
   throws IOException {
-Path caseRoot = new Path(
-GenericTestUtils.getTempPath("testTrashPermission"));
+Path caseRoot = new Path(BASE_PATH.getPath(),
+"testTrashPermission");
 try (FileSystem fileSystem = fs){
   Trash trash = new Trash(fileSystem, conf);
   FileSystemTestWrapper wrapper =





[24/50] [abbrv] hadoop git commit: HADOOP-15473. Configure serialFilter in KeyProvider to avoid UnrecoverableKeyException caused by JDK-8189997. Contributed by Gabor Bota.

2018-05-29 Thread botong
HADOOP-15473. Configure serialFilter in KeyProvider to avoid 
UnrecoverableKeyException caused by JDK-8189997. Contributed by Gabor Bota.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02322de3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02322de3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02322de3

Branch: refs/heads/YARN-7402
Commit: 02322de3f95ba78a22c057037ef61aa3ab1d3824
Parents: 8d5509c
Author: Xiao Chen 
Authored: Fri May 25 09:08:15 2018 -0700
Committer: Xiao Chen 
Committed: Fri May 25 09:10:51 2018 -0700

--
 .../apache/hadoop/crypto/key/KeyProvider.java   | 18 +++
 .../fs/CommonConfigurationKeysPublic.java   |  7 ++
 .../src/main/resources/core-default.xml | 23 
 3 files changed, 48 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02322de3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
index 5d670e5..050540b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
@@ -42,6 +42,8 @@ import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 
 import javax.crypto.KeyGenerator;
 
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_CRYPTO_JCEKS_KEY_SERIALFILTER;
+
 /**
  * A provider of secret key material for Hadoop applications. Provides an
  * abstraction to separate key storage from users of encryption. It
@@ -61,6 +63,14 @@ public abstract class KeyProvider {
   CommonConfigurationKeysPublic.HADOOP_SECURITY_KEY_DEFAULT_BITLENGTH_KEY;
   public static final int DEFAULT_BITLENGTH = CommonConfigurationKeysPublic.
   HADOOP_SECURITY_KEY_DEFAULT_BITLENGTH_DEFAULT;
+  public static final String JCEKS_KEY_SERIALFILTER_DEFAULT =
+  "java.lang.Enum;"
+  + "java.security.KeyRep;"
+  + "java.security.KeyRep$Type;"
+  + "javax.crypto.spec.SecretKeySpec;"
+  + "org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata;"
+  + "!*";
+  public static final String JCEKS_KEY_SERIAL_FILTER = "jceks.key.serialFilter";
 
   private final Configuration conf;
 
@@ -394,6 +404,14 @@ public abstract class KeyProvider {
*/
   public KeyProvider(Configuration conf) {
 this.conf = new Configuration(conf);
+// Added for HADOOP-15473. Configured serialFilter property fixes
+// java.security.UnrecoverableKeyException in JDK 8u171.
+if(System.getProperty(JCEKS_KEY_SERIAL_FILTER) == null) {
+  String serialFilter =
+  conf.get(HADOOP_SECURITY_CRYPTO_JCEKS_KEY_SERIALFILTER,
+  JCEKS_KEY_SERIALFILTER_DEFAULT);
+  System.setProperty(JCEKS_KEY_SERIAL_FILTER, serialFilter);
+}
   }
 
   /**

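The constructor change above sets the `jceks.key.serialFilter` system property only when it is not already set, so an explicit `-Djceks.key.serialFilter=...` on the JVM command line always wins over the Hadoop configuration, which in turn wins over the built-in default filter string. A sketch of that precedence (plain Java; `resolve` is an illustration, not the KeyProvider API):

```java
public class SerialFilterResolution {
    // Precedence sketched from the patch:
    // 1. an existing JVM system property (left untouched),
    // 2. the hadoop.security.crypto.jceks.key.serialfilter config value,
    // 3. the built-in JCEKS_KEY_SERIALFILTER_DEFAULT string.
    static String resolve(String jvmProperty, String configured, String builtInDefault) {
        if (jvmProperty != null) {
            return jvmProperty; // already set by the operator: do not override
        }
        return configured != null ? configured : builtInDefault;
    }
}
```

This set-if-absent guard is what keeps the fix for JDK-8189997 from clobbering a filter an operator has deliberately configured on the JVM.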
http://git-wip-us.apache.org/repos/asf/hadoop/blob/02322de3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index 8837cfb..9e0ba20 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -662,6 +662,13 @@ public class CommonConfigurationKeysPublic {
* 
* core-default.xml
*/
+  public static final String HADOOP_SECURITY_CRYPTO_JCEKS_KEY_SERIALFILTER =
+  "hadoop.security.crypto.jceks.key.serialfilter";
+  /**
+   * @see
+   * 
+   * core-default.xml
+   */
   public static final String HADOOP_SECURITY_CRYPTO_BUFFER_SIZE_KEY = 
 "hadoop.security.crypto.buffer.size";
   /** Defalt value for HADOOP_SECURITY_CRYPTO_BUFFER_SIZE_KEY */
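The guard in the KeyProvider constructor above can be sketched in isolation. This is a minimal standalone class, not the Hadoop KeyProvider itself; the property name and filter entries are taken from the patch, with the Hadoop-specific `JavaKeyStoreProvider$KeyMetadata` entry omitted:

```java
// Minimal standalone sketch of the HADOOP-15473 guard above: only install
// the JVM-wide jceks.key.serialFilter when the user has not already set it,
// so an explicit -Djceks.key.serialFilter=... on the command line wins.
public class SerialFilterGuard {
  static final String PROP = "jceks.key.serialFilter";
  // Whitelist the classes a JCEKS keystore legitimately deserializes and
  // reject everything else ("!*"); entries copied from the patch, minus
  // the Hadoop-specific JavaKeyStoreProvider$KeyMetadata entry.
  static final String DEFAULT_FILTER =
      "java.lang.Enum;"
      + "java.security.KeyRep;"
      + "java.security.KeyRep$Type;"
      + "javax.crypto.spec.SecretKeySpec;"
      + "!*";

  /** Returns the filter in effect; configuredValue models conf.get(...). */
  static String ensureFilter(String configuredValue) {
    if (System.getProperty(PROP) == null) {
      System.setProperty(PROP,
          configuredValue != null ? configuredValue : DEFAULT_FILTER);
    }
    return System.getProperty(PROP);
  }

  public static void main(String[] args) {
    System.out.println(ensureFilter(null));
  }
}
```

Because `System.setProperty` is process-wide, the first-wins check matters: once any provider has installed the filter, later configuration values are ignored for the life of the JVM.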

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02322de3/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index fad2985..9564587 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core

[13/50] [abbrv] hadoop git commit: YARN-8319. More YARN pages need to honor yarn.resourcemanager.display.per-user-apps. Contributed by Sunil G.

2018-05-29 Thread botong
YARN-8319. More YARN pages need to honor 
yarn.resourcemanager.display.per-user-apps. Contributed by Sunil G.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c05b5d42
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c05b5d42
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c05b5d42

Branch: refs/heads/YARN-7402
Commit: c05b5d424b000bab766f57e88a07f2b4e9a56647
Parents: 4cc0c9b
Author: Rohith Sharma K S 
Authored: Thu May 24 14:19:46 2018 +0530
Committer: Rohith Sharma K S 
Committed: Thu May 24 14:19:46 2018 +0530

--
 .../hadoop/yarn/conf/YarnConfiguration.java | 11 +++-
 .../yarn/conf/TestYarnConfigurationFields.java  |  2 +
 .../src/main/resources/yarn-default.xml |  2 +-
 .../nodemanager/webapp/NMWebServices.java   | 63 +-
 .../webapp/TestNMWebServicesApps.java   | 68 +---
 .../server/resourcemanager/ClientRMService.java | 10 +--
 .../resourcemanager/webapp/RMWebServices.java   |  8 +--
 .../reader/TimelineReaderWebServices.java   | 33 ++
 8 files changed, 175 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c05b5d42/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 6d08831..004a59f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -121,6 +121,10 @@ public class YarnConfiguration extends Configuration {
 new DeprecationDelta(RM_ZK_RETRY_INTERVAL_MS,
 CommonConfigurationKeys.ZK_RETRY_INTERVAL_MS),
 });
+Configuration.addDeprecations(new DeprecationDelta[] {
+new DeprecationDelta("yarn.resourcemanager.display.per-user-apps",
+FILTER_ENTITY_LIST_BY_USER)
+});
   }
 
   //Configurations
@@ -3569,11 +3573,16 @@ public class YarnConfiguration extends Configuration {
   public static final String NM_SCRIPT_BASED_NODE_LABELS_PROVIDER_SCRIPT_OPTS =
   NM_SCRIPT_BASED_NODE_LABELS_PROVIDER_PREFIX + "opts";
 
-  /*
+  /**
* Support to view apps for given user in secure cluster.
+   * @deprecated This field is deprecated for {@link #FILTER_ENTITY_LIST_BY_USER}
*/
+  @Deprecated
   public static final String DISPLAY_APPS_FOR_LOGGED_IN_USER =
   RM_PREFIX + "display.per-user-apps";
+
+  public static final String FILTER_ENTITY_LIST_BY_USER =
+  "yarn.webapp.filter-entity-list-by-user";
   public static final boolean DEFAULT_DISPLAY_APPS_FOR_LOGGED_IN_USER =
   false;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c05b5d42/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
index f4d1ac0..b9ba543 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -182,6 +182,8 @@ public class TestYarnConfigurationFields extends 
TestConfigurationFieldsBase {
 // Ignore deprecated properties
 configurationPrefixToSkipCompare
 .add(YarnConfiguration.YARN_CLIENT_APP_SUBMISSION_POLL_INTERVAL_MS);
+configurationPrefixToSkipCompare
+.add(YarnConfiguration.DISPLAY_APPS_FOR_LOGGED_IN_USER);
 
 // Allocate for usage
 xmlPropsToSkipCompare = new HashSet();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c05b5d42/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index da44ccb..c82474c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-y

[39/50] [abbrv] hadoop git commit: HADOOP-15498. TestHadoopArchiveLogs (#testGenerateScript, #testPrepareWorkingDir) fails on Windows. Contributed by Anbang Hu.

2018-05-29 Thread botong
HADOOP-15498. TestHadoopArchiveLogs (#testGenerateScript, 
#testPrepareWorkingDir) fails on Windows. Contributed by Anbang Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8fdc993a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8fdc993a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8fdc993a

Branch: refs/heads/YARN-7402
Commit: 8fdc993a993728c65084d7dc3ac469059cb1f603
Parents: 9dbf4f0
Author: Inigo Goiri 
Authored: Mon May 28 16:45:42 2018 -0700
Committer: Inigo Goiri 
Committed: Mon May 28 16:45:42 2018 -0700

--
 .../org/apache/hadoop/tools/TestHadoopArchiveLogs.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8fdc993a/hadoop-tools/hadoop-archive-logs/src/test/java/org/apache/hadoop/tools/TestHadoopArchiveLogs.java
--
diff --git 
a/hadoop-tools/hadoop-archive-logs/src/test/java/org/apache/hadoop/tools/TestHadoopArchiveLogs.java
 
b/hadoop-tools/hadoop-archive-logs/src/test/java/org/apache/hadoop/tools/TestHadoopArchiveLogs.java
index 2ddd4c5..a1b662c 100644
--- 
a/hadoop-tools/hadoop-archive-logs/src/test/java/org/apache/hadoop/tools/TestHadoopArchiveLogs.java
+++ 
b/hadoop-tools/hadoop-archive-logs/src/test/java/org/apache/hadoop/tools/TestHadoopArchiveLogs.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.util.Shell;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationReport;
 import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
@@ -278,7 +279,7 @@ public class TestHadoopArchiveLogs {
 hal.generateScript(localScript);
 Assert.assertTrue(localScript.exists());
 String script = IOUtils.toString(localScript.toURI());
-String[] lines = script.split(System.lineSeparator());
+String[] lines = script.split("\n");
 Assert.assertEquals(22, lines.length);
 Assert.assertEquals("#!/bin/bash", lines[0]);
 Assert.assertEquals("set -e", lines[1]);
@@ -368,7 +369,8 @@ public class TestHadoopArchiveLogs {
 Assert.assertTrue(dirPrepared);
 Assert.assertTrue(fs.exists(workingDir));
 Assert.assertEquals(
-new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL, true),
+new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL,
+!Shell.WINDOWS),
 fs.getFileStatus(workingDir).getPermission());
 // Throw a file in the dir
 Path dummyFile = new Path(workingDir, "dummy.txt");
@@ -381,7 +383,8 @@ public class TestHadoopArchiveLogs {
 Assert.assertTrue(fs.exists(workingDir));
 Assert.assertTrue(fs.exists(dummyFile));
 Assert.assertEquals(
-new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL, true),
+new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL,
+!Shell.WINDOWS),
 fs.getFileStatus(workingDir).getPermission());
 // -force is true and the dir exists, so it will recreate it and the dummy
 // won't exist anymore
@@ -390,7 +393,8 @@ public class TestHadoopArchiveLogs {
 Assert.assertTrue(dirPrepared);
 Assert.assertTrue(fs.exists(workingDir));
 Assert.assertEquals(
-new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL, true),
+new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL,
+!Shell.WINDOWS),
 fs.getFileStatus(workingDir).getPermission());
 Assert.assertFalse(fs.exists(dummyFile));
   }
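The `split("\n")` change is the heart of the test fix: `generateScript` always emits a bash script with Unix line endings, while `System.lineSeparator()` is `"\r\n"` on Windows, so splitting on it there returns the whole script as a single element. A small sketch of the difference:

```java
// Why splitting a Unix-line-ended script on System.lineSeparator() breaks
// on Windows: there is no "\r\n" in the text, so split finds no delimiter.
public class SplitSketch {
  public static void main(String[] args) {
    String script = "#!/bin/bash\nset -e\nset -x\n";
    String[] unixSplit = script.split("\n");      // 3 lines on any platform
    String[] crlfSplit = script.split("\r\n");    // 1 element: no CRLF present
    System.out.println(unixSplit.length);   // 3
    System.out.println(crlfSplit.length);   // 1
  }
}
```

The same platform dependence motivates the `!Shell.WINDOWS` change: the sticky bit asserted by `new FsPermission(..., true)` is not supported on Windows local filesystems.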


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[20/50] [abbrv] hadoop git commit: YARN-8357. Fixed NPE when YARN service is saved and not deployed. Contributed by Chandni Singh

2018-05-29 Thread botong
YARN-8357.  Fixed NPE when YARN service is saved and not deployed.
Contributed by Chandni Singh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d9852eb5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d9852eb5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d9852eb5

Branch: refs/heads/YARN-7402
Commit: d9852eb5897a25323ab0302c2c0decb61d310e5e
Parents: 7ff5a40
Author: Eric Yang 
Authored: Thu May 24 16:32:13 2018 -0400
Committer: Eric Yang 
Committed: Thu May 24 16:32:13 2018 -0400

--
 .../java/org/apache/hadoop/yarn/service/client/ServiceClient.java   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d9852eb5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
index 93a74e3..0ab3322 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
@@ -1198,6 +1198,7 @@ public class ServiceClient extends AppAdminClient 
implements SliderExitCodes,
 ServiceApiUtil.validateNameFormat(serviceName, getConfig());
 Service appSpec = new Service();
 appSpec.setName(serviceName);
+appSpec.setState(ServiceState.STOPPED);
 ApplicationId currentAppId = getAppId(serviceName);
 if (currentAppId == null) {
   LOG.info("Service {} does not have an application ID", serviceName);





[18/50] [abbrv] hadoop git commit: HDDS-80. Remove SendContainerCommand from SCM. Contributed by Nanda Kumar.

2018-05-29 Thread botong
HDDS-80. Remove SendContainerCommand from SCM. Contributed by Nanda Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d19e7d0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d19e7d0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d19e7d0

Branch: refs/heads/YARN-7402
Commit: 2d19e7d08f031341078a36fee74860c58de02993
Parents: c9b63de
Author: Xiaoyu Yao 
Authored: Thu May 24 11:10:30 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu May 24 11:10:30 2018 -0700

--
 .../statemachine/DatanodeStateMachine.java  |   3 -
 .../commandhandler/ContainerReportHandler.java  | 114 ---
 .../states/endpoint/HeartbeatEndpointTask.java  |   5 -
 .../protocol/commands/SendContainerCommand.java |  80 -
 .../StorageContainerDatanodeProtocol.proto  |  16 ++-
 .../container/replication/InProgressPool.java   |  57 --
 .../scm/server/SCMDatanodeProtocolServer.java   |   7 --
 7 files changed, 7 insertions(+), 275 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d19e7d0/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
index a16bfdc..a8fe494 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
@@ -26,8 +26,6 @@ import 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler
 import org.apache.hadoop.ozone.container.common.statemachine.commandhandler
 .CommandDispatcher;
 import org.apache.hadoop.ozone.container.common.statemachine.commandhandler
-.ContainerReportHandler;
-import org.apache.hadoop.ozone.container.common.statemachine.commandhandler
 .DeleteBlocksCommandHandler;
 import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
 import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
@@ -88,7 +86,6 @@ public class DatanodeStateMachine implements Closeable {
  // When we add new handlers just adding a new handler here should do the
  // trick.
 commandDispatcher = CommandDispatcher.newBuilder()
-.addHandler(new ContainerReportHandler())
 .addHandler(new CloseContainerHandler())
 .addHandler(new DeleteBlocksCommandHandler(
 container.getContainerManager(), conf))

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d19e7d0/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ContainerReportHandler.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ContainerReportHandler.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ContainerReportHandler.java
deleted file mode 100644
index fbea290..000
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ContainerReportHandler.java
+++ /dev/null
@@ -1,114 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership.  The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations 
under
- * the License.
- */
-package org.apache.hadoop.ozone.container.common.statemachine.commandhandler;
-
-import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.ContainerReportsRequestProto;
-import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.SCMCmdType;
-import org.apache.hadoop.ozone.container.common.statemachine
-.EndpointStateMach

[30/50] [abbrv] hadoop git commit: YARN-8213. Add Capacity Scheduler performance metrics. (Weiwei Yang via wangda)

2018-05-29 Thread botong
YARN-8213. Add Capacity Scheduler performance metrics. (Weiwei Yang via wangda)

Change-Id: Ieea6f3eeb83c90cd74233fea896f0fcd0f325d5f


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f24c842d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f24c842d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f24c842d

Branch: refs/heads/YARN-7402
Commit: f24c842d52e166e8566337ef93c96438f1c870d8
Parents: 8605a38
Author: Wangda Tan 
Authored: Fri May 25 21:53:20 2018 -0700
Committer: Wangda Tan 
Committed: Fri May 25 21:53:20 2018 -0700

--
 .../server/resourcemanager/ResourceManager.java |   1 +
 .../scheduler/AbstractYarnScheduler.java|   5 +
 .../scheduler/ResourceScheduler.java|   5 +
 .../scheduler/capacity/CapacityScheduler.java   |  31 -
 .../capacity/CapacitySchedulerMetrics.java  | 119 +++
 .../TestCapacitySchedulerMetrics.java   | 110 +
 6 files changed, 269 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f24c842d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
index 05745ec..c533111 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
@@ -1216,6 +1216,7 @@ public class ResourceManager extends CompositeService 
implements Recoverable {
   void reinitialize(boolean initialize) {
 ClusterMetrics.destroy();
 QueueMetrics.clearQueueMetrics();
+getResourceScheduler().resetSchedulerMetrics();
 if (initialize) {
   resetRMContext();
   createAndInitActiveServices(true);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f24c842d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index b2747f7..18c7b4e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -1464,4 +1464,9 @@ public abstract class AbstractYarnScheduler
   SchedulingRequest schedulingRequest, SchedulerNode schedulerNode) {
 return false;
   }
+
+  @Override
+  public void resetSchedulerMetrics() {
+// reset scheduler metrics
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f24c842d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceScheduler.java
index 5a56ac7..dcb6edd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resour

[03/50] [abbrv] hadoop git commit: HDFS-13587. TestQuorumJournalManager fails on Windows. Contributed by Anbang Hu.

2018-05-29 Thread botong
HDFS-13587. TestQuorumJournalManager fails on Windows. Contributed by Anbang Hu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c13dea87
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c13dea87
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c13dea87

Branch: refs/heads/YARN-7402
Commit: c13dea87d9de7a9872fc8b0c939b41b1666a61e5
Parents: 51ce02b
Author: Inigo Goiri 
Authored: Wed May 23 11:36:03 2018 -0700
Committer: Inigo Goiri 
Committed: Wed May 23 11:36:03 2018 -0700

--
 .../org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java | 5 +
 .../hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java   | 3 ++-
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c13dea87/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java
index 2314e22..f936d75 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java
@@ -37,6 +37,7 @@ import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager;
 import org.apache.hadoop.hdfs.qjournal.server.JournalNode;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.net.NetUtils;
 
 import com.google.common.base.Joiner;
@@ -50,6 +51,10 @@ public class MiniJournalCluster {
 private int numJournalNodes = 3;
 private boolean format = true;
 private final Configuration conf;
+
+static {
+  DefaultMetricsSystem.setMiniClusterMode(true);
+}
 
 public Builder(Configuration conf) {
   this.conf = conf;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c13dea87/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
index 34a0348..69856ae 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
@@ -93,7 +93,8 @@ public class TestQuorumJournalManager {
 
 conf.setInt(CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY, 0);
 
 cluster = new MiniJournalCluster.Builder(conf)
-  .build();
+.baseDir(GenericTestUtils.getRandomizedTestDir().getAbsolutePath())
+.build();
 cluster.waitActive();
 
 qjm = createSpyingQJM();





[28/50] [abbrv] hadoop git commit: HDDS-113. Rest and Rpc Client should verify resource name using HddsClientUtils. Contributed by Lokesh Jain.

2018-05-29 Thread botong
HDDS-113. Rest and Rpc Client should verify resource name using HddsClientUtils.
Contributed by Lokesh Jain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2a9652e6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2a9652e6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2a9652e6

Branch: refs/heads/YARN-7402
Commit: 2a9652e69650973f6158b60ff131215827738db6
Parents: 13d2528
Author: Anu Engineer 
Authored: Fri May 25 15:40:46 2018 -0700
Committer: Anu Engineer 
Committed: Fri May 25 15:45:50 2018 -0700

--
 .../hadoop/hdds/scm/client/HddsClientUtils.java | 23 +
 .../apache/hadoop/ozone/client/ObjectStore.java |  9 
 .../apache/hadoop/ozone/client/OzoneBucket.java | 24 +
 .../apache/hadoop/ozone/client/OzoneVolume.java | 18 +--
 .../hadoop/ozone/client/rest/RestClient.java| 52 
 .../hadoop/ozone/client/rpc/RpcClient.java  | 46 +++--
 6 files changed, 64 insertions(+), 108 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a9652e6/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
index bc5f8d6..a6813eb 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
@@ -170,6 +170,29 @@ public final class HddsClientUtils {
   }
 
   /**
+   * verifies that bucket / volume name is a valid DNS name.
+   *
+   * @param resourceNames Array of bucket / volume names to be verified.
+   */
+  public static void verifyResourceName(String... resourceNames) {
+for (String resourceName : resourceNames) {
+  HddsClientUtils.verifyResourceName(resourceName);
+}
+  }
+
+  /**
+   * Checks that object parameters passed as reference is not null.
+   *
+   * @param references Array of object references to be checked.
+   * @param 
+   */
+  public static  void checkNotNull(T... references) {
+for (T ref: references) {
+  Preconditions.checkNotNull(ref);
+}
+  }
+
+  /**
* Returns the cache value to be used for list calls.
* @param conf Configuration object
* @return list cache size

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a9652e6/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
--
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
index d8b3011..c5f0689 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
@@ -63,8 +63,6 @@ public class ObjectStore {
* @throws IOException
*/
   public void createVolume(String volumeName) throws IOException {
-Preconditions.checkNotNull(volumeName);
-HddsClientUtils.verifyResourceName(volumeName);
 proxy.createVolume(volumeName);
   }
 
@@ -76,9 +74,6 @@ public class ObjectStore {
*/
   public void createVolume(String volumeName, VolumeArgs volumeArgs)
   throws IOException {
-Preconditions.checkNotNull(volumeName);
-Preconditions.checkNotNull(volumeArgs);
-HddsClientUtils.verifyResourceName(volumeName);
 proxy.createVolume(volumeName, volumeArgs);
   }
 
@@ -89,8 +84,6 @@ public class ObjectStore {
* @throws IOException
*/
   public OzoneVolume getVolume(String volumeName) throws IOException {
-Preconditions.checkNotNull(volumeName);
-HddsClientUtils.verifyResourceName(volumeName);
 OzoneVolume volume = proxy.getVolumeDetails(volumeName);
 return volume;
   }
@@ -150,8 +143,6 @@ public class ObjectStore {
* @throws IOException
*/
   public void deleteVolume(String volumeName) throws IOException {
-Preconditions.checkNotNull(volumeName);
-HddsClientUtils.verifyResourceName(volumeName);
 proxy.deleteVolume(volumeName);
   }
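The two varargs helpers added to HddsClientUtils let RestClient and RpcClient validate several volume/bucket names, or null-check several references, in a single call instead of repeating per-argument lines at every call site (as the removed ObjectStore checks above did). A standalone sketch of the same pattern; the DNS-name regex here is a simplified assumption for illustration, not HddsClientUtils' actual validation rules:

```java
// Sketch of the HDDS-113 varargs-validation pattern: one call covers any
// number of arguments, failing fast on the first invalid one.
public class VarargsCheckSketch {
  /** Verifies each name against a simplified DNS-style rule (assumption). */
  static void verifyResourceName(String... names) {
    for (String n : names) {
      if (n == null || !n.matches("[a-z0-9][a-z0-9.-]*")) {
        throw new IllegalArgumentException("invalid resource name: " + n);
      }
    }
  }

  /** Null-checks every reference passed in. */
  @SafeVarargs
  static <T> void checkNotNull(T... references) {
    for (T ref : references) {
      if (ref == null) {
        throw new NullPointerException();
      }
    }
  }

  public static void main(String[] args) {
    verifyResourceName("vol1", "bucket-1"); // both valid, no exception
    checkNotNull("a", 42);                  // both non-null, no exception
  }
}
```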
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a9652e6/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
--
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
index 5df0254..2f3cff6 100644
--- 
a/hadoop-ozone/client/src

[12/50] [abbrv] hadoop git commit: YARN-8346. Upgrading to 3.1 kills running containers with error 'Opportunistic container queue is full'. Contributed by Jason Lowe.

2018-05-29 Thread botong
YARN-8346. Upgrading to 3.1 kills running containers with error 'Opportunistic 
container queue is full'. Contributed by Jason Lowe.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4cc0c9b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4cc0c9b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4cc0c9b0

Branch: refs/heads/YARN-7402
Commit: 4cc0c9b0baa93f5a1c0623eee353874e858a7caa
Parents: 7a87add
Author: Rohith Sharma K S 
Authored: Thu May 24 12:23:47 2018 +0530
Committer: Rohith Sharma K S 
Committed: Thu May 24 12:23:47 2018 +0530

--
 .../yarn/security/ContainerTokenIdentifier.java |  4 ++--
 .../yarn/security/TestYARNTokenIdentifier.java  | 25 
 2 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4cc0c9b0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
index 37c74b8..8dea65f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java
@@ -292,7 +292,7 @@ public class ContainerTokenIdentifier extends 
TokenIdentifier {
*/
   public ContainerType getContainerType(){
 if (!proto.hasContainerType()) {
-  return null;
+  return ContainerType.TASK;
 }
 return convertFromProtoFormat(proto.getContainerType());
   }
@@ -303,7 +303,7 @@ public class ContainerTokenIdentifier extends 
TokenIdentifier {
*/
   public ExecutionType getExecutionType(){
 if (!proto.hasExecutionType()) {
-  return null;
+  return ExecutionType.GUARANTEED;
 }
 return convertFromProtoFormat(proto.getExecutionType());
   }
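The getContainerType()/getExecutionType() changes restore backward compatibility: tokens minted before these proto fields existed now decode to the historical defaults instead of null, which is what made upgraded NodeManagers treat recovered containers as opportunistic. A dependency-free sketch of the fallback, with toy enums standing in for the YARN records and `hasField` modeling protobuf's hasContainerType()/hasExecutionType() presence checks:

```java
// Sketch of default-on-missing decoding for optional proto fields: absent
// fields map to the pre-upgrade defaults (TASK / GUARANTEED), never null.
public class TokenDefaultsSketch {
  enum ContainerType { TASK, APPLICATION_MASTER }
  enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

  static ContainerType containerType(boolean hasField, ContainerType wire) {
    return hasField ? wire : ContainerType.TASK;
  }

  static ExecutionType executionType(boolean hasField, ExecutionType wire) {
    return hasField ? wire : ExecutionType.GUARANTEED;
  }

  public static void main(String[] args) {
    // A token from an old cluster carries neither field.
    System.out.println(containerType(false, null));  // TASK
    System.out.println(executionType(false, null));  // GUARANTEED
  }
}
```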

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4cc0c9b0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java
index 51fbe9a..8109b5e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java
@@ -37,6 +37,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.impl.pb.LogAggregationContextPBImpl;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;
+import 
org.apache.hadoop.yarn.proto.YarnSecurityTokenProtos.ContainerTokenIdentifierProto;
 import 
org.apache.hadoop.yarn.proto.YarnSecurityTokenProtos.YARNDelegationTokenIdentifierProto;
 import org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier;
 import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier;
@@ -170,6 +171,30 @@ public class TestYARNTokenIdentifier {
   }
 
   @Test
+  public void testContainerTokenIdentifierProtoMissingFields()
+  throws IOException {
+ContainerTokenIdentifierProto.Builder builder =
+ContainerTokenIdentifierProto.newBuilder();
+ContainerTokenIdentifierProto proto = builder.build();
+Assert.assertFalse(proto.hasContainerType());
+Assert.assertFalse(proto.hasExecutionType());
+Assert.assertFalse(proto.hasNodeLabelExpression());
+
+byte[] tokenData = proto.toByteArray();
+DataInputBuffer dib = new DataInputBuffer();
+dib.reset(tokenData, tokenData.length);
+ContainerTokenIdentifier tid = new ContainerTokenIdentifier();
+tid.readFields(dib);
+
+Assert.assertEquals("container type",
+ContainerType.TASK, tid.getContainerType());
+Assert.assertEquals("execution type",
+ExecutionType.GUARANTEED, tid.getExecutionType());
+Assert.assertEquals("node label expression",
+CommonNodeLabelsManager.NO_LABEL, tid.getNodeLabelExpression());
+  }
+
+  @Test
   public void testContainerTokenIdentifier() throws IOException {
 testContainerToken

[05/50] [abbrv] hadoop git commit: HDFS-13493. Reduce the HttpServer2 thread count on DataNodes. Contributed by Erik Krogen.

2018-05-29 Thread botong
HDFS-13493. Reduce the HttpServer2 thread count on DataNodes. Contributed by 
Erik Krogen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cddbbe5f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cddbbe5f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cddbbe5f

Branch: refs/heads/YARN-7402
Commit: cddbbe5f690e4617413f6e986adc6fa900629f03
Parents: e30938a
Author: Inigo Goiri 
Authored: Wed May 23 12:12:08 2018 -0700
Committer: Inigo Goiri 
Committed: Wed May 23 12:12:08 2018 -0700

--
 .../hdfs/server/datanode/web/DatanodeHttpServer.java  | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cddbbe5f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
index 0ce327a..4349c26 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
@@ -89,6 +89,13 @@ public class DatanodeHttpServer implements Closeable {
   private InetSocketAddress httpsAddress;
   static final Log LOG = LogFactory.getLog(DatanodeHttpServer.class);
 
+  // HttpServer threads are only used for the web UI and basic servlets, so
+  // set them to the minimum possible
+  private static final int HTTP_SELECTOR_THREADS = 1;
+  private static final int HTTP_ACCEPTOR_THREADS = 1;
+  private static final int HTTP_MAX_THREADS =
+  HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1;
+
   public DatanodeHttpServer(final Configuration conf,
   final DataNode datanode,
   final ServerSocketChannel externalHttpChannel)
@@ -97,7 +104,12 @@ public class DatanodeHttpServer implements Closeable {
 this.conf = conf;
 
 Configuration confForInfoServer = new Configuration(conf);
-confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS_KEY, 10);
+confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS_KEY,
+HTTP_MAX_THREADS);
+confForInfoServer.setInt(HttpServer2.HTTP_SELECTOR_COUNT_KEY,
+HTTP_SELECTOR_THREADS);
+confForInfoServer.setInt(HttpServer2.HTTP_ACCEPTOR_COUNT_KEY,
+HTTP_ACCEPTOR_THREADS);
 int proxyPort =
 confForInfoServer.getInt(DFS_DATANODE_HTTP_INTERNAL_PROXY_PORT, 0);
 HttpServer2.Builder builder = new HttpServer2.Builder()
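The sizing logic of this patch is small enough to show in isolation: a Jetty-backed server needs at least the selector and acceptor threads plus one worker to make progress, so the max is derived from the other two. The sketch below uses a plain `Map` as a stand-in for Hadoop's `Configuration`, and the key strings are assumed to match the `HttpServer2` constants referenced in the diff.

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Constants mirror the patch: one selector, one acceptor, plus one
    // worker thread is the minimum for the web UI and basic servlets.
    static final int HTTP_SELECTOR_THREADS = 1;
    static final int HTTP_ACCEPTOR_THREADS = 1;
    static final int HTTP_MAX_THREADS =
        HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1;

    // Map stands in for Hadoop's Configuration; the key names are assumed
    // values of HTTP_MAX_THREADS_KEY / HTTP_SELECTOR_COUNT_KEY / HTTP_ACCEPTOR_COUNT_KEY.
    static Map<String, Integer> buildConf() {
        Map<String, Integer> conf = new HashMap<>();
        conf.put("hadoop.http.max.threads", HTTP_MAX_THREADS);
        conf.put("hadoop.http.selector.count", HTTP_SELECTOR_THREADS);
        conf.put("hadoop.http.acceptor.count", HTTP_ACCEPTOR_THREADS);
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(buildConf()); // max threads = 3
    }
}
```

Previously the DataNode hard-coded 10 threads; deriving the max from the selector and acceptor counts keeps the three settings consistent if one is ever changed.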


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[35/50] [abbrv] hadoop git commit: HADOOP-15449. Increase default timeout of ZK session to avoid frequent NameNode failover

2018-05-29 Thread botong
HADOOP-15449. Increase default timeout of ZK session to avoid frequent NameNode 
failover

Signed-off-by: Akira Ajisaka 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/61df174e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/61df174e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/61df174e

Branch: refs/heads/YARN-7402
Commit: 61df174e8b3d582183306cabfa2347c8b96322ff
Parents: 04757e5
Author: Karthik Palanisamy 
Authored: Mon May 28 19:41:07 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon May 28 19:41:07 2018 +0900

--
 .../src/main/java/org/apache/hadoop/ha/ZKFailoverController.java   | 2 +-
 .../hadoop-common/src/main/resources/core-default.xml  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/61df174e/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
index a8c19ab..9295288 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
@@ -63,7 +63,7 @@ public abstract class ZKFailoverController {
   
   public static final String ZK_QUORUM_KEY = "ha.zookeeper.quorum";
   private static final String ZK_SESSION_TIMEOUT_KEY = 
"ha.zookeeper.session-timeout.ms";
-  private static final int ZK_SESSION_TIMEOUT_DEFAULT = 5*1000;
+  private static final int ZK_SESSION_TIMEOUT_DEFAULT = 10*1000;
   private static final String ZK_PARENT_ZNODE_KEY = 
"ha.zookeeper.parent-znode";
   public static final String ZK_ACL_KEY = "ha.zookeeper.acl";
   private static final String ZK_ACL_DEFAULT = "world:anyone:rwcda";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/61df174e/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 9564587..75acf48 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -2168,7 +2168,7 @@
 
 
   ha.zookeeper.session-timeout.ms
-  5000
+  10000
   
 The session timeout to use when the ZKFC connects to ZooKeeper.
 Setting this value to a lower value implies that server crashes
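The change is a pure configuration-default bump, and the lookup pattern is worth making concrete: the Java constant and the `core-default.xml` value must agree, with the constant winning only when the key is unset. A minimal sketch, using `Properties` as a stand-in for Hadoop's `Configuration`:

```java
import java.util.Properties;

public class Main {
    // Key and new default mirror the patch to ZKFailoverController.
    static final String ZK_SESSION_TIMEOUT_KEY = "ha.zookeeper.session-timeout.ms";
    static final int ZK_SESSION_TIMEOUT_DEFAULT = 10 * 1000;

    // Explicit setting wins; otherwise fall back to the compiled-in default.
    static int sessionTimeout(Properties conf) {
        String v = conf.getProperty(ZK_SESSION_TIMEOUT_KEY);
        return v != null ? Integer.parseInt(v.trim()) : ZK_SESSION_TIMEOUT_DEFAULT;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();               // nothing set: use default
        System.out.println(sessionTimeout(conf));         // 10000
        conf.setProperty(ZK_SESSION_TIMEOUT_KEY, "5000"); // explicit override
        System.out.println(sessionTimeout(conf));         // 5000
    }
}
```

Operators who prefer the old 5-second failover detection can still set the key explicitly; only the unset-key behavior changes.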





hadoop git commit: Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

2018-05-29 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9502b47bd -> e3236a968


Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e3236a96
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e3236a96
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e3236a96

Branch: refs/heads/trunk
Commit: e3236a9680709de7a95ffbc11b20e1bdc95a8605
Parents: 9502b47
Author: Kihwal Lee 
Authored: Tue May 29 14:15:12 2018 -0500
Committer: Kihwal Lee 
Committed: Tue May 29 14:15:12 2018 -0500

--
 .../java/org/apache/hadoop/util/RunJar.java | 10 +
 .../java/org/apache/hadoop/util/TestRunJar.java | 42 
 2 files changed, 52 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e3236a96/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index f1b643c..4c94dbc 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -117,12 +117,17 @@ public class RunJar {
   throws IOException {
 try (JarInputStream jar = new JarInputStream(inputStream)) {
   int numOfFailedLastModifiedSet = 0;
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
   for (JarEntry entry = jar.getNextJarEntry();
entry != null;
entry = jar.getNextJarEntry()) {
 if (!entry.isDirectory() &&
 unpackRegex.matcher(entry.getName()).matches()) {
   File file = new File(toDir, entry.getName());
+  if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+throw new IOException("expanding " + entry.getName()
++ " would create file outside of " + toDir);
+  }
   ensureDirectory(file.getParentFile());
   try (OutputStream out = new FileOutputStream(file)) {
 IOUtils.copyBytes(jar, out, BUFFER_SIZE);
@@ -182,6 +187,7 @@ public class RunJar {
   throws IOException {
 try (JarFile jar = new JarFile(jarFile)) {
   int numOfFailedLastModifiedSet = 0;
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
   Enumeration entries = jar.entries();
   while (entries.hasMoreElements()) {
 final JarEntry entry = entries.nextElement();
@@ -189,6 +195,10 @@ public class RunJar {
 unpackRegex.matcher(entry.getName()).matches()) {
   try (InputStream in = jar.getInputStream(entry)) {
 File file = new File(toDir, entry.getName());
+if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+  throw new IOException("expanding " + entry.getName()
+  + " would create file outside of " + toDir);
+}
 ensureDirectory(file.getParentFile());
 try (OutputStream out = new FileOutputStream(file)) {
   IOUtils.copyBytes(in, out, BUFFER_SIZE);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e3236a96/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
index ea07b97..a8c27d4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
@@ -21,6 +21,7 @@ import static org.apache.hadoop.util.RunJar.MATCH_ANY;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 import static org.mockito.Matchers.any;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.times;
@@ -32,6 +33,7 @@ import java.io.FileInputStream;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
 import java.util.Random;
 import java.util.jar.JarEntry;
 import java.util.jar.JarOutputStream;
@@ -255,4 +257,44 @@ public class TestRunJar {
 // it should not throw an exception
 verify(runJar, times(0)).unJar(any(File.class), any(File.class));
   }
+
+  @Test
+  public void testUnJar2() throws IOException 
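The guard this commit adds is the standard defense against "zip slip": a crafted entry name such as `../outside.path` must not resolve outside the extraction directory. The canonical-path prefix test from the patch, extracted into a runnable sketch (`resolveEntry` is a hypothetical helper name, not part of `RunJar`):

```java
import java.io.File;
import java.io.IOException;

public class Main {
    // Same check as the patch: canonicalize both the target directory and
    // the resolved entry, and require the entry to stay under the directory.
    static File resolveEntry(File toDir, String entryName) throws IOException {
        String targetDirPath = toDir.getCanonicalPath() + File.separator;
        File file = new File(toDir, entryName);
        if (!file.getCanonicalPath().startsWith(targetDirPath)) {
            throw new IOException("expanding " + entryName
                + " would create file outside of " + toDir);
        }
        return file;
    }

    public static void main(String[] args) throws IOException {
        File toDir = new File(System.getProperty("java.io.tmpdir"), "unjar");
        System.out.println(resolveEntry(toDir, "META-INF/MANIFEST.MF")); // stays inside
        try {
            resolveEntry(toDir, "../outside.path"); // escapes: must be rejected
        } catch (IOException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Canonicalization (rather than a plain string check on the entry name) also catches mixed forms like `a/../../b`; appending `File.separator` to the prefix prevents a sibling directory such as `unjar-evil` from passing the `startsWith` test.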

hadoop git commit: Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

2018-05-29 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 d5708bbcd -> 65e55097d


Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/65e55097
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/65e55097
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/65e55097

Branch: refs/heads/branch-3.0
Commit: 65e55097da2bb3f2fbdf9ba1946da25fe58bec98
Parents: d5708bb
Author: Kihwal Lee 
Authored: Tue May 29 14:30:29 2018 -0500
Committer: Kihwal Lee 
Committed: Tue May 29 14:30:29 2018 -0500

--
 .../java/org/apache/hadoop/util/RunJar.java |  5 +++
 .../java/org/apache/hadoop/util/TestRunJar.java | 36 
 2 files changed, 41 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/65e55097/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index 19b51ad..678e459 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -109,6 +109,7 @@ public class RunJar {
   throws IOException {
 try (JarFile jar = new JarFile(jarFile)) {
   int numOfFailedLastModifiedSet = 0;
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
   Enumeration entries = jar.entries();
   while (entries.hasMoreElements()) {
 final JarEntry entry = entries.nextElement();
@@ -117,6 +118,10 @@ public class RunJar {
   try (InputStream in = jar.getInputStream(entry)) {
 File file = new File(toDir, entry.getName());
 ensureDirectory(file.getParentFile());
+if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+  throw new IOException("expanding " + entry.getName()
+  + " would create file outside of " + toDir);
+}
 try (OutputStream out = new FileOutputStream(file)) {
   IOUtils.copyBytes(in, out, BUFFER_SIZE);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/65e55097/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
index 7b61b32..cb2cfa8 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
@@ -20,12 +20,15 @@ package org.apache.hadoop.util;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;
 
 import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.jar.JarEntry;
 import java.util.jar.JarOutputStream;
 import java.util.regex.Pattern;
 import java.util.zip.ZipEntry;
@@ -165,4 +168,37 @@ public class TestRunJar {
 runJar.run(args);
 // it should not throw an exception
   }
+
+  @Test
+  public void testUnJar2() throws IOException {
+// make a simple zip
+File jarFile = new File(TEST_ROOT_DIR, TEST_JAR_NAME);
+JarOutputStream jstream =
+new JarOutputStream(new FileOutputStream(jarFile));
+JarEntry je = new JarEntry("META-INF/MANIFEST.MF");
+byte[] data = "Manifest-Version: 1.0\nCreated-By: 1.8.0_1 (Manual)"
+.getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+je = new JarEntry("../outside.path");
+data = "any data here".getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+jstream.close();
+
+File unjarDir = getUnjarDir("unjar-path");
+
+// Unjar everything
+try {
+  RunJar.unJar(jarFile, unjarDir);
+  fail("unJar should throw IOException.");
+} catch (IOException e) {
+  GenericTestUtils.assertExceptionContains(
+  "would create file outside of", e);
+}
+  }
 }
\ No newline at end of file



hadoop git commit: Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

2018-05-29 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 09fbbff69 -> 6d7d192e4


Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

(cherry picked from commit 65e55097da2bb3f2fbdf9ba1946da25fe58bec98)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6d7d192e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6d7d192e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6d7d192e

Branch: refs/heads/branch-2
Commit: 6d7d192e4799b51931e55217e02baec14d49607b
Parents: 09fbbff
Author: Kihwal Lee 
Authored: Tue May 29 14:32:58 2018 -0500
Committer: Kihwal Lee 
Committed: Tue May 29 14:33:31 2018 -0500

--
 .../java/org/apache/hadoop/util/RunJar.java |  5 +++
 .../java/org/apache/hadoop/util/TestRunJar.java | 36 
 2 files changed, 41 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6d7d192e/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index 19b51ad..678e459 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -109,6 +109,7 @@ public class RunJar {
   throws IOException {
 try (JarFile jar = new JarFile(jarFile)) {
   int numOfFailedLastModifiedSet = 0;
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
   Enumeration entries = jar.entries();
   while (entries.hasMoreElements()) {
 final JarEntry entry = entries.nextElement();
@@ -117,6 +118,10 @@ public class RunJar {
   try (InputStream in = jar.getInputStream(entry)) {
 File file = new File(toDir, entry.getName());
 ensureDirectory(file.getParentFile());
+if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+  throw new IOException("expanding " + entry.getName()
+  + " would create file outside of " + toDir);
+}
 try (OutputStream out = new FileOutputStream(file)) {
   IOUtils.copyBytes(in, out, BUFFER_SIZE);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6d7d192e/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
index 7b61b32..cb2cfa8 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
@@ -20,12 +20,15 @@ package org.apache.hadoop.util;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;
 
 import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.jar.JarEntry;
 import java.util.jar.JarOutputStream;
 import java.util.regex.Pattern;
 import java.util.zip.ZipEntry;
@@ -165,4 +168,37 @@ public class TestRunJar {
 runJar.run(args);
 // it should not throw an exception
   }
+
+  @Test
+  public void testUnJar2() throws IOException {
+// make a simple zip
+File jarFile = new File(TEST_ROOT_DIR, TEST_JAR_NAME);
+JarOutputStream jstream =
+new JarOutputStream(new FileOutputStream(jarFile));
+JarEntry je = new JarEntry("META-INF/MANIFEST.MF");
+byte[] data = "Manifest-Version: 1.0\nCreated-By: 1.8.0_1 (Manual)"
+.getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+je = new JarEntry("../outside.path");
+data = "any data here".getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+jstream.close();
+
+File unjarDir = getUnjarDir("unjar-path");
+
+// Unjar everything
+try {
+  RunJar.unJar(jarFile, unjarDir);
+  fail("unJar should throw IOException.");
+} catch (IOException e) {
+  GenericTestUtils.assertExceptionContains(
+  "would create file outside of", e);
+   

hadoop git commit: Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

2018-05-29 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 c3dce2620 -> 6a4ae6f6e


Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

(cherry picked from commit 65e55097da2bb3f2fbdf9ba1946da25fe58bec98)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6a4ae6f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6a4ae6f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6a4ae6f6

Branch: refs/heads/branch-2.9
Commit: 6a4ae6f6eeed1392a4828a5721fa1499f65bdde4
Parents: c3dce26
Author: Kihwal Lee 
Authored: Tue May 29 14:35:28 2018 -0500
Committer: Kihwal Lee 
Committed: Tue May 29 14:35:28 2018 -0500

--
 .../java/org/apache/hadoop/util/RunJar.java |  5 +++
 .../java/org/apache/hadoop/util/TestRunJar.java | 36 
 2 files changed, 41 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a4ae6f6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index 19b51ad..678e459 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -109,6 +109,7 @@ public class RunJar {
   throws IOException {
 try (JarFile jar = new JarFile(jarFile)) {
   int numOfFailedLastModifiedSet = 0;
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
   Enumeration entries = jar.entries();
   while (entries.hasMoreElements()) {
 final JarEntry entry = entries.nextElement();
@@ -117,6 +118,10 @@ public class RunJar {
   try (InputStream in = jar.getInputStream(entry)) {
 File file = new File(toDir, entry.getName());
 ensureDirectory(file.getParentFile());
+if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+  throw new IOException("expanding " + entry.getName()
+  + " would create file outside of " + toDir);
+}
 try (OutputStream out = new FileOutputStream(file)) {
   IOUtils.copyBytes(in, out, BUFFER_SIZE);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a4ae6f6/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
index 7b61b32..cb2cfa8 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
@@ -20,12 +20,15 @@ package org.apache.hadoop.util;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;
 
 import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.jar.JarEntry;
 import java.util.jar.JarOutputStream;
 import java.util.regex.Pattern;
 import java.util.zip.ZipEntry;
@@ -165,4 +168,37 @@ public class TestRunJar {
 runJar.run(args);
 // it should not throw an exception
   }
+
+  @Test
+  public void testUnJar2() throws IOException {
+// make a simple zip
+File jarFile = new File(TEST_ROOT_DIR, TEST_JAR_NAME);
+JarOutputStream jstream =
+new JarOutputStream(new FileOutputStream(jarFile));
+JarEntry je = new JarEntry("META-INF/MANIFEST.MF");
+byte[] data = "Manifest-Version: 1.0\nCreated-By: 1.8.0_1 (Manual)"
+.getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+je = new JarEntry("../outside.path");
+data = "any data here".getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+jstream.close();
+
+File unjarDir = getUnjarDir("unjar-path");
+
+// Unjar everything
+try {
+  RunJar.unJar(jarFile, unjarDir);
+  fail("unJar should throw IOException.");
+} catch (IOException e) {
+  GenericTestUtils.assertExceptionContains(
+  "would create file outside of", e);

hadoop git commit: Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

2018-05-29 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 74c7024cc -> 3808e5d62


Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3808e5d6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3808e5d6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3808e5d6

Branch: refs/heads/branch-2.8
Commit: 3808e5d62aa21d7393d98fbc9d54b9ad1e79ab99
Parents: 74c7024
Author: Kihwal Lee 
Authored: Tue May 29 14:39:53 2018 -0500
Committer: Kihwal Lee 
Committed: Tue May 29 14:39:53 2018 -0500

--
 .../java/org/apache/hadoop/util/RunJar.java |  5 +++
 .../java/org/apache/hadoop/util/TestRunJar.java | 38 +++-
 2 files changed, 42 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3808e5d6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index 52cf05c..a56f6ea 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -98,6 +98,7 @@ public class RunJar {
 JarFile jar = new JarFile(jarFile);
 try {
   int numOfFailedLastModifiedSet = 0;
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
   Enumeration entries = jar.entries();
   while (entries.hasMoreElements()) {
 final JarEntry entry = entries.nextElement();
@@ -107,6 +108,10 @@ public class RunJar {
   try {
 File file = new File(toDir, entry.getName());
 ensureDirectory(file.getParentFile());
+if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+  throw new IOException("expanding " + entry.getName()
+  + " would create file outside of " + toDir);
+}
 OutputStream out = new FileOutputStream(file);
 try {
   IOUtils.copyBytes(in, out, 8192);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3808e5d6/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
index 7262534..20650c0 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.util;
 
+import static org.junit.Assert.fail;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;
 
@@ -25,6 +26,8 @@ import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.jar.JarEntry;
 import java.util.jar.JarOutputStream;
 import java.util.regex.Pattern;
 import java.util.zip.ZipEntry;
@@ -186,4 +189,37 @@ public class TestRunJar extends TestCase {
 
 return jarFile;
   }
-}
\ No newline at end of file
+
+  @Test
+  public void testUnJar2() throws IOException {
+// make a simple zip
+File jarFile = new File(TEST_ROOT_DIR, TEST_JAR_NAME);
+JarOutputStream jstream =
+new JarOutputStream(new FileOutputStream(jarFile));
+JarEntry je = new JarEntry("META-INF/MANIFEST.MF");
+byte[] data = "Manifest-Version: 1.0\nCreated-By: 1.8.0_1 (Manual)"
+.getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+je = new JarEntry("../outside.path");
+data = "any data here".getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+jstream.close();
+
+File unjarDir = new File(TEST_ROOT_DIR, "unjar-path");
+
+// Unjar everything
+try {
+  RunJar.unJar(jarFile, unjarDir);
+  fail("unJar should throw IOException.");
+} catch (IOException e) {
+  GenericTestUtils.assertExceptionContains(
+  "would create file outside of", e);
+}
+  }
+}



hadoop git commit: YARN-8329. Docker client configuration can still be set incorrectly. Contributed by Shane Kumpf

2018-05-29 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk e3236a968 -> 4827e9a90


YARN-8329. Docker client configuration can still be set incorrectly. 
Contributed by Shane Kumpf


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4827e9a9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4827e9a9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4827e9a9

Branch: refs/heads/trunk
Commit: 4827e9a9085b306bc379cb6e0b1fe4b92326edcd
Parents: e3236a9
Author: Jason Lowe 
Authored: Tue May 29 14:43:17 2018 -0500
Committer: Jason Lowe 
Committed: Tue May 29 14:43:17 2018 -0500

--
 .../yarn/util/DockerClientConfigHandler.java| 23 +++-
 .../security/TestDockerClientConfigHandler.java |  4 ++--
 .../runtime/DockerLinuxContainerRuntime.java|  7 +++---
 3 files changed, 19 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4827e9a9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java
index 5522cf4..8ec4deb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java
@@ -154,14 +154,15 @@ public final class DockerClientConfigHandler {
* @param outConfigFile the File to write the Docker client configuration to.
* @param credentials the populated Credentials object.
* @throws IOException if the write fails.
+   * @return true if a Docker credential is found in the supplied credentials.
*/
-  public static void writeDockerCredentialsToPath(File outConfigFile,
+  public static boolean writeDockerCredentialsToPath(File outConfigFile,
   Credentials credentials) throws IOException {
-ObjectMapper mapper = new ObjectMapper();
-ObjectNode rootNode = mapper.createObjectNode();
-ObjectNode registryUrlNode = mapper.createObjectNode();
 boolean foundDockerCred = false;
 if (credentials.numberOfTokens() > 0) {
+  ObjectMapper mapper = new ObjectMapper();
+  ObjectNode rootNode = mapper.createObjectNode();
+  ObjectNode registryUrlNode = mapper.createObjectNode();
      for (Token<? extends TokenIdentifier> tk : credentials.getAllTokens()) {
 if (tk.getKind().equals(DockerCredentialTokenIdentifier.KIND)) {
   foundDockerCred = true;
@@ -176,12 +177,14 @@ public final class DockerClientConfigHandler {
   }
 }
   }
+  if (foundDockerCred) {
+rootNode.put(CONFIG_AUTHS_KEY, registryUrlNode);
+String json = mapper.writerWithDefaultPrettyPrinter()
+.writeValueAsString(rootNode);
+FileUtils.writeStringToFile(
+outConfigFile, json, StandardCharsets.UTF_8);
+  }
 }
-if (foundDockerCred) {
-  rootNode.put(CONFIG_AUTHS_KEY, registryUrlNode);
-  String json =
-  mapper.writerWithDefaultPrettyPrinter().writeValueAsString(rootNode);
-  FileUtils.writeStringToFile(outConfigFile, json, StandardCharsets.UTF_8);
-}
+return foundDockerCred;
   }
 }
\ No newline at end of file
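For context, the file this handler writes follows Docker's client `config.json` layout: an `auths` object keyed by registry, each entry carrying a base64 credential. Below is a minimal standalone sketch of building that same structure with Jackson (it assumes `jackson-databind` on the classpath; the registry name and auth value are made up for illustration, not taken from the patch):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class DockerConfigSketch {
  // Builds {"auths": {"<registry>": {"auth": "<base64 user:pass>"}}},
  // the shape writeDockerCredentialsToPath() serializes to config.json.
  public static String render(String registry, String base64Auth)
      throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    ObjectNode rootNode = mapper.createObjectNode();
    ObjectNode registryUrlNode = mapper.createObjectNode();
    ObjectNode credNode = mapper.createObjectNode();
    credNode.put("auth", base64Auth);
    registryUrlNode.set(registry, credNode);
    rootNode.set("auths", registryUrlNode);  // CONFIG_AUTHS_KEY in the patch
    return mapper.writerWithDefaultPrettyPrinter().writeValueAsString(rootNode);
  }

  public static void main(String[] args) throws Exception {
    // "registry.example.com" and the base64 value are illustrative only.
    System.out.println(render("registry.example.com", "dXNlcjpwYXNz"));
  }
}
```

The patch's key change is that this object tree is only built and written when a Docker credential token is actually present, which is why the method now returns a boolean.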

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4827e9a9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java
index c4cbe45..cfe5a45 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java
@@ -116,8 +116,8 @@ public class TestDockerClientConfigHandler {
 Credentials credentials =
 DockerClientConfigHandler.readCredentialsFromConfigFile(
 new Path(file.toURI()), conf, APPLICATION_ID);
-DockerClientConfigHandler.writeDockerCredentialsToPath(outFile,
-credentials);
+assertTrue(DockerClientConfigHandler.writeDockerCredentialsToPath(outFile,
+credentials));

[2/3] hadoop git commit: HDDS-81. Moving ContainerReport inside Datanode heartbeat. Contributed by Nanda Kumar.

2018-05-29 Thread aengineer
http://git-wip-us.apache.org/repos/asf/hadoop/blob/201440b9/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
--
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
index 2d88621..f5fe46a 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
@@ -20,6 +20,7 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import com.google.common.primitives.Longs;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.container.closer.ContainerCloser;
 import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
@@ -33,7 +34,7 @@ import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
 import org.apache.hadoop.hdds.protocol.proto
 .StorageContainerDatanodeProtocolProtos;
 import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.ContainerReportsRequestProto;
+.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.lease.Lease;
 import org.apache.hadoop.ozone.lease.LeaseException;
@@ -368,11 +369,12 @@ public class ContainerMapping implements Mapping {
* @param reports Container report
*/
   @Override
-  public void processContainerReports(ContainerReportsRequestProto reports)
+  public void processContainerReports(DatanodeDetails datanodeDetails,
+  ContainerReportsProto reports)
   throws IOException {
    List<StorageContainerDatanodeProtocolProtos.ContainerInfo>
        containerInfos = reports.getReportsList();
-containerSupervisor.handleContainerReport(reports);
+containerSupervisor.handleContainerReport(datanodeDetails, reports);
 for (StorageContainerDatanodeProtocolProtos.ContainerInfo datanodeState :
 containerInfos) {
   byte[] dbKey = Longs.toByteArray(datanodeState.getContainerID());
@@ -402,7 +404,7 @@ public class ContainerMapping implements Mapping {
   // Container not found in our container db.
   LOG.error("Error while processing container report from datanode :" +
      " {}, for container: {}, reason: container doesn't exist in" +
-  "container database.", reports.getDatanodeDetails(),
+  "container database.", datanodeDetails,
   datanodeState.getContainerID());
 }
   } finally {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/201440b9/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
--
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
index f560174..ee8e344 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
@@ -16,10 +16,11 @@
  */
 package org.apache.hadoop.hdds.scm.container;
 
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.ContainerReportsRequestProto;
+.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
 
 import java.io.Closeable;
 import java.io.IOException;
@@ -98,7 +99,8 @@ public interface Mapping extends Closeable {
*
* @param reports Container report
*/
-  void processContainerReports(ContainerReportsRequestProto reports)
+  void processContainerReports(DatanodeDetails datanodeDetails,
+   ContainerReportsProto reports)
   throws IOException;
 
 }
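The interface change above separates the reporting datanode's identity from the report payload; before HDDS-81 the proto carried its own `DatanodeDetails`, which callers read back out of the message. A self-contained sketch of the new calling convention, using stub record types rather than Hadoop's real classes (requires Java 16+ for records; all names here are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ReportRefactorSketch {
  // Stub stand-ins (NOT Hadoop's classes) for DatanodeDetails and
  // ContainerReportsProto.
  record DatanodeDetails(String uuid) {}
  record ContainerReports(List<Long> containerIds) {}

  interface Mapping {
    // After HDDS-81 the sender travels as a separate argument instead of
    // being read back out of the report proto itself.
    List<String> processContainerReports(DatanodeDetails datanode,
                                         ContainerReports reports);
  }

  public static void main(String[] args) {
    Mapping mapping = (dn, reports) -> reports.containerIds().stream()
        .map(id -> dn.uuid() + " reported container " + id)
        .collect(Collectors.toList());
    System.out.println(mapping.processContainerReports(
        new DatanodeDetails("dn-1"), new ContainerReports(List.of(7L, 9L))));
  }
}
```

Keeping the sender out of the payload lets the heartbeat carry the report while the RPC layer supplies the identity, which is the point of moving container reports inside the datanode heartbeat.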

http://git-wip-us.apache.org/repos/asf/hadoop/blob/201440b9/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/ContainerSupervisor.java
--
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/ContainerSupervisor.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/ContainerSupervisor.java
index c14303f..5bd0574 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/ContainerSupervisor.java

[1/3] hadoop git commit: HDDS-81. Moving ContainerReport inside Datanode heartbeat. Contributed by Nanda Kumar.

2018-05-29 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4827e9a90 -> 201440b98


http://git-wip-us.apache.org/repos/asf/hadoop/blob/201440b9/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationDatanodeStateManager.java
--
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationDatanodeStateManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationDatanodeStateManager.java
deleted file mode 100644
index 50fd18f..0000000
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationDatanodeStateManager.java
+++ /dev/null
@@ -1,101 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership.  The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.hadoop.ozone.container.testutils;
-
-import com.google.common.primitives.Longs;
-import org.apache.commons.codec.digest.DigestUtils;
-import org.apache.hadoop.hdds.scm.node.NodeManager;
-import org.apache.hadoop.hdds.scm.node.NodePoolManager;
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.ContainerInfo;
-import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.ContainerReportsRequestProto;
-
-import java.util.LinkedList;
-import java.util.List;
-import java.util.Random;
-
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState
-.HEALTHY;
-
-/**
- * This class  manages the state of datanode
- * in conjunction with the node pool and node managers.
- */
-public class ReplicationDatanodeStateManager {
-  private final NodeManager nodeManager;
-  private final NodePoolManager poolManager;
-  private final Random r;
-
-  /**
-   * The datanode state Manager.
-   *
-   * @param nodeManager
-   * @param poolManager
-   */
-  public ReplicationDatanodeStateManager(NodeManager nodeManager,
-  NodePoolManager poolManager) {
-this.nodeManager = nodeManager;
-this.poolManager = poolManager;
-r = new Random();
-  }
-
-  /**
-   * Get Container Report as if it is from a datanode in the cluster.
-   * @param containerID - Container ID.
-   * @param poolName - Pool Name.
-   * @param dataNodeCount - Datanode Count.
-   * @return List of Container Reports.
-   */
-  public List<ContainerReportsRequestProto> getContainerReport(
-  long containerID, String poolName, int dataNodeCount) {
-List<ContainerReportsRequestProto> containerList = new LinkedList<>();
-List<DatanodeDetails> nodesInPool = poolManager.getNodes(poolName);
-
-if (nodesInPool == null) {
-  return containerList;
-}
-
-if (nodesInPool.size() < dataNodeCount) {
-  throw new IllegalStateException("Not enough datanodes to create " +
-  "required container reports");
-}
-
-while (containerList.size() < dataNodeCount && nodesInPool.size() > 0) {
-  DatanodeDetails id = nodesInPool.get(r.nextInt(nodesInPool.size()));
-  nodesInPool.remove(id);
-  containerID++;
-  // We return container reports only for nodes that are healthy.
-  if (nodeManager.getNodeState(id) == HEALTHY) {
-ContainerInfo info = ContainerInfo.newBuilder()
-.setContainerID(containerID)
-.setFinalhash(DigestUtils.sha256Hex(
-Longs.toByteArray(containerID)))
-.setContainerID(containerID)
-.build();
-ContainerReportsRequestProto containerReport =
-ContainerReportsRequestProto.newBuilder().addReports(info)
-.setDatanodeDetails(id.getProtoBufMessage())
-.setType(ContainerReportsRequestProto.reportType.fullReport)
-.build();
-containerList.add(containerReport);
-  }
-}
-return containerList;
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/201440b9/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
--
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java

[3/3] hadoop git commit: HDDS-81. Moving ContainerReport inside Datanode heartbeat. Contributed by Nanda Kumar.

2018-05-29 Thread aengineer
HDDS-81. Moving ContainerReport inside Datanode heartbeat.
Contributed by Nanda Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/201440b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/201440b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/201440b9

Branch: refs/heads/trunk
Commit: 201440b987d5ef3910c2045b2411c213ed6eec1f
Parents: 4827e9a
Author: Anu Engineer 
Authored: Tue May 29 12:40:27 2018 -0700
Committer: Anu Engineer 
Committed: Tue May 29 12:48:50 2018 -0700

--
 .../common/impl/ContainerManagerImpl.java   |  22 +-
 .../common/impl/StorageLocationReport.java  |   8 +-
 .../common/interfaces/ContainerManager.java |   8 +-
 .../statemachine/DatanodeStateMachine.java  |   7 +-
 .../common/statemachine/StateContext.java   |  16 +-
 .../CloseContainerCommandHandler.java   | 113 
 .../commandhandler/CloseContainerHandler.java   | 113 
 .../commandhandler/CommandDispatcher.java   |   5 +-
 .../commandhandler/CommandHandler.java  |   8 +-
 .../DeleteBlocksCommandHandler.java |  12 +-
 .../states/endpoint/HeartbeatEndpointTask.java  |  30 +-
 .../states/endpoint/RegisterEndpointTask.java   |  12 +-
 .../container/ozoneimpl/OzoneContainer.java |  10 +-
 .../StorageContainerDatanodeProtocol.java   |  30 +-
 .../protocol/StorageContainerNodeProtocol.java  |  15 +-
 .../commands/CloseContainerCommand.java |  18 +-
 .../protocol/commands/DeleteBlocksCommand.java  |  18 +-
 .../protocol/commands/RegisteredCommand.java|  26 +-
 .../protocol/commands/ReregisterCommand.java|  16 +-
 .../ozone/protocol/commands/SCMCommand.java |   4 +-
 ...rDatanodeProtocolClientSideTranslatorPB.java |  50 +---
 ...rDatanodeProtocolServerSideTranslatorPB.java |  53 ++--
 .../StorageContainerDatanodeProtocol.proto  | 256 -
 .../ozone/container/common/ScmTestMock.java |  78 ++
 .../hdds/scm/container/ContainerMapping.java|  10 +-
 .../hadoop/hdds/scm/container/Mapping.java  |   6 +-
 .../replication/ContainerSupervisor.java|  13 +-
 .../container/replication/InProgressPool.java   |  15 +-
 .../hdds/scm/node/HeartbeatQueueItem.java   |  14 +-
 .../hadoop/hdds/scm/node/SCMNodeManager.java|  58 ++--
 .../hdds/scm/node/SCMNodeStorageStatMap.java|  14 +-
 .../scm/server/SCMDatanodeProtocolServer.java   | 195 +++--
 .../org/apache/hadoop/hdds/scm/TestUtils.java   |  19 +-
 .../hdds/scm/container/MockNodeManager.java |  26 +-
 .../scm/container/TestContainerMapping.java |  24 +-
 .../container/closer/TestContainerCloser.java   |  12 +-
 .../hdds/scm/node/TestContainerPlacement.java   |   6 +-
 .../hadoop/hdds/scm/node/TestNodeManager.java   |  83 +++---
 .../scm/node/TestSCMNodeStorageStatMap.java |  16 +-
 .../ozone/container/common/TestEndPoint.java| 113 ++--
 .../replication/TestContainerSupervisor.java| 275 ---
 .../ReplicationDatanodeStateManager.java| 101 ---
 .../testutils/ReplicationNodeManagerMock.java   |  14 +-
 .../ozone/TestStorageContainerManager.java  |  11 +-
 .../apache/hadoop/ozone/scm/TestSCMMetrics.java |  68 ++---
 45 files changed, 706 insertions(+), 1315 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/201440b9/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java
--
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java
index 9355364..af47015 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java
@@ -35,11 +35,11 @@ import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
 import org.apache.hadoop.hdds.protocol.proto
 .StorageContainerDatanodeProtocolProtos;
 import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.ContainerReportsRequestProto;
+.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
 import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.SCMNodeReport;
+.StorageContainerDatanodeProtocolProtos.NodeReportProto;
 import org.apache.hadoop.hdds.protocol.proto
-.StorageContainerDatanodeProtocolProtos.SCMStorageReport;
+.StorageContainerDatanodeProtocolProtos.StorageReportProto;
 import org.apache.hadoop.io.IOUtils;
 import

hadoop git commit: Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

2018-05-29 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 5b57f9cae -> eaa2b8035


Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eaa2b803
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eaa2b803
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eaa2b803

Branch: refs/heads/branch-2.7
Commit: eaa2b8035b584dfcf7c79a33484eb2dffd3fdb11
Parents: 5b57f9c
Author: Kihwal Lee 
Authored: Tue May 29 14:47:55 2018 -0500
Committer: Kihwal Lee 
Committed: Tue May 29 14:48:46 2018 -0500

--
 .../java/org/apache/hadoop/util/RunJar.java |  5 +++
 .../java/org/apache/hadoop/util/TestRunJar.java | 39 +++-
 2 files changed, 43 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eaa2b803/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index 4b26b76..a3b5b0b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -93,6 +93,7 @@ public class RunJar {
 throws IOException {
 JarFile jar = new JarFile(jarFile);
 try {
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
      Enumeration<JarEntry> entries = jar.entries();
   while (entries.hasMoreElements()) {
 final JarEntry entry = entries.nextElement();
@@ -102,6 +103,10 @@ public class RunJar {
   try {
 File file = new File(toDir, entry.getName());
 ensureDirectory(file.getParentFile());
+if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+  throw new IOException("expanding " + entry.getName()
+  + " would create file outside of " + toDir);
+}
 OutputStream out = new FileOutputStream(file);
 try {
   IOUtils.copyBytes(in, out, 8192);
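The canonical-path check added in this hunk is the standard defense against "zip slip" path traversal: an archive entry such as `../outside.path` must not be allowed to resolve outside the extraction directory. A standalone sketch of just that containment test (class and method names are illustrative, not Hadoop's API):

```java
import java.io.File;
import java.io.IOException;

public class ZipSlipCheck {
  /** True iff entryName, resolved under targetDir, stays inside it. */
  static boolean staysInside(File targetDir, String entryName)
      throws IOException {
    // Same idea as the patch: compare canonical paths, with a trailing
    // separator so a sibling like "unjar-path-evil" cannot pass for
    // "unjar-path" via a plain prefix match.
    String targetDirPath = targetDir.getCanonicalPath() + File.separator;
    return new File(targetDir, entryName).getCanonicalPath()
        .startsWith(targetDirPath);
  }

  public static void main(String[] args) throws IOException {
    File dir = new File("unjar-path");
    System.out.println(staysInside(dir, "META-INF/MANIFEST.MF")); // true
    System.out.println(staysInside(dir, "../outside.path"));      // false
  }
}
```

`getCanonicalPath()` resolves `..` segments (and symlinks in existing path prefixes), which is why the patch performs the check before opening the output stream and throws an `IOException` naming the offending entry.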

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eaa2b803/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
index f592d04..b2a6537 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.util;
 
+import static org.junit.Assert.fail;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;
 
@@ -25,6 +26,8 @@ import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.jar.JarEntry;
 import java.util.jar.JarOutputStream;
 import java.util.regex.Pattern;
 import java.util.zip.ZipEntry;
@@ -32,6 +35,7 @@ import java.util.zip.ZipEntry;
 import junit.framework.TestCase;
 
 import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -169,4 +173,37 @@ public class TestRunJar extends TestCase {
 
 return jarFile;
   }
-}
\ No newline at end of file
+
+  @Test
+  public void testUnJar2() throws IOException {
+// make a simple zip
+File jarFile = new File(TEST_ROOT_DIR, TEST_JAR_NAME);
+JarOutputStream jstream =
+new JarOutputStream(new FileOutputStream(jarFile));
+JarEntry je = new JarEntry("META-INF/MANIFEST.MF");
+byte[] data = "Manifest-Version: 1.0\nCreated-By: 1.8.0_1 (Manual)"
+.getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+je = new JarEntry("../outside.path");
+data = "any data here".getBytes(StandardCharsets.UTF_8);
+je.setSize(data.length);
+jstream.putNextEntry(je);
+jstream.write(data);
+jstream.closeEntry();
+jstream.close();
+
+File unjarDir = new File(TEST_ROOT_DIR, "unjar-path");
+
+// Unjar everything
+try {
+  RunJar.unJar(jarFile, unjarDir);
+  fail("unJar should throw IOException.");
+} catch (IOException e) {
+  GenericTestUtils.assertExceptionContains(
+  "would create file outside of", e);

hadoop git commit: YARN-8329. Docker client configuration can still be set incorrectly. Contributed by Shane Kumpf

2018-05-29 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 3eb1cb18c -> a1fd04c4f


YARN-8329. Docker client configuration can still be set incorrectly. Contributed by Shane Kumpf

(cherry picked from commit 4827e9a9085b306bc379cb6e0b1fe4b92326edcd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a1fd04c4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a1fd04c4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a1fd04c4

Branch: refs/heads/branch-3.1
Commit: a1fd04c4f472fdc0835f491a719220684ee1f255
Parents: 3eb1cb1
Author: Jason Lowe 
Authored: Tue May 29 14:43:17 2018 -0500
Committer: Jason Lowe 
Committed: Tue May 29 14:48:01 2018 -0500

--
 .../yarn/util/DockerClientConfigHandler.java| 23 +++-
 .../security/TestDockerClientConfigHandler.java |  4 ++--
 .../runtime/DockerLinuxContainerRuntime.java|  7 +++---
 3 files changed, 19 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1fd04c4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java
index 5522cf4..8ec4deb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java
@@ -154,14 +154,15 @@ public final class DockerClientConfigHandler {
* @param outConfigFile the File to write the Docker client configuration to.
* @param credentials the populated Credentials object.
* @throws IOException if the write fails.
+   * @return true if a Docker credential is found in the supplied credentials.
*/
-  public static void writeDockerCredentialsToPath(File outConfigFile,
+  public static boolean writeDockerCredentialsToPath(File outConfigFile,
   Credentials credentials) throws IOException {
-ObjectMapper mapper = new ObjectMapper();
-ObjectNode rootNode = mapper.createObjectNode();
-ObjectNode registryUrlNode = mapper.createObjectNode();
 boolean foundDockerCred = false;
 if (credentials.numberOfTokens() > 0) {
+  ObjectMapper mapper = new ObjectMapper();
+  ObjectNode rootNode = mapper.createObjectNode();
+  ObjectNode registryUrlNode = mapper.createObjectNode();
      for (Token<? extends TokenIdentifier> tk : credentials.getAllTokens()) {
 if (tk.getKind().equals(DockerCredentialTokenIdentifier.KIND)) {
   foundDockerCred = true;
@@ -176,12 +177,14 @@ public final class DockerClientConfigHandler {
   }
 }
   }
+  if (foundDockerCred) {
+rootNode.put(CONFIG_AUTHS_KEY, registryUrlNode);
+String json = mapper.writerWithDefaultPrettyPrinter()
+.writeValueAsString(rootNode);
+FileUtils.writeStringToFile(
+outConfigFile, json, StandardCharsets.UTF_8);
+  }
 }
-if (foundDockerCred) {
-  rootNode.put(CONFIG_AUTHS_KEY, registryUrlNode);
-  String json =
-  mapper.writerWithDefaultPrettyPrinter().writeValueAsString(rootNode);
-  FileUtils.writeStringToFile(outConfigFile, json, StandardCharsets.UTF_8);
-}
+return foundDockerCred;
   }
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1fd04c4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java
index c4cbe45..cfe5a45 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java
@@ -116,8 +116,8 @@ public class TestDockerClientConfigHandler {
 Credentials credentials =
 DockerClientConfigHandler.readCredentialsFromConfigFile(
 new Path(file.toURI()), conf, APPLICATION_ID);
-DockerClientConfigHandler.writeDockerCredentialsToPath(outFile,
-credentials);
+assertTrue(DockerClientConfigHandler.writeDockerCredentialsToPath(outFile,
+credentials));

hadoop git commit: Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

2018-05-29 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 a1fd04c4f -> 11a425d11


Additional check when unpacking archives. Contributed by Wilfred Spiegelenburg.

(cherry picked from commit e3236a9680709de7a95ffbc11b20e1bdc95a8605)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/11a425d1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/11a425d1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/11a425d1

Branch: refs/heads/branch-3.1
Commit: 11a425d11a329010d0ff8255ecbcd1eb51b642e3
Parents: a1fd04c
Author: Kihwal Lee 
Authored: Tue May 29 15:02:33 2018 -0500
Committer: Kihwal Lee 
Committed: Tue May 29 15:02:33 2018 -0500

--
 .../java/org/apache/hadoop/util/RunJar.java | 10 +
 .../java/org/apache/hadoop/util/TestRunJar.java | 46 +++-
 2 files changed, 55 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/11a425d1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index 9dd770c..239d464 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -113,12 +113,17 @@ public class RunJar {
   throws IOException {
 try (JarInputStream jar = new JarInputStream(inputStream)) {
   int numOfFailedLastModifiedSet = 0;
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
   for (JarEntry entry = jar.getNextJarEntry();
entry != null;
entry = jar.getNextJarEntry()) {
 if (!entry.isDirectory() &&
 unpackRegex.matcher(entry.getName()).matches()) {
   File file = new File(toDir, entry.getName());
+  if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+throw new IOException("expanding " + entry.getName()
++ " would create file outside of " + toDir);
+  }
   ensureDirectory(file.getParentFile());
   try (OutputStream out = new FileOutputStream(file)) {
 IOUtils.copyBytes(jar, out, BUFFER_SIZE);
@@ -178,6 +183,7 @@ public class RunJar {
   throws IOException {
 try (JarFile jar = new JarFile(jarFile)) {
   int numOfFailedLastModifiedSet = 0;
+  String targetDirPath = toDir.getCanonicalPath() + File.separator;
      Enumeration<JarEntry> entries = jar.entries();
   while (entries.hasMoreElements()) {
 final JarEntry entry = entries.nextElement();
@@ -185,6 +191,10 @@ public class RunJar {
 unpackRegex.matcher(entry.getName()).matches()) {
   try (InputStream in = jar.getInputStream(entry)) {
 File file = new File(toDir, entry.getName());
+if (!file.getCanonicalPath().startsWith(targetDirPath)) {
+  throw new IOException("expanding " + entry.getName()
+  + " would create file outside of " + toDir);
+}
 ensureDirectory(file.getParentFile());
 try (OutputStream out = new FileOutputStream(file)) {
   IOUtils.copyBytes(in, out, BUFFER_SIZE);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/11a425d1/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
index 19485d6..237751c 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java
@@ -17,9 +17,12 @@
  */
 package org.apache.hadoop.util;
 
+import static org.apache.hadoop.util.RunJar.MATCH_ANY;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Matchers.any;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;
 
@@ -28,6 +31,7 @@ import java.io.FileInputStream;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
 import java.util.Random;
 import java.util.jar.JarEntry;
 import java.util.jar.JarOutputStream;
@@ -222,4 +226,44 @@ public class TestRunJar {
 runJar.run(args);
 // it sho

hadoop git commit: HADOOP-14946 S3Guard testPruneCommandCLI can fail. Contributed by Gabor Bota.

2018-05-29 Thread fabbri
Repository: hadoop
Updated Branches:
  refs/heads/trunk 201440b98 -> 30284d020


HADOOP-14946 S3Guard testPruneCommandCLI can fail. Contributed by Gabor Bota.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/30284d02
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/30284d02
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/30284d02

Branch: refs/heads/trunk
Commit: 30284d020d36c502dad5bdbae61ec48e9dfe9f8c
Parents: 201440b
Author: Aaron Fabbri 
Authored: Tue May 29 13:38:15 2018 -0700
Committer: Aaron Fabbri 
Committed: Tue May 29 13:38:15 2018 -0700

--
 .../s3guard/AbstractS3GuardToolTestBase.java| 52 +---
 1 file changed, 44 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/30284d02/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
index 4381749..2b43810 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
@@ -31,6 +31,7 @@ import java.util.Set;
 import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
 
+import org.apache.hadoop.util.StopWatch;
 import org.junit.Assume;
 import org.junit.Test;
 
@@ -61,6 +62,8 @@ public abstract class AbstractS3GuardToolTestBase extends 
AbstractS3ATestBase {
   protected static final String S3A_THIS_BUCKET_DOES_NOT_EXIST
   = "s3a://this-bucket-does-not-exist-000";
 
+  private static final int PRUNE_MAX_AGE_SECS = 2;
+
   private MetadataStore ms;
 
   protected static void expectResult(int expected,
@@ -186,24 +189,57 @@ public abstract class AbstractS3GuardToolTestBase extends 
AbstractS3ATestBase {
 }
   }
 
+  /**
+   * Attempt to test prune() with sleep() without having flaky tests
+   * when things run slowly. Test is basically:
+   * 1. Set max path age to X seconds
+   * 2. Create some files (which writes entries to MetadataStore)
+   * 3. Sleep X+2 seconds (all files from above are now "stale")
+   * 4. Create some other files (these are "fresh").
+   * 5. Run prune on MetadataStore.
+   * 6. Assert that only files that were created before the sleep() were 
pruned.
+   *
+   * Problem is: #6 can fail if X seconds elapse between steps 4 and 5, since
+   * the newer files also become stale and get pruned.  This is easy to
+   * reproduce by running all integration tests in parallel with many
+   * threads, or anything else that slows down execution significantly.
+   *
+   * Solution: Keep track of time elapsed between #4 and #5, and if it
+   * exceeds X, just print a warn() message instead of failing.
+   *
+   * @param cmdConf configuration for command
+   * @param parent path
+   * @param args command args
+   * @throws Exception
+   */
   private void testPruneCommand(Configuration cmdConf, Path parent,
   String...args) throws Exception {
 Path keepParent = path("prune-cli-keep");
+StopWatch timer = new StopWatch();
 try {
-  getFileSystem().mkdirs(parent);
-  getFileSystem().mkdirs(keepParent);
-
   S3GuardTool.Prune cmd = new S3GuardTool.Prune(cmdConf);
   cmd.setMetadataStore(ms);
 
+  getFileSystem().mkdirs(parent);
+  getFileSystem().mkdirs(keepParent);
   createFile(new Path(parent, "stale"), true, true);
   createFile(new Path(keepParent, "stale-to-keep"), true, true);
-  Thread.sleep(TimeUnit.SECONDS.toMillis(2));
+
+  Thread.sleep(TimeUnit.SECONDS.toMillis(PRUNE_MAX_AGE_SECS + 2));
+
+  timer.start();
   createFile(new Path(parent, "fresh"), true, true);
 
   assertMetastoreListingCount(parent, "Children count before pruning", 2);
   exec(cmd, args);
-  assertMetastoreListingCount(parent, "Pruned children count", 1);
+  long msecElapsed = timer.now(TimeUnit.MILLISECONDS);
+  if (msecElapsed >= PRUNE_MAX_AGE_SECS * 1000) {
+LOG.warn("Skipping an assertion: Test running too slowly ({} msec)",
+msecElapsed);
+  } else {
+assertMetastoreListingCount(parent, "Pruned children count remaining",
+1);
+  }
   assertMetastoreListingCount(keepParent,
   "This child should have been kept (prefix restriction).", 1);
 } finally {
@@ -224,13 +260,14 @@ public abstract class AbstractS3GuardToolTestBase extends 
AbstractS3ATestBase {
   public void testPruneCommandCLI() throws Exception {
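The timing guard described in the javadoc above can be sketched outside the S3Guard test harness. The class and method names below are illustrative (not from Hadoop), and a plain `System.nanoTime()` stands in for Hadoop's `StopWatch`:

```java
import java.util.concurrent.TimeUnit;

public class TimingGuardSketch {
    static final int PRUNE_MAX_AGE_SECS = 2;

    /**
     * Returns true when the timed section finished fast enough for the
     * pruning assertion to be meaningful; otherwise warn and skip it,
     * mirroring step 6 of the test plan above.
     */
    static boolean freshFilesStillFresh(long startNanos, long nowNanos) {
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(nowNanos - startNanos);
        if (elapsedMs >= PRUNE_MAX_AGE_SECS * 1000L) {
            System.err.println("Skipping assertion: test ran too slowly ("
                + elapsedMs + " msec)");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // ... create the "fresh" files and run the prune command here ...
        if (freshFilesStillFresh(start, System.nanoTime())) {
            System.out.println("assertion ran");
        }
    }
}
```

The point of the pattern is that a slow run degrades to a warning rather than a spurious failure, while a normal run still asserts.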
 

hadoop git commit: HDDS-114. Ozone Datanode mbean registration fails for StorageLocation. Contributed by Elek, Marton.

2018-05-29 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/trunk 30284d020 -> 24169062e


HDDS-114. Ozone Datanode mbean registration fails for StorageLocation.
Contributed by Elek, Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/24169062
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/24169062
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/24169062

Branch: refs/heads/trunk
Commit: 24169062e5f4e7798a47c5e6e3e94504cba73092
Parents: 30284d0
Author: Anu Engineer 
Authored: Tue May 29 13:23:58 2018 -0700
Committer: Anu Engineer 
Committed: Tue May 29 13:48:55 2018 -0700

--
 .../common/impl/StorageLocationReport.java  | 52 +++-
 .../ContainerLocationManagerMXBean.java |  4 +-
 .../interfaces/StorageLocationReportMXBean.java | 40 +++
 3 files changed, 71 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/24169062/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/StorageLocationReport.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/StorageLocationReport.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/StorageLocationReport.java
index 87b9656..061d09b 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/StorageLocationReport.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/StorageLocationReport.java
@@ -23,6 +23,8 @@ import org.apache.hadoop.hdds.protocol.proto.
 StorageContainerDatanodeProtocolProtos.StorageReportProto;
 import org.apache.hadoop.hdds.protocol.proto.
 StorageContainerDatanodeProtocolProtos.StorageTypeProto;
+import org.apache.hadoop.ozone.container.common.interfaces
+.StorageLocationReportMXBean;
 
 import java.io.IOException;
 
@@ -30,7 +32,8 @@ import java.io.IOException;
  * Storage location stats of datanodes that provide back store for containers.
  *
  */
-public class StorageLocationReport {
+public final class StorageLocationReport implements
+StorageLocationReportMXBean {
 
   private final String id;
   private final boolean failed;
@@ -76,6 +79,11 @@ public class StorageLocationReport {
 return storageLocation;
   }
 
+  @Override
+  public String getStorageTypeName() {
+return storageType.name();
+  }
+
   public StorageType getStorageType() {
 return storageType;
   }
@@ -204,76 +212,76 @@ public class StorageLocationReport {
 /**
  * Sets the storageId.
  *
- * @param id storageId
+ * @param idValue storageId
  * @return StorageLocationReport.Builder
  */
-public Builder setId(String id) {
-  this.id = id;
+public Builder setId(String idValue) {
+  this.id = idValue;
   return this;
 }
 
 /**
  * Sets whether the volume failed or not.
  *
- * @param failed whether volume failed or not
+ * @param failedValue whether volume failed or not
  * @return StorageLocationReport.Builder
  */
-public Builder setFailed(boolean failed) {
-  this.failed = failed;
+public Builder setFailed(boolean failedValue) {
+  this.failed = failedValue;
   return this;
 }
 
 /**
  * Sets the capacity of volume.
  *
- * @param capacity capacity
+ * @param capacityValue capacity
  * @return StorageLocationReport.Builder
  */
-public Builder setCapacity(long capacity) {
-  this.capacity = capacity;
+public Builder setCapacity(long capacityValue) {
+  this.capacity = capacityValue;
   return this;
 }
 /**
  * Sets the scmUsed Value.
  *
- * @param scmUsed storage space used by scm
+ * @param scmUsedValue storage space used by scm
  * @return StorageLocationReport.Builder
  */
-public Builder setScmUsed(long scmUsed) {
-  this.scmUsed = scmUsed;
+public Builder setScmUsed(long scmUsedValue) {
+  this.scmUsed = scmUsedValue;
   return this;
 }
 
 /**
  * Sets the remaining free space value.
  *
- * @param remaining remaining free space
+ * @param remainingValue remaining free space
  * @return StorageLocationReport.Builder
  */
-public Builder setRemaining(long remaining) {
-  this.remaining = remaining;
+public Builder setRemaining(long remainingValue) {
+  this.remaining = remainingValue;
   return this;
 }
 
 /**
  * Sets the storageType.
  *
- * @param storageType type of the storage used
+ * @param storageTypeValue type of the storage used
  * @return StorageLocationReport.Builder
   
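The HDDS-114 diff above fixes MBean registration by exposing the storage type as a `String` (`getStorageTypeName()`) instead of a custom class, since JMX MXBeans may only publish open types. The pattern can be sketched generically; the `ReportMXBean`/`Report` names below are illustrative stand-ins, not the actual Ozone classes:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MxBeanSketch {
    // An interface whose name ends in "MXBean" is treated as an MXBean
    // interface; every getter here returns an open type (String, boolean,
    // long), so registration cannot fail the way it did for StorageLocation.
    public interface ReportMXBean {
        String getId();
        boolean isFailed();
        long getCapacity();
        String getStorageTypeName();  // String, not a StorageType object
    }

    public static class Report implements ReportMXBean {
        public String getId() { return "disk-1"; }
        public boolean isFailed() { return false; }
        public long getCapacity() { return 1024L; }
        public String getStorageTypeName() { return "DISK"; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("sketch:type=Report");
        server.registerMBean(new Report(), name);
        // attribute name is derived from the getter: getStorageTypeName
        System.out.println(server.getAttribute(name, "StorageTypeName"));
    }
}
```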

hadoop git commit: YARN-8362. Bugfix logic in container retries in node manager. Contributed by Chandni Singh

2018-05-29 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 24169062e -> 135941e00


YARN-8362.  Bugfix logic in container retries in node manager.
Contributed by Chandni Singh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/135941e0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/135941e0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/135941e0

Branch: refs/heads/trunk
Commit: 135941e00d762a417c3b4cc524cdc59b0d1810b1
Parents: 2416906
Author: Eric Yang 
Authored: Tue May 29 16:56:58 2018 -0400
Committer: Eric Yang 
Committed: Tue May 29 16:56:58 2018 -0400

--
 .../container/ContainerImpl.java|  4 +-
 .../container/SlidingWindowRetryPolicy.java | 62 +++-
 .../container/TestSlidingWindowRetryPolicy.java |  6 ++
 3 files changed, 44 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/135941e0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
index c09c7f1..5527ac4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
@@ -1602,8 +1602,10 @@ public class ContainerImpl implements Container {
 }
 container.addDiagnostics(exitEvent.getDiagnosticInfo() + "\n");
   }
-
   if (container.shouldRetry(container.exitCode)) {
+// Updates to the retry context should be protected from concurrent
+// writes. updateRetryContext should only be called from this transition.
+container.retryPolicy.updateRetryContext(container.windowRetryContext);
 container.storeRetryContext();
 doRelaunch(container,
 container.windowRetryContext.getRemainingRetries(),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/135941e0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
index 0208879..36a8b91 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
@@ -42,49 +42,40 @@ public class SlidingWindowRetryPolicy {
 
   public boolean shouldRetry(RetryContext retryContext,
   int errorCode) {
-ContainerRetryContext containerRC = retryContext
-.containerRetryContext;
+ContainerRetryContext containerRC = retryContext.containerRetryContext;
 Preconditions.checkNotNull(containerRC, "container retry context null");
 ContainerRetryPolicy retryPolicy = containerRC.getRetryPolicy();
 if (retryPolicy == ContainerRetryPolicy.RETRY_ON_ALL_ERRORS
 || (retryPolicy == ContainerRetryPolicy.RETRY_ON_SPECIFIC_ERROR_CODES
 && containerRC.getErrorCodes() != null
 && containerRC.getErrorCodes().contains(errorCode))) {
-  if (containerRC.getMaxRetries() == ContainerRetryContext.RETRY_FOREVER) {
-return true;
-  }
-  int pendingRetries = calculatePendingRetries(retryContext);
-  updateRetryContext(retryContext, pendingRetries);
-  return pendingRetries > 0;
+  return containerRC.getMaxRetries() == ContainerRetryContext.RETRY_FOREVER
+  || calc

hadoop git commit: YARN-8362. Bugfix logic in container retries in node manager. Contributed by Chandni Singh

2018-05-29 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 11a425d11 -> 03209e896


YARN-8362.  Bugfix logic in container retries in node manager.
Contributed by Chandni Singh

(cherry picked from commit 135941e00d762a417c3b4cc524cdc59b0d1810b1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03209e89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03209e89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03209e89

Branch: refs/heads/branch-3.1
Commit: 03209e8966a80500f700a2f3a3c12e5975dc127a
Parents: 11a425d
Author: Eric Yang 
Authored: Tue May 29 16:56:58 2018 -0400
Committer: Eric Yang 
Committed: Tue May 29 17:04:01 2018 -0400

--
 .../container/ContainerImpl.java|  4 +-
 .../container/SlidingWindowRetryPolicy.java | 62 +++-
 .../container/TestSlidingWindowRetryPolicy.java |  6 ++
 3 files changed, 44 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03209e89/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
index c09c7f1..5527ac4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
@@ -1602,8 +1602,10 @@ public class ContainerImpl implements Container {
 }
 container.addDiagnostics(exitEvent.getDiagnosticInfo() + "\n");
   }
-
   if (container.shouldRetry(container.exitCode)) {
+// Updates to the retry context should be protected from concurrent
+// writes. updateRetryContext should only be called from this transition.
+container.retryPolicy.updateRetryContext(container.windowRetryContext);
 container.storeRetryContext();
 doRelaunch(container,
 container.windowRetryContext.getRemainingRetries(),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/03209e89/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
index 0208879..36a8b91 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java
@@ -42,49 +42,40 @@ public class SlidingWindowRetryPolicy {
 
   public boolean shouldRetry(RetryContext retryContext,
   int errorCode) {
-ContainerRetryContext containerRC = retryContext
-.containerRetryContext;
+ContainerRetryContext containerRC = retryContext.containerRetryContext;
 Preconditions.checkNotNull(containerRC, "container retry context null");
 ContainerRetryPolicy retryPolicy = containerRC.getRetryPolicy();
 if (retryPolicy == ContainerRetryPolicy.RETRY_ON_ALL_ERRORS
 || (retryPolicy == ContainerRetryPolicy.RETRY_ON_SPECIFIC_ERROR_CODES
 && containerRC.getErrorCodes() != null
 && containerRC.getErrorCodes().contains(errorCode))) {
-  if (containerRC.getMaxRetries() == ContainerRetryContext.RETRY_FOREVER) {
-return true;
-  }
-  int pendingRetries = calculatePendingRetries(retryContext);
-  updateRetryContext(retryContext, pendingRetries);
-  return pendingRetries > 0;
+  return conta

hadoop git commit: HADOOP-15480 AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo (Gabor Bota)

2018-05-29 Thread fabbri
Repository: hadoop
Updated Branches:
  refs/heads/trunk 135941e00 -> 5f6769f79


HADOOP-15480 AbstractS3GuardToolTestBase.testDiffCommand fails when using 
dynamo (Gabor Bota)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5f6769f7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5f6769f7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5f6769f7

Branch: refs/heads/trunk
Commit: 5f6769f7964ff002b6c04a95893b5baeb424b6db
Parents: 135941e
Author: Aaron Fabbri 
Authored: Tue May 29 19:20:22 2018 -0700
Committer: Aaron Fabbri 
Committed: Tue May 29 19:20:22 2018 -0700

--
 .../s3guard/AbstractS3GuardToolTestBase.java| 37 +---
 .../s3a/s3guard/ITestS3GuardToolDynamoDB.java   |  5 ---
 .../fs/s3a/s3guard/ITestS3GuardToolLocal.java   |  5 ---
 3 files changed, 25 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5f6769f7/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
index 2b43810..7d75f52 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
@@ -25,6 +25,7 @@ import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.io.InputStreamReader;
 import java.io.PrintStream;
+import java.net.URI;
 import java.util.Collection;
 import java.util.HashSet;
 import java.util.Set;
@@ -32,6 +33,8 @@ import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.util.StopWatch;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.FileSystem;
 import org.junit.Assume;
 import org.junit.Test;
 
@@ -48,6 +51,8 @@ import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.util.ExitUtil;
 import org.apache.hadoop.util.StringUtils;
 
+import static org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_NULL;
+import static org.apache.hadoop.fs.s3a.Constants.S3_METADATA_STORE_IMPL;
 import static org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.E_BAD_STATE;
 import static org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.SUCCESS;
 import static org.apache.hadoop.test.LambdaTestUtils.intercept;
@@ -65,6 +70,7 @@ public abstract class AbstractS3GuardToolTestBase extends 
AbstractS3ATestBase {
   private static final int PRUNE_MAX_AGE_SECS = 2;
 
   private MetadataStore ms;
+  private S3AFileSystem rawFs;
 
   protected static void expectResult(int expected,
   String message,
@@ -129,28 +135,34 @@ public abstract class AbstractS3GuardToolTestBase extends 
AbstractS3ATestBase {
 return ms;
   }
 
-  protected abstract MetadataStore newMetadataStore();
-
   @Override
   public void setup() throws Exception {
 super.setup();
 S3ATestUtils.assumeS3GuardState(true, getConfiguration());
-ms = newMetadataStore();
-ms.initialize(getFileSystem());
+ms = getFileSystem().getMetadataStore();
+
+// Also create a "raw" fs without any MetadataStore configured
+Configuration conf = new Configuration(getConfiguration());
+conf.set(S3_METADATA_STORE_IMPL, S3GUARD_METASTORE_NULL);
+URI fsUri = getFileSystem().getUri();
+rawFs = (S3AFileSystem) FileSystem.newInstance(fsUri, conf);
   }
 
   @Override
   public void teardown() throws Exception {
 super.teardown();
 IOUtils.cleanupWithLogger(LOG, ms);
+IOUtils.closeStream(rawFs);
   }
 
   protected void mkdirs(Path path, boolean onS3, boolean onMetadataStore)
   throws IOException {
+Preconditions.checkArgument(onS3 || onMetadataStore);
+// getFileSystem() returns an fs with MetadataStore configured
+S3AFileSystem fs = onMetadataStore ? getFileSystem() : rawFs;
 if (onS3) {
-  getFileSystem().mkdirs(path);
-}
-if (onMetadataStore) {
+  fs.mkdirs(path);
+} else if (onMetadataStore) {
   S3AFileStatus status = new S3AFileStatus(true, path, OWNER);
   ms.put(new PathMetadata(status));
 }
@@ -178,13 +190,14 @@ public abstract class AbstractS3GuardToolTestBase extends 
AbstractS3ATestBase {
*/
   protected void createFile(Path path, boolean onS3, boolean onMetadataStore)
   throws IOException {
+Preconditions.checkArgument(onS3 || onMetadataStore);
+// getFileSystem() returns an fs with MetadataStore configured
+S3AFileSystem fs = onMetadataStore ? getFileSystem() : rawFs;
 if (onS3) {
-  ContractTes
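The HADOOP-15480 diff above dispatches each `mkdirs`/`createFile` call to either the S3Guard-backed filesystem, the raw one, or the MetadataStore directly, with a precondition that at least one target is chosen. That dispatch logic can be sketched in isolation; `FileSystemLike` and the returned strings are illustrative stand-ins for the real S3A filesystems and their side effects:

```java
public class GuardedDispatchSketch {
    interface FileSystemLike {
        String mkdirs(String path);
    }

    static String mkdirs(FileSystemLike guarded, FileSystemLike raw,
                         String path, boolean onS3, boolean onMetadataStore) {
        if (!onS3 && !onMetadataStore) {
            throw new IllegalArgumentException(
                "must target S3, the MetadataStore, or both");
        }
        // the guarded fs writes through to the MetadataStore as a side effect
        FileSystemLike fs = onMetadataStore ? guarded : raw;
        if (onS3) {
            return fs.mkdirs(path);
        }
        // metadata-only: write the entry straight into the store
        return "store:" + path;
    }

    public static void main(String[] args) {
        FileSystemLike guarded = p -> "guarded:" + p;
        FileSystemLike raw = p -> "raw:" + p;
        System.out.println(mkdirs(guarded, raw, "/a", true, true));   // guarded:/a
        System.out.println(mkdirs(guarded, raw, "/b", true, false));  // raw:/b
        System.out.println(mkdirs(guarded, raw, "/c", false, true));  // store:/c
    }
}
```

Routing everything through one chooser is what lets the diff test (and the prune test) create "S3-only" or "store-only" entries deterministically.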

[hadoop] Git Push Summary

2018-05-29 Thread yjzhangal
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0.3 [created] 65e55097d

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: Preparing for 3.0.4 development

2018-05-29 Thread yjzhangal
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 65e55097d -> f0de11ba9


Preparing for 3.0.4 development


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f0de11ba
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f0de11ba
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f0de11ba

Branch: refs/heads/branch-3.0
Commit: f0de11ba988e0c511974e74c9cb40ca44ac95023
Parents: 65e5509
Author: Yongjun Zhang 
Authored: Tue May 29 23:40:26 2018 -0700
Committer: Yongjun Zhang 
Committed: Tue May 29 23:40:26 2018 -0700

--
 hadoop-assemblies/pom.xml| 4 ++--
 hadoop-build-tools/pom.xml   | 2 +-
 hadoop-client-modules/hadoop-client-api/pom.xml  | 4 ++--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml | 4 ++--
 .../hadoop-client-check-test-invariants/pom.xml  | 4 ++--
 hadoop-client-modules/hadoop-client-integration-tests/pom.xml| 4 ++--
 hadoop-client-modules/hadoop-client-minicluster/pom.xml  | 4 ++--
 hadoop-client-modules/hadoop-client-runtime/pom.xml  | 4 ++--
 hadoop-client-modules/hadoop-client/pom.xml  | 4 ++--
 hadoop-client-modules/pom.xml| 2 +-
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml| 4 ++--
 hadoop-cloud-storage-project/pom.xml | 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml| 4 ++--
 hadoop-common-project/hadoop-common/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml | 4 ++--
 hadoop-common-project/pom.xml| 4 ++--
 hadoop-dist/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-common/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml| 4 ++--
 .../hadoop-mapreduce-client-nativetask/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client-shuffle/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml   | 4 ++--
 hadoop-mapreduce-project/pom.xml | 4 ++--
 hadoop-maven-plugins/pom.xml | 2 +-
 hadoop-minicluster/pom.xml   | 4 ++--
 hadoop-project-dist/pom.xml  | 4 ++--
 hadoop-project/pom.xml   | 4 ++--
 hadoop-tools/hadoop-aliyun/pom.xml   | 2 +-
 hadoop-tools/hadoop-archive-logs/pom.xml | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml  | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml   | 2 +-
 hadoop-tools/hadoop-azure/pom.xml| 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml   | 4 ++--
 hadoop-tools/hadoop-extras/pom.xml   | 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml  | 4 ++--
 hadoop-tools/hadoop-kafka/pom.xml| 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml| 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml| 4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml| 2 +-
 hadoop-tools/hadoop-rumen/pom.xml