hadoop git commit: HDFS-11002. Fix broken attr/getfattr/setfattr links in ExtendedAttributes.md. Contributed by Mingliang Liu.

2016-10-12 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 ac395be01 -> 8eb0b6f39


HDFS-11002. Fix broken attr/getfattr/setfattr links in ExtendedAttributes.md. 
Contributed by Mingliang Liu.

(cherry picked from commit 901eca004d0e7e413b109a93128892176c808d61)
(cherry picked from commit 43cf0b29732ebafb8b5b23b06d6015fe4cb4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8eb0b6f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8eb0b6f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8eb0b6f3

Branch: refs/heads/branch-2.7
Commit: 8eb0b6f39cb3fcfb93ef7b82700a95e0dc2ef37d
Parents: ac395be
Author: Akira Ajisaka 
Authored: Thu Oct 13 14:29:30 2016 +0900
Committer: Akira Ajisaka 
Committed: Thu Oct 13 14:32:47 2016 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../hadoop-hdfs/src/site/markdown/ExtendedAttributes.md  | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8eb0b6f3/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e4004ce..277efe1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -151,6 +151,9 @@ Release 2.7.4 - UNRELEASED
 HDFS-10991. Export hdfsTruncateFile symbol in libhdfs.
 (Surendra Singh Lilhore via wang)
 
+HDFS-11002. Fix broken attr/getfattr/setfattr links in
+ExtendedAttributes.md. (Mingliang Liu via aajisaka)
+
 Release 2.7.3 - 2016-08-25
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8eb0b6f3/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
index 5a20986..eb527ab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
@@ -30,7 +30,7 @@ Overview
 
 ### HDFS extended attributes
 
-Extended attributes in HDFS are modeled after extended attributes in Linux 
(see the Linux manpage for 
[attr(5)](http://www.bestbits.at/acl/man/man5/attr.txt) and [related 
documentation](http://www.bestbits.at/acl/)). An extended attribute is a 
*name-value pair*, with a string name and binary value. Xattrs names must also 
be prefixed with a *namespace*. For example, an xattr named *myXattr* in the 
*user* namespace would be specified as **user.myXattr**. Multiple xattrs can be 
associated with a single inode.
+Extended attributes in HDFS are modeled after extended attributes in Linux 
(see the Linux manpage for 
[attr(5)](http://man7.org/linux/man-pages/man5/attr.5.html)). An extended 
attribute is a *name-value pair*, with a string name and binary value. Xattrs 
names must also be prefixed with a *namespace*. For example, an xattr named 
*myXattr* in the *user* namespace would be specified as **user.myXattr**. 
Multiple xattrs can be associated with a single inode.
 
 ### Namespaces and Permissions
 
@@ -49,7 +49,7 @@ The `raw` namespace is reserved for internal system 
attributes that sometimes ne
 Interacting with extended attributes
 
 
-The Hadoop shell has support for interacting with extended attributes via 
`hadoop fs -getfattr` and `hadoop fs -setfattr`. These commands are styled 
after the Linux [getfattr(1)](http://www.bestbits.at/acl/man/man1/getfattr.txt) 
and [setfattr(1)](http://www.bestbits.at/acl/man/man1/setfattr.txt) commands.
+The Hadoop shell has support for interacting with extended attributes via 
`hadoop fs -getfattr` and `hadoop fs -setfattr`. These commands are styled 
after the Linux 
[getfattr(1)](http://man7.org/linux/man-pages/man1/getfattr.1.html) and 
[setfattr(1)](http://man7.org/linux/man-pages/man1/setfattr.1.html) commands.
 
 ### getfattr
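An aside on the commands referenced above (illustrative, not part of the commit): the same extended-attribute operations are exposed programmatically through the FileSystem API. A minimal sketch, assuming a reachable HDFS at fs.defaultFS and a hypothetical path /tmp/file1:

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class XAttrExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/file1");  // hypothetical path
        // Xattr names carry a namespace prefix, e.g. the "user" namespace.
        fs.setXAttr(p, "user.myXattr",
            "someValue".getBytes(StandardCharsets.UTF_8));
        byte[] value = fs.getXAttr(p, "user.myXattr");
        System.out.println(new String(value, StandardCharsets.UTF_8));
      }
    }

The shell equivalents are "hadoop fs -setfattr -n user.myXattr -v someValue /tmp/file1" and "hadoop fs -getfattr -d /tmp/file1".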
 



hadoop git commit: HDFS-11002. Fix broken attr/getfattr/setfattr links in ExtendedAttributes.md. Contributed by Mingliang Liu.

2016-10-12 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 9b2a71903 -> c7780b066


HDFS-11002. Fix broken attr/getfattr/setfattr links in ExtendedAttributes.md. 
Contributed by Mingliang Liu.

(cherry picked from commit 901eca004d0e7e413b109a93128892176c808d61)
(cherry picked from commit 43cf0b29732ebafb8b5b23b06d6015fe4cb4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c7780b06
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c7780b06
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c7780b06

Branch: refs/heads/branch-2.8
Commit: c7780b066d0ebea6f678937fab7126ab572c9305
Parents: 9b2a719
Author: Akira Ajisaka 
Authored: Thu Oct 13 14:29:30 2016 +0900
Committer: Akira Ajisaka 
Committed: Thu Oct 13 14:31:09 2016 +0900

--
 .../hadoop-hdfs/src/site/markdown/ExtendedAttributes.md  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7780b06/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
index 5a20986..eb527ab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
@@ -30,7 +30,7 @@ Overview
 
 ### HDFS extended attributes
 
-Extended attributes in HDFS are modeled after extended attributes in Linux 
(see the Linux manpage for 
[attr(5)](http://www.bestbits.at/acl/man/man5/attr.txt) and [related 
documentation](http://www.bestbits.at/acl/)). An extended attribute is a 
*name-value pair*, with a string name and binary value. Xattrs names must also 
be prefixed with a *namespace*. For example, an xattr named *myXattr* in the 
*user* namespace would be specified as **user.myXattr**. Multiple xattrs can be 
associated with a single inode.
+Extended attributes in HDFS are modeled after extended attributes in Linux 
(see the Linux manpage for 
[attr(5)](http://man7.org/linux/man-pages/man5/attr.5.html)). An extended 
attribute is a *name-value pair*, with a string name and binary value. Xattrs 
names must also be prefixed with a *namespace*. For example, an xattr named 
*myXattr* in the *user* namespace would be specified as **user.myXattr**. 
Multiple xattrs can be associated with a single inode.
 
 ### Namespaces and Permissions
 
@@ -49,7 +49,7 @@ The `raw` namespace is reserved for internal system 
attributes that sometimes ne
 Interacting with extended attributes
 
 
-The Hadoop shell has support for interacting with extended attributes via 
`hadoop fs -getfattr` and `hadoop fs -setfattr`. These commands are styled 
after the Linux [getfattr(1)](http://www.bestbits.at/acl/man/man1/getfattr.txt) 
and [setfattr(1)](http://www.bestbits.at/acl/man/man1/setfattr.txt) commands.
+The Hadoop shell has support for interacting with extended attributes via 
`hadoop fs -getfattr` and `hadoop fs -setfattr`. These commands are styled 
after the Linux 
[getfattr(1)](http://man7.org/linux/man-pages/man1/getfattr.1.html) and 
[setfattr(1)](http://man7.org/linux/man-pages/man1/setfattr.1.html) commands.
 
 ### getfattr
 



hadoop git commit: HDFS-11002. Fix broken attr/getfattr/setfattr links in ExtendedAttributes.md. Contributed by Mingliang Liu.

2016-10-12 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 b36aaa913 -> 43cf0b297


HDFS-11002. Fix broken attr/getfattr/setfattr links in ExtendedAttributes.md. 
Contributed by Mingliang Liu.

(cherry picked from commit 901eca004d0e7e413b109a93128892176c808d61)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43cf0b29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43cf0b29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43cf0b29

Branch: refs/heads/branch-2
Commit: 43cf0b29732ebafb8b5b23b06d6015fe4cb4
Parents: b36aaa9
Author: Akira Ajisaka 
Authored: Thu Oct 13 14:29:30 2016 +0900
Committer: Akira Ajisaka 
Committed: Thu Oct 13 14:30:45 2016 +0900

--
 .../hadoop-hdfs/src/site/markdown/ExtendedAttributes.md  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/43cf0b29/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
index 5a20986..eb527ab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
@@ -30,7 +30,7 @@ Overview
 
 ### HDFS extended attributes
 
-Extended attributes in HDFS are modeled after extended attributes in Linux 
(see the Linux manpage for 
[attr(5)](http://www.bestbits.at/acl/man/man5/attr.txt) and [related 
documentation](http://www.bestbits.at/acl/)). An extended attribute is a 
*name-value pair*, with a string name and binary value. Xattrs names must also 
be prefixed with a *namespace*. For example, an xattr named *myXattr* in the 
*user* namespace would be specified as **user.myXattr**. Multiple xattrs can be 
associated with a single inode.
+Extended attributes in HDFS are modeled after extended attributes in Linux 
(see the Linux manpage for 
[attr(5)](http://man7.org/linux/man-pages/man5/attr.5.html)). An extended 
attribute is a *name-value pair*, with a string name and binary value. Xattrs 
names must also be prefixed with a *namespace*. For example, an xattr named 
*myXattr* in the *user* namespace would be specified as **user.myXattr**. 
Multiple xattrs can be associated with a single inode.
 
 ### Namespaces and Permissions
 
@@ -49,7 +49,7 @@ The `raw` namespace is reserved for internal system 
attributes that sometimes ne
 Interacting with extended attributes
 
 
-The Hadoop shell has support for interacting with extended attributes via 
`hadoop fs -getfattr` and `hadoop fs -setfattr`. These commands are styled 
after the Linux [getfattr(1)](http://www.bestbits.at/acl/man/man1/getfattr.txt) 
and [setfattr(1)](http://www.bestbits.at/acl/man/man1/setfattr.txt) commands.
+The Hadoop shell has support for interacting with extended attributes via 
`hadoop fs -getfattr` and `hadoop fs -setfattr`. These commands are styled 
after the Linux 
[getfattr(1)](http://man7.org/linux/man-pages/man1/getfattr.1.html) and 
[setfattr(1)](http://man7.org/linux/man-pages/man1/setfattr.1.html) commands.
 
 ### getfattr
 



hadoop git commit: HDFS-11002. Fix broken attr/getfattr/setfattr links in ExtendedAttributes.md. Contributed by Mingliang Liu.

2016-10-12 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 12d739a34 -> 901eca004


HDFS-11002. Fix broken attr/getfattr/setfattr links in ExtendedAttributes.md. 
Contributed by Mingliang Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/901eca00
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/901eca00
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/901eca00

Branch: refs/heads/trunk
Commit: 901eca004d0e7e413b109a93128892176c808d61
Parents: 12d739a
Author: Akira Ajisaka 
Authored: Thu Oct 13 14:29:30 2016 +0900
Committer: Akira Ajisaka 
Committed: Thu Oct 13 14:29:30 2016 +0900

--
 .../hadoop-hdfs/src/site/markdown/ExtendedAttributes.md  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/901eca00/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
index 5a20986..eb527ab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ExtendedAttributes.md
@@ -30,7 +30,7 @@ Overview
 
 ### HDFS extended attributes
 
-Extended attributes in HDFS are modeled after extended attributes in Linux 
(see the Linux manpage for 
[attr(5)](http://www.bestbits.at/acl/man/man5/attr.txt) and [related 
documentation](http://www.bestbits.at/acl/)). An extended attribute is a 
*name-value pair*, with a string name and binary value. Xattrs names must also 
be prefixed with a *namespace*. For example, an xattr named *myXattr* in the 
*user* namespace would be specified as **user.myXattr**. Multiple xattrs can be 
associated with a single inode.
+Extended attributes in HDFS are modeled after extended attributes in Linux 
(see the Linux manpage for 
[attr(5)](http://man7.org/linux/man-pages/man5/attr.5.html)). An extended 
attribute is a *name-value pair*, with a string name and binary value. Xattrs 
names must also be prefixed with a *namespace*. For example, an xattr named 
*myXattr* in the *user* namespace would be specified as **user.myXattr**. 
Multiple xattrs can be associated with a single inode.
 
 ### Namespaces and Permissions
 
@@ -49,7 +49,7 @@ The `raw` namespace is reserved for internal system 
attributes that sometimes ne
 Interacting with extended attributes
 
 
-The Hadoop shell has support for interacting with extended attributes via 
`hadoop fs -getfattr` and `hadoop fs -setfattr`. These commands are styled 
after the Linux [getfattr(1)](http://www.bestbits.at/acl/man/man1/getfattr.txt) 
and [setfattr(1)](http://www.bestbits.at/acl/man/man1/setfattr.txt) commands.
+The Hadoop shell has support for interacting with extended attributes via 
`hadoop fs -getfattr` and `hadoop fs -setfattr`. These commands are styled 
after the Linux 
[getfattr(1)](http://man7.org/linux/man-pages/man1/getfattr.1.html) and 
[setfattr(1)](http://man7.org/linux/man-pages/man1/setfattr.1.html) commands.
 
 ### getfattr
 



hadoop git commit: HDFS-10903. Replace config key literal strings with config key names II: hadoop hdfs. Contributed by Chen Liang

2016-10-12 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f131d61ff -> b36aaa913


HDFS-10903. Replace config key literal strings with config key names II: hadoop 
hdfs. Contributed by Chen Liang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b36aaa91
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b36aaa91
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b36aaa91

Branch: refs/heads/branch-2
Commit: b36aaa913c54829adbf5fa852af7f45be4db0f07
Parents: f131d61
Author: Mingliang Liu 
Authored: Wed Oct 12 17:26:11 2016 -0700
Committer: Mingliang Liu 
Committed: Wed Oct 12 17:26:11 2016 -0700

--
 .../java/org/apache/hadoop/fs/http/server/FSOperations.java | 9 +++--
 .../hadoop/lib/service/hadoop/FileSystemAccessService.java  | 6 --
 .../src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java | 4 
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml | 7 +++
 .../test/java/org/apache/hadoop/hdfs/TestFileAppend4.java   | 3 ++-
 .../hdfs/server/blockmanagement/TestBlockTokenWithDFS.java  | 3 ++-
 6 files changed, 26 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b36aaa91/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 46948f9..001bc92 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -48,6 +48,9 @@ import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 
+import static org.apache.hadoop.hdfs.DFSConfigKeys.HTTPFS_BUFFER_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.HTTP_BUFFER_SIZE_DEFAULT;
+
 /**
  * FileSystem operation executors used by {@link HttpFSServer}.
  */
@@ -462,7 +465,8 @@ public class FSOperations {
 blockSize = fs.getDefaultBlockSize(path);
   }
   FsPermission fsPermission = new FsPermission(permission);
-  int bufferSize = fs.getConf().getInt("httpfs.buffer.size", 4096);
+  int bufferSize = fs.getConf().getInt(HTTPFS_BUFFER_SIZE_KEY,
+  HTTP_BUFFER_SIZE_DEFAULT);
   OutputStream os = fs.create(path, fsPermission, override, bufferSize, 
replication, blockSize, null);
   IOUtils.copyBytes(is, os, bufferSize, true);
   os.close();
@@ -752,7 +756,8 @@ public class FSOperations {
  */
 @Override
 public InputStream execute(FileSystem fs) throws IOException {
-  int bufferSize = 
HttpFSServerWebApp.get().getConfig().getInt("httpfs.buffer.size", 4096);
+  int bufferSize = HttpFSServerWebApp.get().getConfig().getInt(
+  HTTPFS_BUFFER_SIZE_KEY, HTTP_BUFFER_SIZE_DEFAULT);
   return fs.open(path, bufferSize);
 }
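
The pattern HDFS-10903 applies here is to define each key name and its default once and reference the constants everywhere. A minimal standalone sketch of the idea (the key string and default mirror the literals being replaced; the class name is a stand-in for DFSConfigKeys):

    import org.apache.hadoop.conf.Configuration;

    final class HttpFsKeys {  // stand-in for the constants in DFSConfigKeys
      static final String HTTPFS_BUFFER_SIZE_KEY = "httpfs.buffer.size";
      static final int HTTP_BUFFER_SIZE_DEFAULT = 4096;
    }

    class BufferSizeDemo {
      static int bufferSize(Configuration conf) {
        // Single definition of key and default; call sites cannot drift.
        return conf.getInt(HttpFsKeys.HTTPFS_BUFFER_SIZE_KEY,
            HttpFsKeys.HTTP_BUFFER_SIZE_DEFAULT);
      }
    }

Centralizing the strings also keeps hdfs-default.xml and the code agreeing on a single spelling of each key.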
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b36aaa91/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
index 0b767be..61d3b45 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
@@ -50,6 +50,8 @@ import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
+
 @InterfaceAudience.Private
 public class FileSystemAccessService extends BaseService implements 
FileSystemAccess {
   private static final Logger LOG = 
LoggerFactory.getLogger(FileSystemAccessService.class);
@@ -159,7 +161,7 @@ public class FileSystemAccessService extends BaseService 
implements FileSystemAc
 throw new ServiceException(FileSystemAccessException.ERROR.H01, 
KERBEROS_PRINCIPAL);
   }
   Configuration conf = new Configuration();
-  conf.set("hadoop.security.authentication", "kerberos");
+  conf.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
   

hadoop git commit: HDFS-10903. Replace config key literal strings with config key names II: hadoop hdfs. Contributed by Chen Liang

2016-10-12 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 9bde45d2f -> 9b2a71903


HDFS-10903. Replace config key literal strings with config key names II: hadoop 
hdfs. Contributed by Chen Liang

(cherry picked from commit b36aaa913c54829adbf5fa852af7f45be4db0f07)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9b2a7190
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9b2a7190
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9b2a7190

Branch: refs/heads/branch-2.8
Commit: 9b2a719031f1e2b097afb19e0994ae22ac7c2288
Parents: 9bde45d
Author: Mingliang Liu 
Authored: Wed Oct 12 17:26:11 2016 -0700
Committer: Mingliang Liu 
Committed: Wed Oct 12 17:41:07 2016 -0700

--
 .../java/org/apache/hadoop/fs/http/server/FSOperations.java | 9 +++--
 .../hadoop/lib/service/hadoop/FileSystemAccessService.java  | 6 --
 .../src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java | 4 
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml | 7 +++
 .../test/java/org/apache/hadoop/hdfs/TestFileAppend4.java   | 3 ++-
 .../hdfs/server/blockmanagement/TestBlockTokenWithDFS.java  | 3 ++-
 6 files changed, 26 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9b2a7190/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 39597eb..2d17b72 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -46,6 +46,9 @@ import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 
+import static org.apache.hadoop.hdfs.DFSConfigKeys.HTTPFS_BUFFER_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.HTTP_BUFFER_SIZE_DEFAULT;
+
 /**
  * FileSystem operation executors used by {@link HttpFSServer}.
  */
@@ -439,7 +442,8 @@ public class FSOperations {
 blockSize = fs.getDefaultBlockSize(path);
   }
   FsPermission fsPermission = new FsPermission(permission);
-  int bufferSize = fs.getConf().getInt("httpfs.buffer.size", 4096);
+  int bufferSize = fs.getConf().getInt(HTTPFS_BUFFER_SIZE_KEY,
+  HTTP_BUFFER_SIZE_DEFAULT);
   OutputStream os = fs.create(path, fsPermission, override, bufferSize, 
replication, blockSize, null);
   IOUtils.copyBytes(is, os, bufferSize, true);
   os.close();
@@ -690,7 +694,8 @@ public class FSOperations {
  */
 @Override
 public InputStream execute(FileSystem fs) throws IOException {
-  int bufferSize = 
HttpFSServerWebApp.get().getConfig().getInt("httpfs.buffer.size", 4096);
+  int bufferSize = HttpFSServerWebApp.get().getConfig().getInt(
+  HTTPFS_BUFFER_SIZE_KEY, HTTP_BUFFER_SIZE_DEFAULT);
   return fs.open(path, bufferSize);
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9b2a7190/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
index 88780cb..53f64d6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
@@ -50,6 +50,8 @@ import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
+
 @InterfaceAudience.Private
 public class FileSystemAccessService extends BaseService implements 
FileSystemAccess {
   private static final Logger LOG = 
LoggerFactory.getLogger(FileSystemAccessService.class);
@@ -159,7 +161,7 @@ public class FileSystemAccessService extends BaseService 
implements FileSystemAc
 throw new ServiceException(FileSystemAccessException.ERROR.H01, 
KERBEROS_PRINCIPAL);
   }
   Configuration conf = new Configuration();
-  conf.set("hadoop.security.authentication", "kerberos");
+  

[2/2] hadoop git commit: HDFS-10995. Ozone: Move ozone XceiverClient to hdfs-client. Contributed by Chen Liang.

2016-10-12 Thread aengineer
HDFS-10995. Ozone: Move ozone XceiverClient to hdfs-client. Contributed by Chen 
Liang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef84ac46
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef84ac46
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef84ac46

Branch: refs/heads/HDFS-7240
Commit: ef84ac46990338a10d62c07c0ce9a58774cec312
Parents: 4217f85
Author: Anu Engineer 
Authored: Wed Oct 12 17:37:14 2016 -0700
Committer: Anu Engineer 
Committed: Wed Oct 12 17:37:14 2016 -0700

--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml  |   5 +
 .../org/apache/hadoop/scm/ScmConfigKeys.java|  32 ++
 .../org/apache/hadoop/scm/XceiverClient.java| 134 
 .../apache/hadoop/scm/XceiverClientHandler.java | 112 +++
 .../hadoop/scm/XceiverClientInitializer.java|  68 
 .../apache/hadoop/scm/XceiverClientManager.java |  84 +
 .../scm/container/common/helpers/Pipeline.java  | 128 
 .../container/common/helpers/package-info.java  |  22 ++
 .../org/apache/hadoop/scm/package-info.java |  24 ++
 .../main/proto/DatanodeContainerProtocol.proto  | 320 +++
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   1 -
 .../container/common/helpers/ChunkUtils.java|   1 +
 .../container/common/helpers/Pipeline.java  | 128 
 .../container/common/impl/ChunkManagerImpl.java |   2 +-
 .../common/impl/ContainerManagerImpl.java   |   2 +-
 .../ozone/container/common/impl/Dispatcher.java |   2 +-
 .../container/common/impl/KeyManagerImpl.java   |   2 +-
 .../common/interfaces/ChunkManager.java |   2 +-
 .../common/interfaces/ContainerManager.java |   2 +-
 .../container/common/interfaces/KeyManager.java |   2 +-
 .../common/transport/client/XceiverClient.java  | 135 
 .../transport/client/XceiverClientHandler.java  | 112 ---
 .../client/XceiverClientInitializer.java|  68 
 .../transport/client/XceiverClientManager.java  |  83 -
 .../common/transport/client/package-info.java   |  24 --
 .../ozone/storage/StorageContainerManager.java  |   6 +-
 .../ozone/web/storage/ChunkInputStream.java |   4 +-
 .../ozone/web/storage/ChunkOutputStream.java|   4 +-
 .../web/storage/ContainerProtocolCalls.java |   2 +-
 .../web/storage/DistributedStorageHandler.java  |   6 +-
 .../main/proto/DatanodeContainerProtocol.proto  | 320 ---
 .../ozone/container/ContainerTestHelper.java|   2 +-
 .../common/impl/TestContainerPersistence.java   |   2 +-
 .../container/ozoneimpl/TestOzoneContainer.java |   4 +-
 .../transport/server/TestContainerServer.java   |   4 +-
 35 files changed, 954 insertions(+), 895 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef84ac46/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
index 1e38019..47692e2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
@@ -111,6 +111,10 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <groupId>com.fasterxml.jackson.core</groupId>
       <artifactId>jackson-annotations</artifactId>
     </dependency>
+    <dependency>
+      <groupId>io.netty</groupId>
+      <artifactId>netty-all</artifactId>
+    </dependency>
   </dependencies>
 
   <build>
   inotify.proto
   erasurecoding.proto
   ReconfigurationProtocol.proto
+  DatanodeContainerProtocol.proto
 
   
   
${project.build.directory}/generated-sources/java

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef84ac46/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/ScmConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/ScmConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/ScmConfigKeys.java
new file mode 100644
index 000..a1b2393
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/ScmConfigKeys.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0

[1/2] hadoop git commit: HDFS-10995. Ozone: Move ozone XceiverClient to hdfs-client. Contributed by Chen Liang.

2016-10-12 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7240 4217f8520 -> ef84ac469


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef84ac46/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeContainerProtocol.proto
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeContainerProtocol.proto
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeContainerProtocol.proto
deleted file mode 100644
index 04d77db..000
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeContainerProtocol.proto
+++ /dev/null
@@ -1,320 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/**
- * These .proto interfaces are private and Unstable.
- * Please see 
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/InterfaceClassification.html
- * for what changes are allowed for a *Unstable* .proto interface.
- */
-
-// This file contains protocol buffers that are used to transfer data
-// to and from the datanode.
-option java_package = "org.apache.hadoop.hdfs.ozone.protocol.proto";
-option java_outer_classname = "ContainerProtos";
-option java_generate_equals_and_hash = true;
-package hadoop.hdfs.ozone;
-import "hdfs.proto";
-
-/**
- * Commands that are used to manipulate the state of containers on a datanode.
- *
- * These commands allow us to work against the datanode - from
- * StorageContainer Manager as well as clients.
- *
- *  1. CreateContainer - This call is usually made by Storage Container
- * manager, when we need to create a new container on a given datanode.
- *
- *  2. ReadContainer - Allows end user to stat a container. For example
- * this allows us to return the metadata of a container.
- *
- *  3. UpdateContainer - Updates a container metadata.
-
- *  4. DeleteContainer - This call is made to delete a container.
- *
- *  5. ListContainer - Returns the list of containers on this
- * datanode. This will be used by tests and tools.
- *
- *  6. PutKey - Given a valid container, creates a key.
- *
- *  7. GetKey - Allows user to read the metadata of a Key.
- *
- *  8. DeleteKey - Deletes a given key.
- *
- *  9. ListKey - Returns a list of keys that are present inside
- *  a given container.
- *
- *  10. ReadChunk - Allows us to read a chunk.
- *
- *  11. DeleteChunk - Delete an unused chunk.
- *
- *  12. WriteChunk - Allows us to write a chunk
- *
- *  13. ListChunk - Given a Container/Key returns the list of Chunks.
- *
- *  14. CompactChunk - Re-writes a chunk based on Offsets.
- */
-
-enum Type {
-   CreateContainer = 1;
-   ReadContainer = 2;
-   UpdateContainer = 3;
-   DeleteContainer = 4;
-   ListContainer = 5;
-
-   PutKey = 6;
-   GetKey = 7;
-   DeleteKey = 8;
-   ListKey = 9;
-
-   ReadChunk = 10;
-   DeleteChunk = 11;
-   WriteChunk = 12;
-   ListChunk = 13;
-   CompactChunk = 14;
-}
-
-
-enum Result {
-  SUCCESS = 1;
-  UNSUPPORTED_REQUEST = 2;
-  MALFORMED_REQUEST = 3;
-  CONTAINER_INTERNAL_ERROR = 4;
-}
-
-message ContainerCommandRequestProto {
-  required Type cmdType = 1; // Type of the command
-
-  // A string that identifies this command, we generate  Trace ID in Ozone
-  // frontend and this allows us to trace that command all over ozone.
-  optional string traceID = 2;
-
-  // One of the following command is available when the corresponding
-  // cmdType is set. At the protocol level we allow only
-  // one command in each packet.
-  // TODO : Upgrade to Protobuf 2.6 or later.
-  optional   CreateContainerRequestProto createContainer = 3;
-  optional   ReadContainerRequestProto readContainer = 4;
-  optional   UpdateContainerRequestProto updateContainer = 5;
-  optional   DeleteContainerRequestProto deleteContainer = 6;
-  optional   ListContainerRequestProto listContainer = 7;
-
-  optional   PutKeyRequestProto putKey = 8;
-  optional   GetKeyRequestProto getKey = 9;
-  optional   DeleteKeyRequestProto deleteKey = 10;
-  optional   ListKeyRequestProto listKey = 11;
-
-  optional   ReadChunkRequestProto readChunk = 12;
-  optional   WriteChunkRequestProto writeChunk = 13;
-  optional   DeleteChunkRequestProto deleteChunk = 14;
-  optional   ListChunkRequestProto listChunk = 15;
-}
-

hadoop git commit: HADOOP-13700. Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures.

2016-10-12 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 85cd06f66 -> 12d739a34


HADOOP-13700. Remove unthrown IOException from TrashPolicy#initialize and 
#getInstance signatures.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/12d739a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/12d739a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/12d739a3

Branch: refs/heads/trunk
Commit: 12d739a34ba868b3f7f5adf7f37a60d4aca9061b
Parents: 85cd06f
Author: Andrew Wang 
Authored: Wed Oct 12 15:19:52 2016 -0700
Committer: Andrew Wang 
Committed: Wed Oct 12 15:19:52 2016 -0700

--
 .../src/main/java/org/apache/hadoop/fs/TrashPolicy.java| 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/12d739a3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
index 157b9ab..2fe3fd1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
@@ -53,9 +53,8 @@ public abstract class TrashPolicy extends Configured {
* not assume trash always under /user/$USER due to HDFS encryption zone.
* @param conf the configuration to be used
* @param fs the filesystem to be used
-   * @throws IOException
*/
-  public void initialize(Configuration conf, FileSystem fs) throws IOException{
+  public void initialize(Configuration conf, FileSystem fs) {
 throw new UnsupportedOperationException();
   }
 
@@ -137,8 +136,7 @@ public abstract class TrashPolicy extends Configured {
* @param fs the file system to be used
* @return an instance of TrashPolicy
*/
-  public static TrashPolicy getInstance(Configuration conf, FileSystem fs)
-  throws IOException {
+  public static TrashPolicy getInstance(Configuration conf, FileSystem fs) {
 Class<? extends TrashPolicy> trashClass = conf.getClass(
 "fs.trash.classname", TrashPolicyDefault.class, TrashPolicy.class);
 TrashPolicy trash = ReflectionUtils.newInstance(trashClass, conf);
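
Dropping the unthrown IOException is safe here because an override may declare fewer checked exceptions than the method it overrides, never more; the flip side is that subclasses which still declare the exception stop compiling. A minimal standalone sketch of that language rule (hypothetical classes, not Hadoop code):

    abstract class Base {
      // After the cleanup: no "throws IOException" in the signature.
      public void initialize() {
        throw new UnsupportedOperationException();
      }
    }

    class Impl extends Base {
      @Override
      public void initialize() {
        // Declaring "throws java.io.IOException" here would no longer
        // compile, since an override cannot widen checked exceptions.
      }
    }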



hadoop git commit: HADOOP-13700. Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures.

2016-10-12 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 5305a392c -> f131d61ff


HADOOP-13700. Remove unthrown IOException from TrashPolicy#initialize and 
#getInstance signatures.

(cherry picked from commit 12d739a34ba868b3f7f5adf7f37a60d4aca9061b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f131d61f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f131d61f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f131d61f

Branch: refs/heads/branch-2
Commit: f131d61ff8bc87e74c67fec3ae2b50ee50dd0d2c
Parents: 5305a39
Author: Andrew Wang 
Authored: Wed Oct 12 15:19:52 2016 -0700
Committer: Andrew Wang 
Committed: Wed Oct 12 15:20:07 2016 -0700

--
 .../src/main/java/org/apache/hadoop/fs/TrashPolicy.java| 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f131d61f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
index 157b9ab..2fe3fd1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
@@ -53,9 +53,8 @@ public abstract class TrashPolicy extends Configured {
* not assume trash always under /user/$USER due to HDFS encryption zone.
* @param conf the configuration to be used
* @param fs the filesystem to be used
-   * @throws IOException
*/
-  public void initialize(Configuration conf, FileSystem fs) throws IOException{
+  public void initialize(Configuration conf, FileSystem fs) {
 throw new UnsupportedOperationException();
   }
 
@@ -137,8 +136,7 @@ public abstract class TrashPolicy extends Configured {
* @param fs the file system to be used
* @return an instance of TrashPolicy
*/
-  public static TrashPolicy getInstance(Configuration conf, FileSystem fs)
-  throws IOException {
+  public static TrashPolicy getInstance(Configuration conf, FileSystem fs) {
 Class<? extends TrashPolicy> trashClass = conf.getClass(
 "fs.trash.classname", TrashPolicyDefault.class, TrashPolicy.class);
 TrashPolicy trash = ReflectionUtils.newInstance(trashClass, conf);



hadoop git commit: HADOOP-13700. Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures.

2016-10-12 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 136c6f6f7 -> 9bde45d2f


HADOOP-13700. Remove unthrown IOException from TrashPolicy#initialize and 
#getInstance signatures.

(cherry picked from commit 12d739a34ba868b3f7f5adf7f37a60d4aca9061b)
(cherry picked from commit f131d61ff8bc87e74c67fec3ae2b50ee50dd0d2c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9bde45d2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9bde45d2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9bde45d2

Branch: refs/heads/branch-2.8
Commit: 9bde45d2fe0f830813c6db2c7ae827937d92ab67
Parents: 136c6f6
Author: Andrew Wang 
Authored: Wed Oct 12 15:19:52 2016 -0700
Committer: Andrew Wang 
Committed: Wed Oct 12 15:20:10 2016 -0700

--
 .../src/main/java/org/apache/hadoop/fs/TrashPolicy.java| 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9bde45d2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
index 157b9ab..2fe3fd1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
@@ -53,9 +53,8 @@ public abstract class TrashPolicy extends Configured {
* not assume trash always under /user/$USER due to HDFS encryption zone.
* @param conf the configuration to be used
* @param fs the filesystem to be used
-   * @throws IOException
*/
-  public void initialize(Configuration conf, FileSystem fs) throws IOException{
+  public void initialize(Configuration conf, FileSystem fs) {
 throw new UnsupportedOperationException();
   }
 
@@ -137,8 +136,7 @@ public abstract class TrashPolicy extends Configured {
* @param fs the file system to be used
* @return an instance of TrashPolicy
*/
-  public static TrashPolicy getInstance(Configuration conf, FileSystem fs)
-  throws IOException {
+  public static TrashPolicy getInstance(Configuration conf, FileSystem fs) {
 Class<? extends TrashPolicy> trashClass = conf.getClass(
 "fs.trash.classname", TrashPolicyDefault.class, TrashPolicy.class);
 TrashPolicy trash = ReflectionUtils.newInstance(trashClass, conf);



hadoop git commit: HDFS-10789. Route webhdfs through the RPC call queue. Contributed by Daryn Sharp and Rushabh S Shah.

2016-10-12 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 0f481e202 -> 136c6f6f7


HDFS-10789. Route webhdfs through the RPC call queue. Contributed by Daryn 
Sharp and Rushabh S Shah.

(cherry picked from commit 5305a392c39d298ecf38ca2dfd2526adeee9cd38)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/136c6f6f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/136c6f6f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/136c6f6f

Branch: refs/heads/branch-2.8
Commit: 136c6f6f7d069a9242173e7981d995f97da8f257
Parents: 0f481e2
Author: Kihwal Lee 
Authored: Wed Oct 12 15:38:34 2016 -0500
Committer: Kihwal Lee 
Committed: Wed Oct 12 15:38:34 2016 -0500

--
 .../org/apache/hadoop/ipc/ExternalCall.java |   9 +-
 .../java/org/apache/hadoop/ipc/TestRPC.java |   6 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   3 +
 .../hdfs/server/namenode/FSNamesystem.java  |  15 +-
 .../hadoop/hdfs/server/namenode/NameNode.java   |  12 +-
 .../hdfs/server/namenode/NameNodeRpcServer.java |   6 +-
 .../web/resources/NamenodeWebHdfsMethods.java   | 150 +++
 .../src/main/resources/hdfs-default.xml |   8 +
 .../server/namenode/TestNamenodeRetryCache.java |  25 +++-
 .../web/resources/TestWebHdfsDataLocality.java  |  25 +++-
 10 files changed, 161 insertions(+), 98 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/136c6f6f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
index 9b4cbcf..5566136 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.ipc;
 
 import java.io.IOException;
 import java.security.PrivilegedExceptionAction;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.hadoop.ipc.Server.Call;
@@ -37,14 +38,10 @@ public abstract class ExternalCall<T> extends Call {
 
   public abstract UserGroupInformation getRemoteUser();
 
-  public final T get() throws IOException, InterruptedException {
+  public final T get() throws InterruptedException, ExecutionException {
 waitForCompletion();
 if (error != null) {
-  if (error instanceof IOException) {
-throw (IOException)error;
-  } else {
-throw new IOException(error);
-  }
+  throw new ExecutionException(error);
 }
 return result;
   }
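
With this change, callers of ExternalCall#get receive the server-side failure wrapped in an ExecutionException, in line with java.util.concurrent.Future, and unwrap the cause themselves. A sketch of a helper a caller might use (illustrative; getUnwrapped is not part of this patch):

    import java.io.IOException;
    import java.util.concurrent.ExecutionException;
    import org.apache.hadoop.ipc.ExternalCall;

    final class ExternalCallClient {
      // Restores the old behavior of surfacing the original IOException.
      static <T> T getUnwrapped(ExternalCall<T> call)
          throws IOException, InterruptedException {
        try {
          return call.get();
        } catch (ExecutionException ee) {
          Throwable cause = ee.getCause();
          throw (cause instanceof IOException)
              ? (IOException) cause : new IOException(cause);
        }
      }
    }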

http://git-wip-us.apache.org/repos/asf/hadoop/blob/136c6f6f/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
index 78f283e..538f5db 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
@@ -72,6 +72,7 @@ import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.CyclicBarrier;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
@@ -1001,8 +1002,9 @@ public class TestRPC extends TestRpcBase {
   try {
 exceptionCall.get();
 fail("didn't throw");
-  } catch (IOException ioe) {
-assertEquals(expectedIOE.getMessage(), ioe.getMessage());
+  } catch (ExecutionException ee) {
+assertTrue((ee.getCause()) instanceof IOException);
+assertEquals(expectedIOE.getMessage(), ee.getCause().getMessage());
   }
 } finally {
   server.stop();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/136c6f6f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java

hadoop git commit: Addendum patch for YARN-5610. Contributed by Gour Saha

2016-10-12 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/yarn-native-services 5be4c0735 -> 42083da04


Addendum patch for YARN-5610. Contributed by Gour Saha


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/42083da0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/42083da0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/42083da0

Branch: refs/heads/yarn-native-services
Commit: 42083da0465a5cfd23e265e58e532b0a8ba541cd
Parents: 5be4c07
Author: Jian He 
Authored: Wed Oct 12 13:33:09 2016 -0700
Committer: Jian He 
Committed: Wed Oct 12 13:33:09 2016 -0700

--
 .../yarn/services/resource/Application.java | 44 ++--
 .../services/resource/ApplicationState.java |  5 +++
 .../services/resource/ApplicationStatus.java|  8 ++--
 .../hadoop/yarn/services/resource/Artifact.java |  4 +-
 .../yarn/services/resource/Component.java   | 16 +++
 .../yarn/services/resource/Container.java   | 15 ---
 .../yarn/services/resource/ReadinessCheck.java  |  6 +--
 7 files changed, 54 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/42083da0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/services/resource/Application.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/services/resource/Application.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/services/resource/Application.java
index cfcae95..719bf95 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/services/resource/Application.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/services/resource/Application.java
@@ -48,8 +48,8 @@ import com.fasterxml.jackson.annotation.JsonPropertyOrder;
 public class Application extends BaseResource {
   private static final long serialVersionUID = -4491694636566094885L;
 
-  private String id = null;
   private String name = null;
+  private String id = null;
   private Artifact artifact = null;
   private Resource resource = null;
   private String launchCommand = null;
@@ -63,25 +63,7 @@ public class Application extends BaseResource {
  private List<Container> containers = new ArrayList<>();
   private ApplicationState state = null;
  private Map<String, String> quicklinks = null;
-  private String queue;
-
-  /**
-   * A unique application id.
-   **/
-  public Application id(String id) {
-this.id = id;
-return this;
-  }
-
-  @ApiModelProperty(example = "null", required = true, value = "A unique 
application id.")
-  @JsonProperty("id")
-  public String getId() {
-return id;
-  }
-
-  public void setId(String id) {
-this.id = id;
-  }
+  private String queue = null;
 
   /**
* A unique application name.
@@ -102,6 +84,24 @@ public class Application extends BaseResource {
   }
 
   /**
+   * A unique application id.
+   **/
+  public Application id(String id) {
+this.id = id;
+return this;
+  }
+
+  @ApiModelProperty(example = "null", value = "A unique application id.")
+  @JsonProperty("id")
+  public String getId() {
+return id;
+  }
+
+  public void setId(String id) {
+this.id = id;
+  }
+
+  /**
* Artifact of single-component applications. Mandatory if components
* attribute is not specified.
**/
@@ -423,8 +423,8 @@ public class Application extends BaseResource {
 sb.append("numberOfRunningContainers: ")
 .append(toIndentedString(numberOfRunningContainers)).append("\n");
 sb.append("lifetime: 
").append(toIndentedString(lifetime)).append("\n");
-sb.append("placementPolicy: ")
-.append(toIndentedString(placementPolicy)).append("\n");
+sb.append("placementPolicy: 
").append(toIndentedString(placementPolicy))
+.append("\n");
 sb.append("components: ").append(toIndentedString(components))
 .append("\n");
 sb.append("configuration: ").append(toIndentedString(configuration))

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42083da0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/services/resource/ApplicationState.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/services/resource/ApplicationState.java
 

hadoop git commit: YARN-5675. Swagger definition for YARN service API. Contributed by Gour Saha

2016-10-12 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/yarn-native-services 7224bdbcc -> 5be4c0735


YARN-5675. Swagger definition for YARN service API. Contributed by Gour Saha


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5be4c073
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5be4c073
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5be4c073

Branch: refs/heads/yarn-native-services
Commit: 5be4c0735ba433c2a2873437efebabab2239e518
Parents: 7224bdb
Author: Jian He 
Authored: Wed Oct 12 13:27:53 2016 -0700
Committer: Jian He 
Committed: Wed Oct 12 13:27:53 2016 -0700

--
 ...RN-Simplified-V1-API-Layer-For-Services.yaml | 416 +++
 1 file changed, 416 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5be4c073/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/resources/definition/YARN-Simplified-V1-API-Layer-For-Services.yaml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/resources/definition/YARN-Simplified-V1-API-Layer-For-Services.yaml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/resources/definition/YARN-Simplified-V1-API-Layer-For-Services.yaml
new file mode 100644
index 000..6169fcd
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/resources/definition/YARN-Simplified-V1-API-Layer-For-Services.yaml
@@ -0,0 +1,416 @@
+# Hadoop YARN REST APIs for services v1 spec in YAML
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+swagger: '2.0'
+info:
+  title: "[YARN-4793] Simplified API layer for services and beyond"
+  description: |
+Bringing a new service on YARN today is not a simple experience. The APIs 
of existing frameworks are either too low level (native YARN), require writing 
new code (for frameworks with programmatic APIs) or writing a complex spec (for 
declarative frameworks). In addition to building critical building blocks 
inside YARN (as part of other efforts at 
link:https://issues.apache.org/jira/browse/YARN-4692[YARN-4692]), there is a 
need for simplifying the user facing story for building services. Experience of 
projects like Apache Slider running real-life services like HBase, Storm, 
Accumulo, Solr etc, gives us some very good insights on how simplified APIs for 
services should look like.
+
+
+To this end, we should look at a new simple-services API layer backed by 
REST interfaces. This API can be used to create and manage the lifecycle of 
YARN services. Services here can range from simple single-component apps to 
complex multi-component assemblies needing orchestration.
+
+
+We should also look at making this a unified REST based entry point for 
other important features like resource-profile management 
(link:https://issues.apache.org/jira/browse/YARN-3926[YARN-3926]), 
package-definitions' lifecycle-management and service-discovery 
(link:https://issues.apache.org/jira/browse/YARN-913[YARN-913]/link:https://issues.apache.org/jira/browse/YARN-4757[YARN-4757]).
 We also need to flesh out its relation to our present much lower level REST 
APIs (link:https://issues.apache.org/jira/browse/YARN-1695[YARN-1695]) in YARN 
for application-submission and management.
+
+
+This document spotlights on this specification. In most of the cases, the 
application owner will not be forced to make any changes to their application. 
This is primarily true if the application is packaged with containerization 
technologies like docker. Irrespective of how complex the application is, there 
will be hooks provided at appropriate layers to allow pluggable and 
customizable application behavior.
+
+  version: "1.0.0"
+  license:
+name: Apache 2.0
+url: http://www.apache.org/licenses/LICENSE-2.0.html
+# the domain of the service
+host: host.mycompany.com
+# array of all 

hadoop git commit: YARN-5698. [YARN-3368] Launch new YARN UI under hadoop web app port. (Sunil G via wangda)

2016-10-12 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/YARN-3368 1e4751815 -> 60c881007


YARN-5698. [YARN-3368] Launch new YARN UI under hadoop web app port. (Sunil G 
via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/60c88100
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/60c88100
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/60c88100

Branch: refs/heads/YARN-3368
Commit: 60c88100779446f2b16d242bdffc7263a2b632f2
Parents: 1e47518
Author: Wangda Tan 
Authored: Wed Oct 12 13:22:20 2016 -0700
Committer: Wangda Tan 
Committed: Wed Oct 12 13:22:20 2016 -0700

--
 .../hadoop/yarn/conf/YarnConfiguration.java | 21 ++
 .../org/apache/hadoop/yarn/webapp/WebApps.java  |  8 +++
 .../src/main/resources/yarn-default.xml | 20 ++
 .../server/resourcemanager/ResourceManager.java | 68 +++-
 .../src/main/webapp/config/default-config.js|  4 +-
 5 files changed, 55 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/60c88100/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 8d4c14a..7cd8bd5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -266,25 +266,12 @@ public class YarnConfiguration extends Configuration {
   /**
* Enable YARN WebApp V2.
*/
-  public static final String RM_WEBAPP_UI2_ENABLE = RM_PREFIX
+  public static final String YARN_WEBAPP_UI2_ENABLE = "yarn."
   + "webapp.ui2.enable";
-  public static final boolean DEFAULT_RM_WEBAPP_UI2_ENABLE = false;
+  public static final boolean DEFAULT_YARN_WEBAPP_UI2_ENABLE = false;
 
-  /** The address of the RM web ui2 application. */
-  public static final String RM_WEBAPP_UI2_ADDRESS = RM_PREFIX
-  + "webapp.ui2.address";
-
-  public static final int DEFAULT_RM_WEBAPP_UI2_PORT = 8288;
-  public static final String DEFAULT_RM_WEBAPP_UI2_ADDRESS = "0.0.0.0:" +
-  DEFAULT_RM_WEBAPP_UI2_PORT;
-  
-  /** The https address of the RM web ui2 application.*/
-  public static final String RM_WEBAPP_UI2_HTTPS_ADDRESS =
-  RM_PREFIX + "webapp.ui2.https.address";
-
-  public static final int DEFAULT_RM_WEBAPP_UI2_HTTPS_PORT = 8290;
-  public static final String DEFAULT_RM_WEBAPP_UI2_HTTPS_ADDRESS = "0.0.0.0:"
-  + DEFAULT_RM_WEBAPP_UI2_HTTPS_PORT;
+  public static final String YARN_WEBAPP_UI2_WARFILE_PATH = "yarn."
+  + "webapp.ui2.war-file-path";
 
   public static final String RM_RESOURCE_TRACKER_ADDRESS =
 RM_PREFIX + "resource-tracker.address";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/60c88100/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
index 53cb3ee..d3b37d9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
@@ -43,6 +43,7 @@ import 
org.apache.hadoop.security.http.RestCsrfPreventionFilter;
 import org.apache.hadoop.security.http.XFrameOptionsFilter;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
+import org.mortbay.jetty.webapp.WebAppContext;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -369,8 +370,15 @@ public class WebApps {
 }
 
 public WebApp start(WebApp webapp) {
+  return start(webapp, null);
+}
+
+public WebApp start(WebApp webapp, WebAppContext ui2Context) {
   WebApp webApp = build(webapp);
   HttpServer2 httpServer = webApp.httpServer();
+  if (ui2Context != null) {
+httpServer.addContext(ui2Context, true);
+  }
   try {
 httpServer.start();
 LOG.info("Web app " + name + " started at "


hadoop git commit: HDFS-10789. Route webhdfs through the RPC call queue. Contributed by Daryn Sharp and Rushabh S Shah.

2016-10-12 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 e341e5151 -> 5305a392c


HDFS-10789. Route webhdfs through the RPC call queue. Contributed by Daryn 
Sharp and Rushabh S Shah.

(cherry picked from commit 85cd06f6636f295ad1f3bf2a90063f4714c9cca7)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5305a392
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5305a392
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5305a392

Branch: refs/heads/branch-2
Commit: 5305a392c39d298ecf38ca2dfd2526adeee9cd38
Parents: e341e51
Author: Kihwal Lee 
Authored: Wed Oct 12 15:22:51 2016 -0500
Committer: Kihwal Lee 
Committed: Wed Oct 12 15:22:51 2016 -0500

--
 .../org/apache/hadoop/ipc/ExternalCall.java |   9 +-
 .../java/org/apache/hadoop/ipc/TestRPC.java |   6 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   3 +
 .../hdfs/server/namenode/FSNamesystem.java  |  15 +-
 .../hadoop/hdfs/server/namenode/NameNode.java   |  12 +-
 .../hdfs/server/namenode/NameNodeRpcServer.java |   6 +-
 .../web/resources/NamenodeWebHdfsMethods.java   | 150 +++
 .../src/main/resources/hdfs-default.xml |   8 +
 .../server/namenode/TestNamenodeRetryCache.java |  25 +++-
 .../web/resources/TestWebHdfsDataLocality.java  |  25 +++-
 10 files changed, 161 insertions(+), 98 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5305a392/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
index 9b4cbcf..5566136 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.ipc;
 
 import java.io.IOException;
 import java.security.PrivilegedExceptionAction;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.hadoop.ipc.Server.Call;
@@ -37,14 +38,10 @@ public abstract class ExternalCall<T> extends Call {
 
   public abstract UserGroupInformation getRemoteUser();
 
-  public final T get() throws IOException, InterruptedException {
+  public final T get() throws InterruptedException, ExecutionException {
 waitForCompletion();
 if (error != null) {
-  if (error instanceof IOException) {
-throw (IOException)error;
-  } else {
-throw new IOException(error);
-  }
+  throw new ExecutionException(error);
 }
 return result;
   }
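
With this change, get() reports failures as ExecutionException (mirroring 
java.util.concurrent.Future) instead of wrapping them in IOException. A 
minimal caller-side sketch of unwrapping the cause, for callers that still 
want a checked IOException:

```java
import java.io.IOException;
import java.util.concurrent.ExecutionException;

import org.apache.hadoop.ipc.ExternalCall;

public final class ExternalCallClient {
  // restores the pre-patch "throws IOException" behavior by unwrapping
  // the ExecutionException that get() now throws
  static <T> T getChecked(ExternalCall<T> call)
      throws IOException, InterruptedException {
    try {
      return call.get();
    } catch (ExecutionException ee) {
      Throwable cause = ee.getCause();
      if (cause instanceof IOException) {
        throw (IOException) cause;   // surface the original IOException
      }
      throw new IOException(cause);  // wrap anything unexpected
    }
  }
}
```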

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5305a392/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
index 7a57e5a..287f0e5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
@@ -72,6 +72,7 @@ import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.CyclicBarrier;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
@@ -1001,8 +1002,9 @@ public class TestRPC extends TestRpcBase {
   try {
 exceptionCall.get();
 fail("didn't throw");
-  } catch (IOException ioe) {
-assertEquals(expectedIOE.getMessage(), ioe.getMessage());
+  } catch (ExecutionException ee) {
+assertTrue((ee.getCause()) instanceof IOException);
+assertEquals(expectedIOE.getMessage(), ee.getCause().getMessage());
   }
 } finally {
   server.stop();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5305a392/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 

hadoop git commit: HDFS-10789. Route webhdfs through the RPC call queue. Contributed by Daryn Sharp and Rushabh S Shah.

2016-10-12 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk 6476934ae -> 85cd06f66


HDFS-10789. Route webhdfs through the RPC call queue. Contributed by Daryn 
Sharp and Rushabh S Shah.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/85cd06f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/85cd06f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/85cd06f6

Branch: refs/heads/trunk
Commit: 85cd06f6636f295ad1f3bf2a90063f4714c9cca7
Parents: 6476934
Author: Kihwal Lee 
Authored: Wed Oct 12 15:11:42 2016 -0500
Committer: Kihwal Lee 
Committed: Wed Oct 12 15:11:42 2016 -0500

--
 .../org/apache/hadoop/ipc/ExternalCall.java |   9 +-
 .../java/org/apache/hadoop/ipc/TestRPC.java |   6 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   3 +
 .../hdfs/server/namenode/FSNamesystem.java  |  15 +-
 .../hadoop/hdfs/server/namenode/NameNode.java   |  12 +-
 .../hdfs/server/namenode/NameNodeRpcServer.java |   6 +-
 .../web/resources/NamenodeWebHdfsMethods.java   | 150 +++
 .../src/main/resources/hdfs-default.xml |   7 +
 .../server/namenode/TestNamenodeRetryCache.java |  25 +++-
 .../web/resources/TestWebHdfsDataLocality.java  |  25 +++-
 10 files changed, 160 insertions(+), 98 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/85cd06f6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
index 9b4cbcf..5566136 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ExternalCall.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.ipc;
 
 import java.io.IOException;
 import java.security.PrivilegedExceptionAction;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.hadoop.ipc.Server.Call;
@@ -37,14 +38,10 @@ public abstract class ExternalCall<T> extends Call {
 
   public abstract UserGroupInformation getRemoteUser();
 
-  public final T get() throws IOException, InterruptedException {
+  public final T get() throws InterruptedException, ExecutionException {
 waitForCompletion();
 if (error != null) {
-  if (error instanceof IOException) {
-throw (IOException)error;
-  } else {
-throw new IOException(error);
-  }
+  throw new ExecutionException(error);
 }
 return result;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/85cd06f6/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
index 92d9183..72b603a 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
@@ -72,6 +72,7 @@ import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.CyclicBarrier;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
@@ -989,8 +990,9 @@ public class TestRPC extends TestRpcBase {
   try {
 exceptionCall.get();
 fail("didn't throw");
-  } catch (IOException ioe) {
-assertEquals(expectedIOE.getMessage(), ioe.getMessage());
+  } catch (ExecutionException ee) {
+assertTrue((ee.getCause()) instanceof IOException);
+assertEquals(expectedIOE.getMessage(), ee.getCause().getMessage());
   }
 } finally {
   server.stop();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/85cd06f6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 18209ae..10c0ad6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 

[21/52] [abbrv] hadoop git commit: MAPREDUCE-6776. yarn.app.mapreduce.client.job.max-retries should have a more useful default (miklos.szeg...@cloudera.com via rkanter)

2016-10-12 Thread cnauroth
MAPREDUCE-6776. yarn.app.mapreduce.client.job.max-retries should have a more 
useful default (miklos.szeg...@cloudera.com via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f3f37e6f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f3f37e6f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f3f37e6f

Branch: refs/heads/HADOOP-13037
Commit: f3f37e6fb8172f6434e06eb9a137c0c155b3952e
Parents: 2e853be
Author: Robert Kanter 
Authored: Fri Oct 7 14:47:06 2016 -0700
Committer: Robert Kanter 
Committed: Fri Oct 7 14:47:06 2016 -0700

--
 .../apache/hadoop/mapreduce/MRJobConfig.java|  2 +-
 .../src/main/resources/mapred-default.xml   | 10 +++---
 .../apache/hadoop/mapred/JobClientUnitTest.java | 34 
 3 files changed, 34 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f37e6f/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
index 5716404..1325b74 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
@@ -505,7 +505,7 @@ public interface MRJobConfig {
*/
   public static final String MR_CLIENT_JOB_MAX_RETRIES =
   MR_PREFIX + "client.job.max-retries";
-  public static final int DEFAULT_MR_CLIENT_JOB_MAX_RETRIES = 0;
+  public static final int DEFAULT_MR_CLIENT_JOB_MAX_RETRIES = 3;
 
   /**
* How long to wait between jobclient retries on failure

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f37e6f/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
index 73aaa7a..fe29212 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
@@ -1505,12 +1505,12 @@
 
 
   yarn.app.mapreduce.client.job.max-retries
-  0
+  3
   The number of retries the client will make for getJob and
-  dependent calls.  The default is 0 as this is generally only needed for
-  non-HDFS DFS where additional, high level retries are required to avoid
-  spurious failures during the getJob call.  30 is a good value for
-  WASB
+dependent calls.
+This is needed for non-HDFS DFS where additional, high level
+retries are required to avoid spurious failures during the getJob call.
+30 is a good value for WASB
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f37e6f/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/JobClientUnitTest.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/JobClientUnitTest.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/JobClientUnitTest.java
index 4895a5b..e02232d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/JobClientUnitTest.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/JobClientUnitTest.java
@@ -225,10 +225,10 @@ public class JobClientUnitTest {
 
 //To prevent the test from running for a very long time, lower the retry
 JobConf conf = new JobConf();
-conf.set(MRJobConfig.MR_CLIENT_JOB_MAX_RETRIES, "3");
+conf.setInt(MRJobConfig.MR_CLIENT_JOB_MAX_RETRIES, 2);
 
 TestJobClientGetJob client = new TestJobClientGetJob(conf);
-JobID id = new JobID("ajob",1);
+JobID id = new JobID("ajob", 1);
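
Since this change raises the default from 0 to 3, clients that relied on the 
old fail-fast behavior can set the key back explicitly. A minimal sketch, 
using the key and setter shown in the patch:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.MRJobConfig;

public class DisableJobClientRetries {
  public static JobConf noRetries() {
    JobConf conf = new JobConf();
    // restore the pre-patch default of no getJob retries
    conf.setInt(MRJobConfig.MR_CLIENT_JOB_MAX_RETRIES, 0);
    return conf;
  }
}
```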
  

[27/52] [abbrv] hadoop git commit: HADOOP-12579. Deprecate WriteableRPCEngine. Contributed by Wei Zhou

2016-10-12 Thread cnauroth
HADOOP-12579. Deprecate WriteableRPCEngine. Contributed by Wei Zhou


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ec0b7071
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ec0b7071
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ec0b7071

Branch: refs/heads/HADOOP-13037
Commit: ec0b70716c8e6509654a3975d3ca139a0144cc8e
Parents: 4d10621
Author: Kai Zheng 
Authored: Sun Oct 9 15:07:03 2016 +0600
Committer: Kai Zheng 
Committed: Sun Oct 9 15:07:03 2016 +0600

--
 .../src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java  | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec0b7071/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
index a9dbb41..3d6d461 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
@@ -46,6 +46,7 @@ import org.apache.htrace.core.Tracer;
 
 /** An RpcEngine implementation for Writable data. */
 @InterfaceStability.Evolving
+@Deprecated
 public class WritableRpcEngine implements RpcEngine {
   private static final Log LOG = LogFactory.getLog(RPC.class);
   
@@ -331,6 +332,7 @@ public class WritableRpcEngine implements RpcEngine {
 
 
   /** An RPC Server. */
+  @Deprecated
   public static class Server extends RPC.Server {
 /** 
  * Construct an RPC server.
@@ -443,7 +445,8 @@ public class WritableRpcEngine implements RpcEngine {
 value = value.substring(0, 55)+"...";
   LOG.info(value);
 }
-
+
+@Deprecated
 static class WritableRpcInvoker implements RpcInvoker {
 
  @Override
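
For context, a minimal sketch of moving a protocol off the now-deprecated 
engine; MyProtocolPB is a hypothetical protocol interface, and 
RPC.setProtocolEngine is the existing per-Configuration selection hook:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;

public class EngineSelection {
  // hypothetical protobuf-based protocol interface, for illustration only
  interface MyProtocolPB { }

  public static void useProtobuf(Configuration conf) {
    // route MyProtocolPB through ProtobufRpcEngine instead of the
    // deprecated WritableRpcEngine
    RPC.setProtocolEngine(conf, MyProtocolPB.class, ProtobufRpcEngine.class);
  }
}
```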





[19/52] [abbrv] hadoop git commit: HDFS-10979. Pass IIP for FSDirDeleteOp methods. Contributed by Daryn Sharp.

2016-10-12 Thread cnauroth
HDFS-10979. Pass IIP for FSDirDeleteOp methods. Contributed by Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3565c9af
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3565c9af
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3565c9af

Branch: refs/heads/HADOOP-13037
Commit: 3565c9af17ab05bf9e7f68b71b6c6850df772bb9
Parents: 69620f95
Author: Kihwal Lee 
Authored: Fri Oct 7 14:14:47 2016 -0500
Committer: Kihwal Lee 
Committed: Fri Oct 7 14:15:59 2016 -0500

--
 .../hdfs/server/namenode/FSDirDeleteOp.java | 63 ++--
 .../hdfs/server/namenode/FSEditLogLoader.java   | 11 ++--
 .../hdfs/server/namenode/FSNamesystem.java  |  2 +-
 3 files changed, 38 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3565c9af/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
index 21ee3ce..328ce79 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
@@ -55,7 +55,7 @@ class FSDirDeleteOp {
 FSNamesystem fsn = fsd.getFSNamesystem();
 fsd.writeLock();
 try {
-  if (deleteAllowed(iip, iip.getPath()) ) {
+  if (deleteAllowed(iip)) {
 List snapshottableDirs = new ArrayList<>();
 FSDirSnapshotOp.checkSnapshot(fsd, iip, snapshottableDirs);
 ReclaimContext context = new ReclaimContext(
@@ -98,20 +98,24 @@ class FSDirDeleteOp {
 FSDirectory fsd = fsn.getFSDirectory();
 FSPermissionChecker pc = fsd.getPermissionChecker();
 
-final INodesInPath iip = fsd.resolvePathForWrite(pc, src, false);
-src = iip.getPath();
-if (!recursive && fsd.isNonEmptyDirectory(iip)) {
-  throw new PathIsNotEmptyDirectoryException(src + " is non empty");
+if (FSDirectory.isExactReservedName(src)) {
+  throw new InvalidPathException(src);
 }
+
+final INodesInPath iip = fsd.resolvePathForWrite(pc, src, false);
 if (fsd.isPermissionEnabled()) {
   fsd.checkPermission(pc, iip, false, null, FsAction.WRITE, null,
   FsAction.ALL, true);
 }
-if (recursive && fsd.isNonEmptyDirectory(iip)) {
-  checkProtectedDescendants(fsd, src);
+if (fsd.isNonEmptyDirectory(iip)) {
+  if (!recursive) {
+throw new PathIsNotEmptyDirectoryException(
+iip.getPath() + " is non empty");
+  }
+  checkProtectedDescendants(fsd, iip);
 }
 
-return deleteInternal(fsn, src, iip, logRetryCache);
+return deleteInternal(fsn, iip, logRetryCache);
   }
 
   /**
@@ -126,17 +130,14 @@ class FSDirDeleteOp {
* @param src a string representation of a path to an inode
* @param mtime the time the inode is removed
*/
-  static void deleteForEditLog(FSDirectory fsd, String src, long mtime)
+  static void deleteForEditLog(FSDirectory fsd, INodesInPath iip, long mtime)
   throws IOException {
 assert fsd.hasWriteLock();
 FSNamesystem fsn = fsd.getFSNamesystem();
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List removedINodes = new ChunkedArrayList<>();
 List removedUCFiles = new ChunkedArrayList<>();
-
-final INodesInPath iip = fsd.getINodesInPath4Write(
-FSDirectory.normalizePath(src), false);
-if (!deleteAllowed(iip, src)) {
+if (!deleteAllowed(iip)) {
   return;
 }
 List snapshottableDirs = new ArrayList<>();
@@ -162,7 +163,6 @@ class FSDirDeleteOp {
* 
* For small directory or file the deletion is done in one shot.
* @param fsn namespace
-   * @param src path name to be deleted
* @param iip the INodesInPath instance containing all the INodes for the 
path
* @param logRetryCache whether to record RPC ids in editlog for retry cache
*  rebuilding
@@ -170,15 +170,11 @@ class FSDirDeleteOp {
* @throws IOException
*/
   static BlocksMapUpdateInfo deleteInternal(
-  FSNamesystem fsn, String src, INodesInPath iip, boolean logRetryCache)
+  FSNamesystem fsn, INodesInPath iip, boolean logRetryCache)
   throws IOException {
 assert fsn.hasWriteLock();
 if (NameNode.stateChangeLog.isDebugEnabled()) {
-  NameNode.stateChangeLog.debug("DIR* NameSystem.delete: " + src);
-}
-
-if (FSDirectory.isExactReservedName(src)) {
-  throw new 

[30/52] [abbrv] hadoop git commit: HDFS-10895. Update HDFS Erasure Coding doc to add how to use ISA-L based coder. Contributed by Sammi Chen

2016-10-12 Thread cnauroth
HDFS-10895. Update HDFS Erasure Coding doc to add how to use ISA-L based coder. 
Contributed by Sammi Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/af50da32
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/af50da32
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/af50da32

Branch: refs/heads/HADOOP-13037
Commit: af50da3298f92a52cc20d5f6aab6f6ad8134efbd
Parents: 3d59b18
Author: Kai Zheng 
Authored: Mon Oct 10 11:55:49 2016 +0600
Committer: Kai Zheng 
Committed: Mon Oct 10 11:55:49 2016 +0600

--
 .../src/site/markdown/HDFSErasureCoding.md   | 15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/af50da32/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
index 18b3a25..627260f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
@@ -22,6 +22,7 @@ HDFS Erasure Coding
 * [Deployment](#Deployment)
 * [Cluster and hardware 
configuration](#Cluster_and_hardware_configuration)
 * [Configuration keys](#Configuration_keys)
+* [Enable Intel ISA-L](#Enable_Intel_ISA-L)
 * [Administrative commands](#Administrative_commands)
 
 Purpose
@@ -73,6 +74,9 @@ Architecture
 
 There are three policies currently being supported: RS-DEFAULT-3-2-64k, 
RS-DEFAULT-6-3-64k and RS-LEGACY-6-3-64k, all with a default cell size of 64KB. 
The system default policy is RS-DEFAULT-6-3-64k, which uses the default schema 
RS_6_3_SCHEMA with a cell size of 64KB.
 
+ *  **Intel ISA-L**
+Intel ISA-L stands for Intel Intelligent Storage Acceleration Library. 
ISA-L is a collection of optimized low-level functions used primarily in 
storage applications. It includes fast block Reed-Solomon erasure codes 
optimized for the Intel AVX and AVX2 instruction sets.
+HDFS EC can leverage this open-source library to accelerate encoding and 
decoding calculations. ISA-L supports most major operating systems, including 
Linux and Windows. By default, ISA-L is not enabled in HDFS.
 
 Deployment
 --
@@ -98,7 +102,7 @@ Deployment
   `io.erasurecode.codec.rs-default.rawcoder` for the default RS codec,
   `io.erasurecode.codec.rs-legacy.rawcoder` for the legacy RS codec,
   `io.erasurecode.codec.xor.rawcoder` for the XOR codec.
-  The default implementations for all of these codecs are pure Java.
+  The default implementations for all of these codecs are pure Java. For the 
default RS codec, there is also a native implementation which leverages the 
Intel ISA-L library to improve encoding and decoding performance. Please refer 
to the section "Enable Intel ISA-L" for more detailed information.
 
   Erasure coding background recovery work on the DataNodes can also be tuned 
via the following configuration parameters:
 
@@ -106,6 +110,15 @@ Deployment
   1. `dfs.datanode.stripedread.threads` - Number of concurrent reader threads. 
Default value is 20 threads.
   1. `dfs.datanode.stripedread.buffer.size` - Buffer size for reader service. 
Default value is 256KB.
 
+### Enable Intel ISA-L
+
+  The HDFS native implementation of the default RS codec leverages the Intel 
ISA-L library to improve encoding and decoding performance. To enable and use 
Intel ISA-L, there are three steps.
+  1. Build the ISA-L library. Please refer to the official site 
"https://github.com/01org/isa-l/" for detailed information.
+  2. Build Hadoop with ISA-L support. Please refer to the "Intel ISA-L build 
options" section in the "Build instructions for Hadoop" (BUILDING.txt) 
document. Use -Dbundle.isal to copy the contents of the isal.lib directory into 
the final tar file. Deploy Hadoop with the tar file. Make sure the ISA-L 
library is available on both the HDFS client and the DataNodes.
+  3. Configure the `io.erasurecode.codec.rs-default.rawcoder` key with value 
`org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory` on 
the HDFS client and DataNodes.
+
+  To check whether the ISA-L library is enabled, run the `hadoop checknative` 
command. It will report whether ISA-L is enabled or not.
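
A minimal sketch of step 3 done programmatically; setting the key on a 
Configuration is equivalent to the usual core-site.xml property entry, using 
the key and factory class named above:

```java
import org.apache.hadoop.conf.Configuration;

public class EnableIsalCoder {
  public static Configuration withNativeRs() {
    Configuration conf = new Configuration();
    // same key/value as step 3 above
    conf.set("io.erasurecode.codec.rs-default.rawcoder",
        "org.apache.hadoop.io.erasurecode.rawcoder."
            + "NativeRSRawErasureCoderFactory");
    return conf;
  }
}
```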
+
 ### Administrative commands
 
   HDFS provides an `erasurecode` subcommand to perform administrative commands 
related to erasure coding.





[43/52] [abbrv] hadoop git commit: HADOOP-13697. LogLevel#main should not throw exception if no arguments. Contributed by Mingliang Liu

2016-10-12 Thread cnauroth
HADOOP-13697. LogLevel#main should not throw exception if no arguments. 
Contributed by Mingliang Liu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2fb392a5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2fb392a5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2fb392a5

Branch: refs/heads/HADOOP-13037
Commit: 2fb392a587d288b628936ca6d18fabad04afc585
Parents: 809cfd2
Author: Mingliang Liu 
Authored: Fri Oct 7 14:05:40 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Oct 11 10:57:08 2016 -0700

--
 .../src/main/java/org/apache/hadoop/log/LogLevel.java   | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2fb392a5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
index 4fa839f..79eae12 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
@@ -47,15 +47,17 @@ import org.apache.hadoop.http.HttpServer2;
 import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
 import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
 import org.apache.hadoop.security.ssl.SSLFactory;
+import org.apache.hadoop.util.GenericOptionsParser;
 import org.apache.hadoop.util.ServletUtil;
 import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
 
 /**
  * Change log level in runtime.
  */
 @InterfaceStability.Evolving
 public class LogLevel {
-  public static final String USAGES = "\nUsage: General options are:\n"
+  public static final String USAGES = "\nUsage: Command options are:\n"
   + "\t[-getlevel   [-protocol (http|https)]\n"
   + "\t[-setlevel"
   + "[-protocol (http|https)]\n";
@@ -67,7 +69,7 @@ public class LogLevel {
*/
   public static void main(String[] args) throws Exception {
 CLI cli = new CLI(new Configuration());
-System.exit(cli.run(args));
+System.exit(ToolRunner.run(cli, args));
   }
 
   /**
@@ -81,6 +83,7 @@ public class LogLevel {
 
   private static void printUsage() {
 System.err.println(USAGES);
+GenericOptionsParser.printGenericCommandUsage(System.err);
   }
 
   public static boolean isValidProtocol(String protocol) {
@@ -107,7 +110,7 @@ public class LogLevel {
 sendLogLevelRequest();
   } catch (HadoopIllegalArgumentException e) {
 printUsage();
-throw e;
+return -1;
   }
   return 0;
 }
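
The shape of the change is the standard ToolRunner pattern: generic options 
(-D, -conf, ...) are parsed before the tool runs, and a usage error returns a 
nonzero exit code instead of throwing. A minimal, generic sketch with a 
hypothetical tool:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyTool extends Configured implements Tool {
  @Override
  public int run(String[] args) {
    if (args.length == 0) {
      System.err.println("usage: mytool <arg>");
      return -1;        // usage error: nonzero exit, no exception
    }
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new MyTool(), args));
  }
}
```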





[05/52] [abbrv] hadoop git commit: HADOOP-13678 Update jackson from 1.9.13 to 2.x in hadoop-tools. Contributed by Akira Ajisaka.

2016-10-12 Thread cnauroth
HADOOP-13678 Update jackson from 1.9.13 to 2.x in hadoop-tools. Contributed by 
Akira Ajisaka.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2cc841f1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2cc841f1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2cc841f1

Branch: refs/heads/HADOOP-13037
Commit: 2cc841f16ec9aa5336495fc20ee781a1276fddc5
Parents: 4d2f380
Author: Steve Loughran 
Authored: Thu Oct 6 16:30:26 2016 +0100
Committer: Steve Loughran 
Committed: Thu Oct 6 16:31:00 2016 +0100

--
 hadoop-tools/hadoop-azure-datalake/pom.xml  |  4 +++
 ...ClientCredentialBasedAccesTokenProvider.java |  5 +--
 hadoop-tools/hadoop-azure/pom.xml   |  6 +++-
 .../hadoop/fs/azure/NativeAzureFileSystem.java  | 16 -
 hadoop-tools/hadoop-openstack/pom.xml   | 18 +-
 .../swift/auth/ApiKeyAuthenticationRequest.java |  2 +-
 .../fs/swift/auth/entities/AccessToken.java |  2 +-
 .../hadoop/fs/swift/auth/entities/Catalog.java  |  2 +-
 .../hadoop/fs/swift/auth/entities/Endpoint.java |  2 +-
 .../hadoop/fs/swift/auth/entities/Tenant.java   |  2 +-
 .../hadoop/fs/swift/auth/entities/User.java |  2 +-
 .../snative/SwiftNativeFileSystemStore.java |  3 +-
 .../apache/hadoop/fs/swift/util/JSONUtil.java   | 24 +
 hadoop-tools/hadoop-rumen/pom.xml   |  9 +
 .../apache/hadoop/tools/rumen/Anonymizer.java   | 23 ++---
 .../hadoop/tools/rumen/HadoopLogsAnalyzer.java  |  3 +-
 .../tools/rumen/JsonObjectMapperParser.java | 17 -
 .../tools/rumen/JsonObjectMapperWriter.java | 21 +---
 .../apache/hadoop/tools/rumen/LoggedJob.java|  2 +-
 .../hadoop/tools/rumen/LoggedLocation.java  |  2 +-
 .../tools/rumen/LoggedNetworkTopology.java  |  2 +-
 .../rumen/LoggedSingleRelativeRanking.java  |  4 +--
 .../apache/hadoop/tools/rumen/LoggedTask.java   |  2 +-
 .../hadoop/tools/rumen/LoggedTaskAttempt.java   |  2 +-
 .../hadoop/tools/rumen/datatypes/NodeName.java  |  2 +-
 .../rumen/serializers/BlockingSerializer.java   | 10 +++---
 .../DefaultAnonymizingRumenSerializer.java  |  8 ++---
 .../serializers/DefaultRumenSerializer.java |  9 ++---
 .../serializers/ObjectStringSerializer.java | 10 +++---
 .../apache/hadoop/tools/rumen/state/State.java  |  2 +-
 .../tools/rumen/state/StateDeserializer.java| 14 
 .../hadoop/tools/rumen/state/StatePool.java | 36 
 .../hadoop/tools/rumen/TestHistograms.java  | 13 +++
 hadoop-tools/hadoop-sls/pom.xml |  4 +++
 .../hadoop/yarn/sls/RumenToSLSConverter.java|  8 ++---
 .../org/apache/hadoop/yarn/sls/SLSRunner.java   |  7 ++--
 .../apache/hadoop/yarn/sls/utils/SLSUtils.java  | 10 +++---
 37 files changed, 151 insertions(+), 157 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cc841f1/hadoop-tools/hadoop-azure-datalake/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure-datalake/pom.xml 
b/hadoop-tools/hadoop-azure-datalake/pom.xml
index c07a1d7..e1a0bfe 100644
--- a/hadoop-tools/hadoop-azure-datalake/pom.xml
+++ b/hadoop-tools/hadoop-azure-datalake/pom.xml
@@ -181,5 +181,9 @@
   2.4.0
   test
 
+
+  com.fasterxml.jackson.core
+  jackson-databind
+
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cc841f1/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
--
diff --git 
a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
 
b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
index 6dfc593..11d07e7 100644
--- 
a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
+++ 
b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
@@ -18,6 +18,9 @@
  */
 package org.apache.hadoop.hdfs.web.oauth2;
 
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import com.fasterxml.jackson.databind.ObjectReader;
 import com.squareup.okhttp.OkHttpClient;
 import com.squareup.okhttp.Request;
 import com.squareup.okhttp.RequestBody;
@@ -29,8 +32,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.web.URLConnectionFactory;
 import org.apache.hadoop.util.Timer;
 import org.apache.http.HttpStatus;
-import org.codehaus.jackson.map.ObjectMapper;
-import 
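
The hunk is cut off above, but the shape of the migration is uniform across 
these files: swap the org.codehaus.jackson imports for their 
com.fasterxml.jackson.databind equivalents; the databind API surface is 
largely unchanged. A minimal sketch:

```java
// old (jackson 1.x): import org.codehaus.jackson.map.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectMapper;  // jackson 2.x

import java.util.Map;

public class JacksonMigration {
  public static Map<?, ?> parse(String json) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    return mapper.readValue(json, Map.class);        // same call as in 1.x
  }
}
```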

[14/52] [abbrv] hadoop git commit: HADOOP-12611. TestZKSignerSecretProvider#testMultipleInit occasionally fail (ebadger via rkanter)

2016-10-12 Thread cnauroth
HADOOP-12611. TestZKSignerSecretProvider#testMultipleInit occasionally fail 
(ebadger via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c183b9de
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c183b9de
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c183b9de

Branch: refs/heads/HADOOP-13037
Commit: c183b9de8d072a35dcde96a20b1550981f886e86
Parents: 459a483
Author: Robert Kanter 
Authored: Fri Oct 7 09:33:24 2016 -0700
Committer: Robert Kanter 
Committed: Fri Oct 7 09:33:31 2016 -0700

--
 .../util/RolloverSignerSecretProvider.java  |   2 +-
 .../util/TestZKSignerSecretProvider.java| 221 +--
 2 files changed, 100 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c183b9de/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java
index fda5572..66b2fde 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java
@@ -38,7 +38,7 @@ import org.slf4j.LoggerFactory;
 public abstract class RolloverSignerSecretProvider
 extends SignerSecretProvider {
 
-  private static Logger LOG = LoggerFactory.getLogger(
+  static Logger LOG = LoggerFactory.getLogger(
 RolloverSignerSecretProvider.class);
   /**
* Stores the currently valid secrets.  The current secret is the 0th element

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c183b9de/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
index 8211314..5e640bb 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestZKSignerSecretProvider.java
@@ -17,7 +17,12 @@ import java.util.Arrays;
 import java.util.Properties;
 import java.util.Random;
 import javax.servlet.ServletContext;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.curator.test.TestingServer;
+import org.apache.log4j.Level;
+import org.apache.log4j.LogManager;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -25,7 +30,6 @@ import org.junit.Test;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.timeout;
-import static org.mockito.Mockito.times;
 import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
@@ -34,9 +38,14 @@ public class TestZKSignerSecretProvider {
   private TestingServer zkServer;
 
   // rollover every 2 sec
-  private final int timeout = 4000;
+  private final int timeout = 100;
   private final long rolloverFrequency = timeout / 2;
 
+  static final Log LOG = LogFactory.getLog(TestZKSignerSecretProvider.class);
+  {
+LogManager.getLogger( RolloverSignerSecretProvider.LOG.getName() 
).setLevel(Level.DEBUG);
+  }
+
   @Before
   public void setup() throws Exception {
 zkServer = new TestingServer();
@@ -60,8 +69,8 @@ public class TestZKSignerSecretProvider {
 byte[] secret2 = Long.toString(rand.nextLong()).getBytes();
 byte[] secret1 = Long.toString(rand.nextLong()).getBytes();
 byte[] secret3 = Long.toString(rand.nextLong()).getBytes();
-ZKSignerSecretProvider secretProvider =
-spy(new ZKSignerSecretProvider(seed));
+MockZKSignerSecretProvider secretProvider =
+spy(new MockZKSignerSecretProvider(seed));
 Properties config = new Properties();
 config.setProperty(
 ZKSignerSecretProvider.ZOOKEEPER_CONNECTION_STRING,
@@ -77,7 +86,8 @@ public class TestZKSignerSecretProvider {
   Assert.assertEquals(2, allSecrets.length);
   Assert.assertArrayEquals(secret1, allSecrets[0]);
   

[12/52] [abbrv] hadoop git commit: HADOOP-12977 s3a to handle delete("/", true) robustly. Contributed by Steve Loughran.

2016-10-12 Thread cnauroth
HADOOP-12977 s3a to handle delete("/", true) robustly. Contributed by Steve 
Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ebd4f39a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ebd4f39a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ebd4f39a

Branch: refs/heads/HADOOP-13037
Commit: ebd4f39a393e5fa9a810c6a36b749549229a53df
Parents: bf37217
Author: Steve Loughran 
Authored: Fri Oct 7 12:51:40 2016 +0100
Committer: Steve Loughran 
Committed: Fri Oct 7 12:51:40 2016 +0100

--
 .../src/site/markdown/filesystem/filesystem.md  | 77 +++-
 .../apache/hadoop/fs/FileContextURIBase.java|  4 +-
 .../AbstractContractRootDirectoryTest.java  | 34 -
 .../hadoop/fs/contract/ContractTestUtils.java   | 39 ++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 77 
 5 files changed, 197 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebd4f39a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
index 1587842..2c9dd5d 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
@@ -669,19 +669,40 @@ exists in the metadata, but no copies of any of its blocks 
can be located;
 
 ### `boolean delete(Path p, boolean recursive)`
 
+Delete a path, be it a file, symbolic link or directory. The
+`recursive` flag indicates whether a recursive delete should take place —if
+unset then a non-empty directory cannot be deleted.
+
+Except in the special case of the root directory, if this API call
+completed successfully then there is nothing at the end of the path.
+That is: the outcome is desired. The return flag simply tells the caller
+whether or not any change was made to the state of the filesystem.
+
+*Note*: many uses of this method surround it with checks for the return value 
being false, raising an exception if so. For example
+
+```java
+if (!fs.delete(path, true)) throw new IOException("Could not delete " + path);
+```
+
+This pattern is not needed. Code SHOULD just call `delete(path, recursive)` and
+assume the destination is no longer present —except in the special case of 
root
+directories, which will always remain (see below for special coverage of root 
directories).
+
  Preconditions
 
-A directory with children and recursive == false cannot be deleted
+A directory with children and `recursive == False` cannot be deleted
 
 if isDir(FS, p) and not recursive and (children(FS, p) != {}) : raise 
IOException
 
+(HDFS raises `PathIsNotEmptyDirectoryException` here.)
 
  Postconditions
 
 
 # Nonexistent path
 
-If the file does not exist the FS state does not change
+If the file does not exist the filesystem state does not change
 
 if not exists(FS, p):
 FS' = FS
@@ -700,7 +721,7 @@ A path referring to a file is removed, return value: `True`
 result = True
 
 
-# Empty root directory
+# Empty root directory, `recursive == False`
 
 Deleting an empty root does not change the filesystem state
 and may return true or false.
@@ -711,7 +732,10 @@ and may return true or false.
 
 There is no consistent return code from an attempt to delete the root 
directory.
 
-# Empty (non-root) directory
+Implementations SHOULD return true; this avoids code which checks for a false
+return value from overreacting.
+
+# Empty (non-root) directory `recursive == False`
 
 Deleting an empty directory that is not root will remove the path from the FS 
and
 return true.
@@ -721,26 +745,41 @@ return true.
 result = True
 
 
-# Recursive delete of root directory
+# Recursive delete of non-empty root directory
 
 Deleting a root path with children and `recursive==True`
  can do one of two things.
 
-The POSIX model assumes that if the user has
+1. The POSIX model assumes that if the user has
 the correct permissions to delete everything,
 they are free to do so (resulting in an empty filesystem).
 
-if isDir(FS, p) and isRoot(p) and recursive :
-FS' = ({["/"]}, {}, {}, {})
-result = True
+if isDir(FS, p) and isRoot(p) and recursive :
+FS' = ({["/"]}, {}, {}, {})
+result = True
 
-In contrast, HDFS never permits the deletion of the root of a filesystem; the
-filesystem can be taken offline and reformatted if an empty
+1. HDFS never permits the deletion of the root of a filesystem; 
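
The hunk is cut off above. As a small illustration of the calling pattern the 
updated specification recommends (delete and assume the path is gone on 
success, rather than treating a false return as an error), a minimal sketch:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteUsage {
  static void remove(FileSystem fs, Path path) throws IOException {
    // on success there is nothing at 'path' (root directories excepted);
    // the boolean return only says whether the filesystem state changed
    fs.delete(path, true);
  }
}
```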

[39/52] [abbrv] hadoop git commit: HDFS-10637. Modifications to remove the assumption that FsVolumes are backed by java.io.File. (Virajith Jalaparti via lei)

2016-10-12 Thread cnauroth
http://git-wip-us.apache.org/repos/asf/hadoop/blob/96b12662/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
index 57fab66..76af724 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
@@ -23,11 +23,13 @@ import java.io.FileOutputStream;
 import java.io.FilenameFilter;
 import java.io.IOException;
 import java.io.OutputStreamWriter;
+import java.net.URI;
 import java.nio.channels.ClosedChannelException;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 import java.nio.file.StandardCopyOption;
 import java.util.Collections;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
@@ -56,13 +58,18 @@ import org.apache.hadoop.hdfs.server.datanode.DatanodeUtil;
 import org.apache.hadoop.hdfs.server.datanode.LocalReplica;
 import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
+import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
 import org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException;
 import org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder;
 import org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline;
 import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
+import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.BlockDirFilter;
+import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.ReportCompiler;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker.RamDiskReplica;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.util.CloseableReferenceCount;
@@ -102,8 +109,14 @@ public class FsVolumeImpl implements FsVolumeSpi {
   private final StorageType storageType;
   private final Map bpSlices
   = new ConcurrentHashMap();
+
+  // Refers to the base StorageLocation used to construct this volume
+  // (i.e., does not include STORAGE_DIR_CURRENT in
+  // /STORAGE_DIR_CURRENT/)
+  private final StorageLocation storageLocation;
+
   private final File currentDir;// /current
-  private final DF usage;   
+  private final DF usage;
   private final long reserved;
   private CloseableReferenceCount reference = new CloseableReferenceCount();
 
@@ -124,19 +137,25 @@ public class FsVolumeImpl implements FsVolumeSpi {
*/
   protected ThreadPoolExecutor cacheExecutor;
   
-  FsVolumeImpl(FsDatasetImpl dataset, String storageID, File currentDir,
-  Configuration conf, StorageType storageType) throws IOException {
+  FsVolumeImpl(FsDatasetImpl dataset, String storageID, StorageDirectory sd,
+  Configuration conf) throws IOException {
+
+if (sd.getStorageLocation() == null) {
+  throw new IOException("StorageLocation specified for storage directory " 
+
+  sd + " is null");
+}
 this.dataset = dataset;
 this.storageID = storageID;
+this.reservedForReplicas = new AtomicLong(0L);
+this.storageLocation = sd.getStorageLocation();
+this.currentDir = sd.getCurrentDir();
+File parent = currentDir.getParentFile();
+this.usage = new DF(parent, conf);
+this.storageType = storageLocation.getStorageType();
 this.reserved = conf.getLong(DFSConfigKeys.DFS_DATANODE_DU_RESERVED_KEY
 + "." + StringUtils.toLowerCase(storageType.toString()), conf.getLong(
 DFSConfigKeys.DFS_DATANODE_DU_RESERVED_KEY,
 DFSConfigKeys.DFS_DATANODE_DU_RESERVED_DEFAULT));
-this.reservedForReplicas = new AtomicLong(0L);
-this.currentDir = currentDir;
-File parent = currentDir.getParentFile();
-this.usage = new DF(parent, conf);
-this.storageType = storageType;
 this.configuredCapacity = -1;
 this.conf = conf;
 cacheExecutor = initializeCacheExecutor(parent);
@@ -285,19 +304,20 @@ public class FsVolumeImpl implements FsVolumeSpi {
 return true;
   }
 
+  @VisibleForTesting
   File getCurrentDir() {
 return currentDir;
   }
   
-  File getRbwDir(String bpid) throws IOException {
+  protected File 

[41/52] [abbrv] hadoop git commit: YARN-5551. Ignore file backed pages from memory computation when smaps is enabled. Contributed by Rajesh Balamohan

2016-10-12 Thread cnauroth
YARN-5551. Ignore file backed pages from memory computation when smaps is 
enabled. Contributed by Rajesh Balamohan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ecb51b85
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ecb51b85
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ecb51b85

Branch: refs/heads/HADOOP-13037
Commit: ecb51b857ac7faceff981b2b6f22ea1af0d42ab1
Parents: 96b1266
Author: Jason Lowe 
Authored: Tue Oct 11 15:12:43 2016 +
Committer: Jason Lowe 
Committed: Tue Oct 11 15:12:43 2016 +

--
 .../yarn/util/ProcfsBasedProcessTree.java   | 26 ++-
 .../yarn/util/TestProcfsBasedProcessTree.java   | 46 ++--
 2 files changed, 39 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb51b85/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
index 80d49c3..29bc277 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
@@ -406,15 +406,14 @@ public class ProcfsBasedProcessTree extends 
ResourceCalculatorProcessTree {
 continue;
   }
 
-  total +=
-  Math.min(info.sharedDirty, info.pss) + info.privateDirty
-  + info.privateClean;
+  // Account for anonymous to know the amount of
+  // memory reclaimable by killing the process
+  total += info.anonymous;
+
   if (LOG.isDebugEnabled()) {
 LOG.debug(" total(" + olderThanAge + "): PID : " + p.getPid()
-+ ", SharedDirty : " + info.sharedDirty + ", PSS : "
-+ info.pss + ", Private_Dirty : " + info.privateDirty
-+ ", Private_Clean : " + info.privateClean + ", total : "
-+ (total * KB_TO_BYTES));
++ ", info : " + info.toString()
++ ", total : " + (total * KB_TO_BYTES));
   }
 }
   }
@@ -877,6 +876,7 @@ public class ProcfsBasedProcessTree extends 
ResourceCalculatorProcessTree {
 private int sharedDirty;
 private int privateClean;
 private int privateDirty;
+private int anonymous;
 private int referenced;
 private String regionName;
 private String permission;
@@ -929,6 +929,10 @@ public class ProcfsBasedProcessTree extends 
ResourceCalculatorProcessTree {
   return referenced;
 }
 
+public int getAnonymous() {
+  return anonymous;
+}
+
 public void setMemInfo(String key, String value) {
   MemInfo info = MemInfo.getMemInfoByName(key);
   int val = 0;
@@ -969,6 +973,9 @@ public class ProcfsBasedProcessTree extends 
ResourceCalculatorProcessTree {
   case REFERENCED:
 referenced = val;
 break;
+  case ANONYMOUS:
+anonymous = val;
+break;
   default:
 break;
   }
@@ -999,10 +1006,7 @@ public class ProcfsBasedProcessTree extends 
ResourceCalculatorProcessTree {
 .append(MemInfo.REFERENCED.name + ":" + this.getReferenced())
 .append(" kB\n");
   sb.append("\t")
-.append(MemInfo.PRIVATE_DIRTY.name + ":" + this.getPrivateDirty())
-.append(" kB\n");
-  sb.append("\t")
-.append(MemInfo.PRIVATE_DIRTY.name + ":" + this.getPrivateDirty())
+.append(MemInfo.ANONYMOUS.name + ":" + this.getAnonymous())
 .append(" kB\n");
   return sb.toString();
 }
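
A minimal standalone sketch of the accounting this change switches to: sum the 
"Anonymous" field across a process's smaps regions, since anonymous pages 
approximate the memory reclaimable by killing the process. This is not the 
Hadoop implementation; it only assumes the standard /proc/<pid>/smaps line 
format ("Anonymous:  <n> kB").

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SmapsAnonymous {
  public static long anonymousKb(String pid) throws IOException {
    long total = 0;
    try (BufferedReader r = new BufferedReader(
        new FileReader("/proc/" + pid + "/smaps"))) {
      String line;
      while ((line = r.readLine()) != null) {
        if (line.startsWith("Anonymous:")) {
          // e.g. "Anonymous:        1024 kB"
          String[] parts = line.trim().split("\\s+");
          total += Long.parseLong(parts[1]);
        }
      }
    }
    return total; // estimate of memory freed if the process is killed
  }
}
```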

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb51b85/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java
index fa4e8c8..841d333 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java
+++ 

[34/52] [abbrv] hadoop git commit: HADOOP-13669. KMS Server should log exceptions before throwing. Contributed by Suraj Acharya.

2016-10-12 Thread cnauroth
HADOOP-13669. KMS Server should log exceptions before throwing. Contributed by 
Suraj Acharya.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/65912e40
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/65912e40
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/65912e40

Branch: refs/heads/HADOOP-13037
Commit: 65912e4027548868ebefd8ee36eb00fa889704a7
Parents: 0306007
Author: Xiao Chen 
Authored: Mon Oct 10 12:49:19 2016 -0700
Committer: Xiao Chen 
Committed: Mon Oct 10 12:51:12 2016 -0700

--
 .../hadoop/crypto/key/kms/server/KMS.java   | 711 ++-
 1 file changed, 392 insertions(+), 319 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/65912e40/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
index 371f3f5..d8755ec 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
@@ -104,89 +104,101 @@ public class KMS {
   @Produces(MediaType.APPLICATION_JSON)
   @SuppressWarnings("unchecked")
   public Response createKey(Map jsonKey) throws Exception {
-LOG.trace("Entering createKey Method.");
-KMSWebApp.getAdminCallsMeter().mark();
-UserGroupInformation user = HttpUserGroupInformation.get();
-final String name = (String) jsonKey.get(KMSRESTConstants.NAME_FIELD);
-KMSClientProvider.checkNotEmpty(name, KMSRESTConstants.NAME_FIELD);
-assertAccess(KMSACLs.Type.CREATE, user, KMSOp.CREATE_KEY, name);
-String cipher = (String) jsonKey.get(KMSRESTConstants.CIPHER_FIELD);
-final String material = (String) 
jsonKey.get(KMSRESTConstants.MATERIAL_FIELD);
-int length = (jsonKey.containsKey(KMSRESTConstants.LENGTH_FIELD))
- ? (Integer) jsonKey.get(KMSRESTConstants.LENGTH_FIELD) : 0;
-String description = (String)
-jsonKey.get(KMSRESTConstants.DESCRIPTION_FIELD);
-LOG.debug("Creating key with name {}, cipher being used{}, " +
-"length of key {}, description of key {}", name, cipher,
- length, description);
-Map attributes = (Map)
-jsonKey.get(KMSRESTConstants.ATTRIBUTES_FIELD);
-if (material != null) {
-  assertAccess(KMSACLs.Type.SET_KEY_MATERIAL, user,
-  KMSOp.CREATE_KEY, name);
-}
-final KeyProvider.Options options = new KeyProvider.Options(
-KMSWebApp.getConfiguration());
-if (cipher != null) {
-  options.setCipher(cipher);
-}
-if (length != 0) {
-  options.setBitLength(length);
-}
-options.setDescription(description);
-options.setAttributes(attributes);
-
-KeyProvider.KeyVersion keyVersion = user.doAs(
-new PrivilegedExceptionAction() {
-  @Override
-  public KeyVersion run() throws Exception {
-KeyProvider.KeyVersion keyVersion = (material != null)
-  ? provider.createKey(name, Base64.decodeBase64(material), 
options)
-  : provider.createKey(name, options);
-provider.flush();
-return keyVersion;
+try {
+  LOG.trace("Entering createKey Method.");
+  KMSWebApp.getAdminCallsMeter().mark();
+  UserGroupInformation user = HttpUserGroupInformation.get();
+  final String name = (String) jsonKey.get(KMSRESTConstants.NAME_FIELD);
+  KMSClientProvider.checkNotEmpty(name, KMSRESTConstants.NAME_FIELD);
+  assertAccess(KMSACLs.Type.CREATE, user, KMSOp.CREATE_KEY, name);
+  String cipher = (String) jsonKey.get(KMSRESTConstants.CIPHER_FIELD);
+  final String material;
+  material = (String) jsonKey.get(KMSRESTConstants.MATERIAL_FIELD);
+  int length = (jsonKey.containsKey(KMSRESTConstants.LENGTH_FIELD))
+   ? (Integer) jsonKey.get(KMSRESTConstants.LENGTH_FIELD) : 0;
+  String description = (String)
+  jsonKey.get(KMSRESTConstants.DESCRIPTION_FIELD);
+  LOG.debug("Creating key with name {}, cipher being used{}, " +
+  "length of key {}, description of key {}", name, cipher,
+   length, description);
+  Map attributes = (Map)
+  jsonKey.get(KMSRESTConstants.ATTRIBUTES_FIELD);
+  if (material != null) {
+assertAccess(KMSACLs.Type.SET_KEY_MATERIAL, user,
+KMSOp.CREATE_KEY, name);
+  }
+  final KeyProvider.Options 
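
The hunk is cut off here, but the shape of the change is visible: each
handler body moves inside a try block so the KMS can log the failure
server-side before rethrowing it. A minimal sketch of that log-before-throwing
pattern, assuming slf4j and JAX-RS on the classpath; doCreateKey() is a
hypothetical stand-in for the original handler body, not a real KMS method:

    import java.util.Map;
    import javax.ws.rs.core.Response;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Sketch of the pattern this commit applies to every KMS handler.
    public class LoggingHandlerSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(LoggingHandlerSketch.class);

      public Response createKey(Map<String, Object> jsonKey) throws Exception {
        try {
          LOG.trace("Entering createKey Method.");
          return doCreateKey(jsonKey);        // original handler logic
        } catch (Exception e) {
          // Record the failure in the server log before rethrowing, so it is
          // visible even when the client only sees a generic error response.
          LOG.debug("Exception in createKey.", e);
          throw e;
        }
      }

      private Response doCreateKey(Map<String, Object> jsonKey) {
        return Response.ok().build();         // placeholder body
      }
    }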

[49/52] [abbrv] hadoop git commit: HADOOP-13698. Document caveat for KeyShell when underlying KeyProvider does not delete a key.

2016-10-12 Thread cnauroth
HADOOP-13698. Document caveat for KeyShell when underlying KeyProvider does not 
delete a key.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b84c4891
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b84c4891
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b84c4891

Branch: refs/heads/HADOOP-13037
Commit: b84c4891f9eca8d56593e48e9df88be42e24220d
Parents: 3c9a010
Author: Xiao Chen 
Authored: Tue Oct 11 17:05:00 2016 -0700
Committer: Xiao Chen 
Committed: Tue Oct 11 17:05:00 2016 -0700

--
 .../hadoop-common/src/site/markdown/CommandsManual.md| 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b84c4891/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
index 4d7d504..2ece71a 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
@@ -202,7 +202,9 @@ Manage keys via the KeyProvider. For details on 
KeyProviders, see the [Transpare
 
 Providers frequently require that a password or other secret is supplied. If 
the provider requires a password and is unable to find one, it will use a 
default password and emit a warning message that the default password is being 
used. If the `-strict` flag is supplied, the warning message becomes an error 
message and the command returns immediately with an error status.
 
-NOTE: Some KeyProviders (e.g. 
org.apache.hadoop.crypto.key.JavaKeyStoreProvider) does not support uppercase 
key names.
+NOTE: Some KeyProviders (e.g. 
org.apache.hadoop.crypto.key.JavaKeyStoreProvider) do not support uppercase key 
names.
+
+NOTE: Some KeyProviders do not directly execute a key deletion (e.g. they 
perform a soft-delete instead, or delay the actual deletion, to prevent 
mistakes). In these cases, one may encounter errors when creating or deleting 
a key with the same name after deleting it. Please check the underlying 
KeyProvider for details.
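
The same caveat applies to the KeyProvider API directly, not just the shell.
A hedged sketch of the delete-then-recreate sequence the note warns about;
the key name is illustrative, and whether the second createKey succeeds
immediately depends entirely on the configured provider:

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.crypto.key.KeyProvider;
    import org.apache.hadoop.crypto.key.KeyProviderFactory;

    // Illustrative delete-then-recreate sequence against whatever provider
    // is configured via hadoop.security.key.provider.path.
    public class KeyRecreateSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
        KeyProvider provider = providers.get(0);  // assumes one is configured

        provider.deleteKey("mykey");
        provider.flush();

        // A provider that soft-deletes or defers deletion may reject this,
        // reporting that "mykey" still exists; consult the provider's docs.
        provider.createKey("mykey", new KeyProvider.Options(conf));
        provider.flush();
      }
    }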
 
 ### `trace`
 





[09/52] [abbrv] hadoop git commit: HADOOP-13150. Avoid use of toString() in output of HDFS ACL shell commands. Contributed by Chris Nauroth.

2016-10-12 Thread cnauroth
HADOOP-13150. Avoid use of toString() in output of HDFS ACL shell commands. 
Contributed by Chris Nauroth.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1d330fba
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1d330fba
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1d330fba

Branch: refs/heads/HADOOP-13037
Commit: 1d330fbaf6b50802750aa461640773fb788ef884
Parents: f32e9fc
Author: Chris Nauroth 
Authored: Thu Oct 6 12:45:11 2016 -0700
Committer: Chris Nauroth 
Committed: Thu Oct 6 13:19:16 2016 -0700

--
 .../apache/hadoop/fs/permission/AclEntry.java   | 24 ++--
 .../hadoop/fs/permission/AclEntryScope.java |  2 +-
 .../hadoop/fs/permission/AclEntryType.java  | 23 ++-
 .../apache/hadoop/fs/permission/AclStatus.java  |  2 +-
 .../org/apache/hadoop/fs/shell/AclCommands.java |  6 ++---
 .../hdfs/web/resources/AclPermissionParam.java  | 23 ---
 .../org/apache/hadoop/hdfs/web/JsonUtil.java|  2 +-
 7 files changed, 70 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d330fba/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
index 45402f8..b42c365 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
@@ -36,7 +36,7 @@ import org.apache.hadoop.util.StringUtils;
  * to create a new instance.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
+@InterfaceStability.Stable
 public class AclEntry {
   private final AclEntryType type;
   private final String name;
@@ -100,13 +100,29 @@ public class AclEntry {
   }
 
   @Override
+  @InterfaceStability.Unstable
   public String toString() {
+// This currently just delegates to the stable string representation, but it
+// is permissible for the output of this method to change across versions.
+return toStringStable();
+  }
+
+  /**
+   * Returns a string representation guaranteed to be stable across versions to
+   * satisfy backward compatibility requirements, such as for shell command
+   * output or serialization.  The format of this string representation matches
+   * what is expected by the {@link #parseAclSpec(String, boolean)} and
+   * {@link #parseAclEntry(String, boolean)} methods.
+   *
+   * @return stable, backward compatible string representation
+   */
+  public String toStringStable() {
 StringBuilder sb = new StringBuilder();
 if (scope == AclEntryScope.DEFAULT) {
   sb.append("default:");
 }
 if (type != null) {
-  sb.append(StringUtils.toLowerCase(type.toString()));
+  sb.append(StringUtils.toLowerCase(type.toStringStable()));
 }
 sb.append(':');
 if (name != null) {
@@ -203,6 +219,8 @@ public class AclEntry {
   /**
* Parses a string representation of an ACL spec into a list of AclEntry
* objects. Example: "user::rwx,user:foo:rw-,group::r--,other::---"
+   * The expected format of ACL entries in the string parameter is the same
+   * format produced by the {@link #toStringStable()} method.
* 
* @param aclSpec
*  String representation of an ACL spec.
@@ -228,6 +246,8 @@ public class AclEntry {
 
   /**
* Parses a string representation of an ACL into a AclEntry object.
+   * The expected format of ACL entries in the string parameter is the same
+   * format produced by the {@link #toStringStable()} method.
* 
* @param aclStr
*  String representation of an ACL.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d330fba/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
index 6d941e7..64c70aa 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
@@ -24,7 +24,7 @@ import org.apache.hadoop.classification.InterfaceStability;
  * Specifies the scope or intended usage of an ACL entry.
  */
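
The contract introduced above is round-trippable: toStringStable() emits
exactly the format parseAclSpec() and parseAclEntry() accept. A small sketch
exercising that round trip through the public builder API (the entry values
are arbitrary examples):

    import java.util.List;
    import org.apache.hadoop.fs.permission.AclEntry;
    import org.apache.hadoop.fs.permission.AclEntryScope;
    import org.apache.hadoop.fs.permission.AclEntryType;
    import org.apache.hadoop.fs.permission.FsAction;

    // Round-trips an ACL entry through its stable string form.
    public class AclStableStringSketch {
      public static void main(String[] args) {
        AclEntry entry = new AclEntry.Builder()
            .setScope(AclEntryScope.ACCESS)
            .setType(AclEntryType.USER)
            .setName("foo")
            .setPermission(FsAction.READ_WRITE)
            .build();

        String stable = entry.toStringStable();   // "user:foo:rw-"
        // parseAclSpec expects exactly the format toStringStable produces.
        List<AclEntry> parsed = AclEntry.parseAclSpec(stable, true);
        System.out.println(parsed.get(0).toStringStable());
      }
    }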
 

[52/52] [abbrv] hadoop git commit: YARN-5677. RM should transition to standby when connection is lost for an extended period. (Daniel Templeton via kasha)

2016-10-12 Thread cnauroth
YARN-5677. RM should transition to standby when connection is lost for an 
extended period. (Daniel Templeton via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6476934a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6476934a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6476934a

Branch: refs/heads/HADOOP-13037
Commit: 6476934ae5de1be7988ab198b673d82fe0f006e3
Parents: 6378845
Author: Karthik Kambatla 
Authored: Tue Oct 11 22:07:10 2016 -0700
Committer: Karthik Kambatla 
Committed: Tue Oct 11 22:07:10 2016 -0700

--
 .../resourcemanager/EmbeddedElectorService.java |  59 +-
 .../resourcemanager/TestRMEmbeddedElector.java  | 191 +++
 2 files changed, 244 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6476934a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
index 72327e8..88d2e10 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.yarn.server.resourcemanager;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.protobuf.InvalidProtocolBufferException;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -39,6 +40,8 @@ import org.apache.zookeeper.data.ACL;
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Timer;
+import java.util.TimerTask;
 
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
@@ -54,6 +57,10 @@ public class EmbeddedElectorService extends AbstractService
 
   private byte[] localActiveNodeInfo;
   private ActiveStandbyElector elector;
+  private long zkSessionTimeout;
+  private Timer zkDisconnectTimer;
+  @VisibleForTesting
+  final Object zkDisconnectLock = new Object();
 
   EmbeddedElectorService(RMContext rmContext) {
 super(EmbeddedElectorService.class.getName());
@@ -80,7 +87,7 @@ public class EmbeddedElectorService extends AbstractService
 YarnConfiguration.DEFAULT_AUTO_FAILOVER_ZK_BASE_PATH);
 String electionZNode = zkBasePath + "/" + clusterId;
 
-long zkSessionTimeout = conf.getLong(YarnConfiguration.RM_ZK_TIMEOUT_MS,
+zkSessionTimeout = conf.getLong(YarnConfiguration.RM_ZK_TIMEOUT_MS,
 YarnConfiguration.DEFAULT_RM_ZK_TIMEOUT_MS);
 
 List zkAcls = RMZKUtils.getZKAcls(conf);
@@ -123,6 +130,8 @@ public class EmbeddedElectorService extends AbstractService
 
   @Override
   public void becomeActive() throws ServiceFailedException {
+cancelDisconnectTimer();
+
 try {
   rmContext.getRMAdminService().transitionToActive(req);
 } catch (Exception e) {
@@ -132,6 +141,8 @@ public class EmbeddedElectorService extends AbstractService
 
   @Override
   public void becomeStandby() {
+cancelDisconnectTimer();
+
 try {
   rmContext.getRMAdminService().transitionToStandby(req);
 } catch (Exception e) {
@@ -139,13 +150,49 @@ public class EmbeddedElectorService extends 
AbstractService
 }
   }
 
+  /**
+   * Stop the disconnect timer.  Any running tasks will be allowed to complete.
+   */
+  private void cancelDisconnectTimer() {
+synchronized (zkDisconnectLock) {
+  if (zkDisconnectTimer != null) {
+zkDisconnectTimer.cancel();
+zkDisconnectTimer = null;
+  }
+}
+  }
+
+  /**
+   * When the ZK client loses contact with ZK, this method will be called to
+   * allow the RM to react. Because the loss of connection can be noticed
+   * before the session timeout happens, it is undesirable to transition
+   * immediately. Instead the method starts a timer that will wait
+   * {@link YarnConfiguration#RM_ZK_TIMEOUT_MS} milliseconds before
+   * initiating the transition into standby state.
+   */
   @Override
   public void enterNeutralMode() {
-/**
- * Possibly due to transient connection issues. Do nothing.
- * TODO: Might want to keep track of how long in 
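
The replaced comment is cut off above, but the mechanism is clear from
cancelDisconnectTimer() and the new javadoc: on disconnect, schedule a
one-shot task for the ZK session timeout; completing an election
(becomeActive/becomeStandby) cancels it first. A standalone sketch of that
delayed-transition pattern, where transitionToStandby() stands in for the
rmContext.getRMAdminService().transitionToStandby(req) call:

    import java.util.Timer;
    import java.util.TimerTask;

    // Sketch of the delayed standby transition added by this commit.
    public class DisconnectTimerSketch {
      private final Object zkDisconnectLock = new Object();
      private Timer zkDisconnectTimer;
      private final long zkSessionTimeout = 10_000L;  // RM_ZK_TIMEOUT_MS stand-in

      // Called when the ZK connection is lost; waits a full session timeout
      // before giving up the active role, in case the outage is transient.
      public void enterNeutralMode() {
        synchronized (zkDisconnectLock) {
          if (zkDisconnectTimer == null) {
            zkDisconnectTimer = new Timer("zk-disconnect-timer", true);
            zkDisconnectTimer.schedule(new TimerTask() {
              @Override
              public void run() {
                synchronized (zkDisconnectLock) {
                  transitionToStandby();
                }
              }
            }, zkSessionTimeout);
          }
        }
      }

      // becomeActive()/becomeStandby() call this first: a completed election
      // supersedes the pending disconnect handling.
      public void cancelDisconnectTimer() {
        synchronized (zkDisconnectLock) {
          if (zkDisconnectTimer != null) {
            zkDisconnectTimer.cancel();
            zkDisconnectTimer = null;
          }
        }
      }

      private void transitionToStandby() {
        System.out.println("transitioning to standby");
      }
    }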

[22/52] [abbrv] hadoop git commit: HDFS-10980. Optimize check for existence of parent directory. Contributed by Daryn Sharp.

2016-10-12 Thread cnauroth
HDFS-10980. Optimize check for existence of parent directory. Contributed by 
Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e57fa81d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e57fa81d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e57fa81d

Branch: refs/heads/HADOOP-13037
Commit: e57fa81d9559a93d77fd724f7792326c31a490be
Parents: f3f37e6
Author: Kihwal Lee 
Authored: Fri Oct 7 17:20:15 2016 -0500
Committer: Kihwal Lee 
Committed: Fri Oct 7 17:20:15 2016 -0500

--
 .../hdfs/server/namenode/FSDirMkdirOp.java  |  2 +-
 .../hdfs/server/namenode/FSDirSymlinkOp.java|  2 +-
 .../hdfs/server/namenode/FSDirWriteFileOp.java  |  2 +-
 .../hdfs/server/namenode/FSDirectory.java   | 11 ++---
 .../hdfs/server/namenode/TestFSDirectory.java   | 48 
 5 files changed, 56 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e57fa81d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
index 2d1914f..4d8d7d7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
@@ -66,7 +66,7 @@ class FSDirMkdirOp {
 }
 
 if (!createParent) {
-  fsd.verifyParentDir(iip, src);
+  fsd.verifyParentDir(iip);
 }
 
 // validate that we have enough inodes. This is, at best, a

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e57fa81d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSymlinkOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSymlinkOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSymlinkOp.java
index 6938a84..71362f8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSymlinkOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSymlinkOp.java
@@ -58,7 +58,7 @@ class FSDirSymlinkOp {
   iip = fsd.resolvePathForWrite(pc, link, false);
   link = iip.getPath();
   if (!createParent) {
-fsd.verifyParentDir(iip, link);
+fsd.verifyParentDir(iip);
   }
   if (!fsd.isValidToCreate(link, iip)) {
 throw new IOException(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e57fa81d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
index 40be83b..aab0f76 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
@@ -323,7 +323,7 @@ class FSDirWriteFileOp {
   }
 } else {
   if (!createParent) {
-dir.verifyParentDir(iip, src);
+dir.verifyParentDir(iip);
   }
   if (!flag.contains(CreateFlag.CREATE)) {
 throw new FileNotFoundException("Can't overwrite non-existent " + src);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e57fa81d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 8456da6..a059ee5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -1765,17 +1765,16 @@ public class FSDirectory implements Closeable {
   /**
* Verify that parent directory of src exists.
*/
-  void 

[44/52] [abbrv] hadoop git commit: HADOOP-13684. Snappy may complain Hadoop is built without snappy if libhadoop is not found. Contributed by Wei-Chiu Chuang.

2016-10-12 Thread cnauroth
HADOOP-13684. Snappy may complain Hadoop is built without snappy if libhadoop 
is not found. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b32b142
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b32b142
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b32b142

Branch: refs/heads/HADOOP-13037
Commit: 4b32b1420d98ea23460d05ae94f2698109b3d6f7
Parents: 2fb392a
Author: Wei-Chiu Chuang 
Authored: Tue Oct 11 13:21:33 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Tue Oct 11 13:21:33 2016 -0700

--
 .../apache/hadoop/io/compress/SnappyCodec.java  | 30 +++-
 1 file changed, 16 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b32b142/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
index 2a9c5d0..20a4cd6 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
@@ -60,20 +60,22 @@ public class SnappyCodec implements Configurable, 
CompressionCodec, DirectDecomp
* Are the native snappy libraries loaded & initialized?
*/
   public static void checkNativeCodeLoaded() {
-  if (!NativeCodeLoader.isNativeCodeLoaded() ||
-  !NativeCodeLoader.buildSupportsSnappy()) {
-throw new RuntimeException("native snappy library not available: " +
-"this version of libhadoop was built without " +
-"snappy support.");
-  }
-  if (!SnappyCompressor.isNativeCodeLoaded()) {
-throw new RuntimeException("native snappy library not available: " +
-"SnappyCompressor has not been loaded.");
-  }
-  if (!SnappyDecompressor.isNativeCodeLoaded()) {
-throw new RuntimeException("native snappy library not available: " +
-"SnappyDecompressor has not been loaded.");
-  }
+if (!NativeCodeLoader.buildSupportsSnappy()) {
+  throw new RuntimeException("native snappy library not available: " +
+  "this version of libhadoop was built without " +
+  "snappy support.");
+}
+if (!NativeCodeLoader.isNativeCodeLoaded()) {
+  throw new RuntimeException("Failed to load libhadoop.");
+}
+if (!SnappyCompressor.isNativeCodeLoaded()) {
+  throw new RuntimeException("native snappy library not available: " +
+  "SnappyCompressor has not been loaded.");
+}
+if (!SnappyDecompressor.isNativeCodeLoaded()) {
+  throw new RuntimeException("native snappy library not available: " +
+  "SnappyDecompressor has not been loaded.");
+}
   }
   
   public static boolean isNativeCodeLoaded() {
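
Splitting the combined condition gives each failure its own message: "built
without snappy support" no longer masks the case where libhadoop itself
failed to load. Callers that prefer to degrade rather than catch the
RuntimeException can use the boolean probe instead; a brief sketch:

    import org.apache.hadoop.io.compress.SnappyCodec;

    // Guards codec selection with the public static probe shown above.
    public class SnappyGuardSketch {
      public static void main(String[] args) {
        if (SnappyCodec.isNativeCodeLoaded()) {
          // Safe to configure SnappyCodec for compression here.
          System.out.println("native snappy available");
        } else {
          // checkNativeCodeLoaded() would throw with the specific reason;
          // a caller can instead fall back to a pure-Java codec.
          System.out.println("falling back to a non-native codec");
        }
      }
    }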





[20/52] [abbrv] hadoop git commit: HADOOP-13627. Have an explicit KerberosAuthException for UGI to throw, text from public constants. Contributed by Xiao Chen.

2016-10-12 Thread cnauroth
HADOOP-13627. Have an explicit KerberosAuthException for UGI to throw, text 
from public constants. Contributed by Xiao Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e853be6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e853be6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e853be6

Branch: refs/heads/HADOOP-13037
Commit: 2e853be6577a5b98fd860e6d64f89ca6d160514a
Parents: 3565c9a
Author: Xiao Chen 
Authored: Fri Oct 7 13:46:27 2016 -0700
Committer: Xiao Chen 
Committed: Fri Oct 7 13:46:27 2016 -0700

--
 .../hadoop/security/KerberosAuthException.java  | 118 +++
 .../hadoop/security/UGIExceptionMessages.java   |  46 
 .../hadoop/security/UserGroupInformation.java   |  74 +++-
 3 files changed, 209 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e853be6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KerberosAuthException.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KerberosAuthException.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KerberosAuthException.java
new file mode 100644
index 000..811c7c9
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KerberosAuthException.java
@@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.security;
+
+import static org.apache.hadoop.security.UGIExceptionMessages.*;
+
+import java.io.IOException;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Thrown when {@link UserGroupInformation} failed with an unrecoverable error,
+ * such as failure in kerberos login/logout, invalid subject etc.
+ *
+ * Caller should not retry when catching this exception.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public class KerberosAuthException extends IOException {
+  static final long serialVersionUID = 31L;
+
+  private String user;
+  private String principal;
+  private String keytabFile;
+  private String ticketCacheFile;
+  private String initialMessage;
+
+  public KerberosAuthException(String msg) {
+super(msg);
+  }
+
+  public KerberosAuthException(Throwable cause) {
+super(cause);
+  }
+
+  public KerberosAuthException(String initialMsg, Throwable cause) {
+this(cause);
+initialMessage = initialMsg;
+  }
+
+  public void setUser(final String u) {
+user = u;
+  }
+
+  public void setPrincipal(final String p) {
+principal = p;
+  }
+
+  public void setKeytabFile(final String k) {
+keytabFile = k;
+  }
+
+  public void setTicketCacheFile(final String t) {
+ticketCacheFile = t;
+  }
+
+  /** @return The initial message, or null if not set. */
+  public String getInitialMessage() {
+return initialMessage;
+  }
+
+  /** @return The keytab file path, or null if not set. */
+  public String getKeytabFile() {
+return keytabFile;
+  }
+
+  /** @return The principal, or null if not set. */
+  public String getPrincipal() {
+return principal;
+  }
+
+  /** @return The ticket cache file path, or null if not set. */
+  public String getTicketCacheFile() {
+return ticketCacheFile;
+  }
+
+  /** @return The user, or null if not set. */
+  public String getUser() {
+return user;
+  }
+
+  @Override
+  public String getMessage() {
+final StringBuilder sb = new StringBuilder();
+if (initialMessage != null) {
+  sb.append(initialMessage);
+}
+if (user != null) {
+  sb.append(FOR_USER + user);
+}
+if (principal != null) {
+  sb.append(FOR_PRINCIPAL + principal);
+}
+if (keytabFile != null) {
+  sb.append(FROM_KEYTAB + keytabFile);
+}
+if (ticketCacheFile != null) {
+  sb.append(USING_TICKET_CACHE_FILE+ 
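
getMessage() is cut off just above, but the full class shows the idea: the
setters attach login context that getMessage() folds into the final text. A
usage sketch with illustrative principal and keytab values:

    import java.io.IOException;
    import org.apache.hadoop.security.KerberosAuthException;

    // Illustrative construction; the principal and keytab are example values.
    public class KerberosAuthExceptionSketch {
      public static void main(String[] args) {
        try {
          KerberosAuthException kae = new KerberosAuthException(
              "Login failure", new IOException("no valid credentials"));
          kae.setPrincipal("nn/host.example.com@EXAMPLE.COM");
          kae.setKeytabFile("/etc/security/keytabs/nn.keytab");
          throw kae;
        } catch (KerberosAuthException e) {
          // Prints the initial message plus the attached principal and keytab.
          System.out.println(e.getMessage());
        }
      }
    }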

[08/52] [abbrv] hadoop git commit: HDFS-10939. Reduce performance penalty of encryption zones. Contributed by Daryn Sharp.

2016-10-12 Thread cnauroth
HDFS-10939. Reduce performance penalty of encryption zones. Contributed by 
Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f32e9fc8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f32e9fc8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f32e9fc8

Branch: refs/heads/HADOOP-13037
Commit: f32e9fc8f7150f0e889c0774b3ad712af26fbd65
Parents: 72a2ae6
Author: Kihwal Lee 
Authored: Thu Oct 6 15:11:14 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 6 15:11:14 2016 -0500

--
 .../namenode/EncryptionFaultInjector.java   |   6 +
 .../server/namenode/EncryptionZoneManager.java  |  25 +--
 .../server/namenode/FSDirEncryptionZoneOp.java  | 144 +---
 .../server/namenode/FSDirErasureCodingOp.java   |   2 +-
 .../hdfs/server/namenode/FSDirRenameOp.java |   4 +-
 .../server/namenode/FSDirStatAndListingOp.java  |  20 +--
 .../hdfs/server/namenode/FSDirWriteFileOp.java  | 163 +--
 .../hdfs/server/namenode/FSDirXAttrOp.java  |  21 +--
 .../hdfs/server/namenode/FSDirectory.java   |   5 +-
 .../hdfs/server/namenode/FSEditLogLoader.java   |   3 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 115 ++---
 .../hdfs/server/namenode/XAttrStorage.java  |   7 +-
 .../apache/hadoop/hdfs/TestEncryptionZones.java |  50 --
 13 files changed, 295 insertions(+), 270 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f32e9fc8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
index 27d8f50..104d8c3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
@@ -35,5 +35,11 @@ public class EncryptionFaultInjector {
   }
 
   @VisibleForTesting
+  public void startFileNoKey() throws IOException {}
+
+  @VisibleForTesting
+  public void startFileBeforeGenerateKey() throws IOException {}
+
+  @VisibleForTesting
   public void startFileAfterGenerateKey() throws IOException {}
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f32e9fc8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 511c616..ceeccf6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -260,12 +260,14 @@ public class EncryptionZoneManager {
*
* @param srcIIP source IIP
* @param dstIIP destination IIP
-   * @param srcsource path, used for debugging
* @throws IOException if the src cannot be renamed to the dst
*/
-  void checkMoveValidity(INodesInPath srcIIP, INodesInPath dstIIP, String src)
+  void checkMoveValidity(INodesInPath srcIIP, INodesInPath dstIIP)
   throws IOException {
 assert dir.hasReadLock();
+if (!hasCreatedEncryptionZone()) {
+  return;
+}
 final EncryptionZoneInt srcParentEZI =
 getParentEncryptionZoneForPath(srcIIP);
 final EncryptionZoneInt dstParentEZI =
@@ -274,17 +276,17 @@ public class EncryptionZoneManager {
 final boolean dstInEZ = (dstParentEZI != null);
 if (srcInEZ && !dstInEZ) {
   throw new IOException(
-  src + " can't be moved from an encryption zone.");
+  srcIIP.getPath() + " can't be moved from an encryption zone.");
 } else if (dstInEZ && !srcInEZ) {
   throw new IOException(
-  src + " can't be moved into an encryption zone.");
+  srcIIP.getPath() + " can't be moved into an encryption zone.");
 }
 
 if (srcInEZ) {
   if (srcParentEZI != dstParentEZI) {
 final String srcEZPath = getFullPathName(srcParentEZI);
 final String dstEZPath = getFullPathName(dstParentEZI);
-final StringBuilder sb = new StringBuilder(src);
+final StringBuilder sb = new StringBuilder(srcIIP.getPath());
 sb.append(" can't be 

[37/52] [abbrv] hadoop git commit: Merge branch 'HADOOP-12756' into trunk

2016-10-12 Thread cnauroth
Merge branch 'HADOOP-12756' into trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/669d6f13
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/669d6f13
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/669d6f13

Branch: refs/heads/HADOOP-13037
Commit: 669d6f13ec48a90d4ba7e4ed1dd0e9687580f8f3
Parents: c874fa9 c31b5e6
Author: Kai Zheng 
Authored: Tue Oct 11 03:22:11 2016 +0600
Committer: Kai Zheng 
Committed: Tue Oct 11 03:22:11 2016 +0600

--
 .gitignore  |   2 +
 hadoop-project/pom.xml  |  22 +
 .../dev-support/findbugs-exclude.xml|  18 +
 hadoop-tools/hadoop-aliyun/pom.xml  | 154 +
 .../aliyun/oss/AliyunCredentialsProvider.java   |  87 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  | 580 +++
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 516 +
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 260 +
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 167 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  | 113 
 .../hadoop/fs/aliyun/oss/package-info.java  |  22 +
 .../site/markdown/tools/hadoop-aliyun/index.md  | 294 ++
 .../fs/aliyun/oss/AliyunOSSTestUtils.java   |  77 +++
 .../fs/aliyun/oss/TestAliyunCredentials.java|  78 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 239 
 .../oss/TestAliyunOSSFileSystemStore.java   | 125 
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java | 145 +
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 +++
 .../aliyun/oss/contract/AliyunOSSContract.java  |  49 ++
 .../contract/TestAliyunOSSContractCreate.java   |  35 ++
 .../contract/TestAliyunOSSContractDelete.java   |  34 ++
 .../contract/TestAliyunOSSContractDistCp.java   |  44 ++
 .../TestAliyunOSSContractGetFileStatus.java |  35 ++
 .../contract/TestAliyunOSSContractMkdir.java|  34 ++
 .../oss/contract/TestAliyunOSSContractOpen.java |  34 ++
 .../contract/TestAliyunOSSContractRename.java   |  35 ++
 .../contract/TestAliyunOSSContractRootDir.java  |  69 +++
 .../oss/contract/TestAliyunOSSContractSeek.java |  34 ++
 .../src/test/resources/contract/aliyun-oss.xml  | 115 
 .../src/test/resources/core-site.xml|  46 ++
 .../src/test/resources/log4j.properties |  23 +
 hadoop-tools/hadoop-tools-dist/pom.xml  |   6 +
 hadoop-tools/pom.xml|   1 +
 34 files changed, 3695 insertions(+)
--






[38/52] [abbrv] hadoop git commit: YARN-5057. Resourcemanager.security.TestDelegationTokenRenewer fails in trunk. Contributed by Jason Lowe.

2016-10-12 Thread cnauroth
YARN-5057. Resourcemanager.security.TestDelegationTokenRenewer fails in trunk. 
Contributed by Jason Lowe.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0773ffd0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0773ffd0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0773ffd0

Branch: refs/heads/HADOOP-13037
Commit: 0773ffd0f8383384f8cf8599476565f78aae70c9
Parents: 669d6f1
Author: Naganarasimha 
Authored: Mon Oct 10 18:04:47 2016 -0400
Committer: Naganarasimha 
Committed: Mon Oct 10 18:04:47 2016 -0400

--
 .../security/TestDelegationTokenRenewer.java| 24 
 1 file changed, 19 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0773ffd0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
index 5dfee89..205188b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
@@ -1148,17 +1148,21 @@ public class TestDelegationTokenRenewer {
 credentials, null, true, false, false, null, 0, null, false, null);
 MockAM am1 = MockRM.launchAndRegisterAM(app1, rm, nm1);
 rm.waitForState(app1.getApplicationId(), RMAppState.RUNNING);
+DelegationTokenRenewer renewer =
+rm.getRMContext().getDelegationTokenRenewer();
+DelegationTokenToRenew dttr = renewer.getAllTokens().get(token1);
+Assert.assertNotNull(dttr);
 
 // submit app2 with the same token, set cancelTokenWhenComplete to true;
 RMApp app2 = rm.submitApp(resource, "name", "user", null, false, null, 2,
 credentials, null, true, false, false, null, 0, null, true, null);
 MockAM am2 = MockRM.launchAndRegisterAM(app2, rm, nm1);
 rm.waitForState(app2.getApplicationId(), RMAppState.RUNNING);
-MockRM.finishAMAndVerifyAppState(app2, rm, nm1, am2);
+finishAMAndWaitForComplete(app2, rm, nm1, am2, dttr);
 Assert.assertTrue(rm.getRMContext().getDelegationTokenRenewer()
   .getAllTokens().containsKey(token1));
 
-MockRM.finishAMAndVerifyAppState(app1, rm, nm1, am1);
+finishAMAndWaitForComplete(app1, rm, nm1, am1, dttr);
 // app2 completes, app1 is still running, check the token is not cancelled
 Assert.assertFalse(Renewer.cancelled);
   }
@@ -1224,7 +1228,7 @@ public class TestDelegationTokenRenewer {
 Assert.assertTrue(dttr.referringAppIds.contains(app2.getApplicationId()));
 Assert.assertFalse(Renewer.cancelled);
 
-MockRM.finishAMAndVerifyAppState(app2, rm, nm1, am2);
+finishAMAndWaitForComplete(app2, rm, nm1, am2, dttr);
 // app2 completes, app1 is still running, check the token is not cancelled
 Assert.assertTrue(renewer.getAllTokens().containsKey(token1));
 Assert.assertTrue(dttr.referringAppIds.contains(app1.getApplicationId()));
@@ -1242,14 +1246,14 @@ public class TestDelegationTokenRenewer {
 Assert.assertFalse(dttr.isTimerCancelled());
 Assert.assertFalse(Renewer.cancelled);
 
-MockRM.finishAMAndVerifyAppState(app1, rm, nm1, am1);
+finishAMAndWaitForComplete(app1, rm, nm1, am1, dttr);
 Assert.assertTrue(renewer.getAllTokens().containsKey(token1));
 Assert.assertFalse(dttr.referringAppIds.contains(app1.getApplicationId()));
 Assert.assertTrue(dttr.referringAppIds.contains(app3.getApplicationId()));
 Assert.assertFalse(dttr.isTimerCancelled());
 Assert.assertFalse(Renewer.cancelled);
 
-MockRM.finishAMAndVerifyAppState(app3, rm, nm1, am3);
+finishAMAndWaitForComplete(app3, rm, nm1, am3, dttr);
 Assert.assertFalse(renewer.getAllTokens().containsKey(token1));
 Assert.assertTrue(dttr.referringAppIds.isEmpty());
 Assert.assertTrue(dttr.isTimerCancelled());
@@ -1259,4 +1263,14 @@ public class TestDelegationTokenRenewer {
 Assert.assertFalse(renewer.getDelegationTokens().contains(token1));
   }
 
+  private void finishAMAndWaitForComplete(final 
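
finishAMAndWaitForComplete() is truncated here, but its purpose is stated by
the surrounding assertions: the renewer processes app-finished events
asynchronously, so the test must poll for the expected token state rather
than assert immediately after the AM finishes. A self-contained sketch of
that poll-until-condition pattern (Hadoop's own GenericTestUtils.waitFor
offers the same behavior):

    // Poll a condition until it holds or a timeout elapses, instead of
    // asserting immediately after an asynchronous event.
    public class WaitForSketch {
      interface Condition { boolean holds(); }

      static void waitFor(Condition c, long intervalMs, long timeoutMs)
          throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!c.holds()) {
          if (System.currentTimeMillis() > deadline) {
            throw new IllegalStateException("Timed out waiting for condition");
          }
          Thread.sleep(intervalMs);
        }
      }

      public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        waitFor(() -> System.currentTimeMillis() - start > 200, 50, 1000);
        System.out.println("condition reached");
      }
    }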

[17/52] [abbrv] hadoop git commit: HDFS-10933. Refactor TestFsck. Contributed by Takanobu Asanuma.

2016-10-12 Thread cnauroth
HDFS-10933. Refactor TestFsck. Contributed by Takanobu Asanuma.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3059b251
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3059b251
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3059b251

Branch: refs/heads/HADOOP-13037
Commit: 3059b251d8f37456c5761ecaf73fe6c0c5a59067
Parents: be3cb10
Author: Wei-Chiu Chuang 
Authored: Fri Oct 7 10:17:50 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Fri Oct 7 10:17:50 2016 -0700

--
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 2482 --
 1 file changed, 1152 insertions(+), 1330 deletions(-)
--






[26/52] [abbrv] hadoop git commit: Merge branch 'trunk' into HADOOP-12756

2016-10-12 Thread cnauroth
Merge branch 'trunk' into HADOOP-12756


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a57bba47
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a57bba47
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a57bba47

Branch: refs/heads/HADOOP-13037
Commit: a57bba470b396c163baef7ac9447c063180ec15b
Parents: 26d5df3 6a38d11
Author: Kai Zheng 
Authored: Sun Oct 9 10:29:40 2016 +0800
Committer: Kai Zheng 
Committed: Sun Oct 9 10:29:40 2016 +0800

--
 .../IncludePublicAnnotationsJDiffDoclet.java|64 +
 .../util/RolloverSignerSecretProvider.java  | 2 +-
 .../util/TestZKSignerSecretProvider.java|   221 +-
 .../dev-support/findbugsExcludeFile.xml | 5 +
 .../jdiff/Apache_Hadoop_Common_2.7.2.xml| 41149 ++---
 .../org/apache/hadoop/conf/ConfServlet.java |19 +-
 .../org/apache/hadoop/conf/Configuration.java   |   284 +-
 .../apache/hadoop/fs/DFCachingGetSpaceUsed.java |48 +
 .../src/main/java/org/apache/hadoop/fs/DU.java  | 8 +-
 .../apache/hadoop/fs/FileEncryptionInfo.java|21 +
 .../java/org/apache/hadoop/fs/FileSystem.java   |13 +-
 .../org/apache/hadoop/fs/ftp/FTPFileSystem.java | 6 +-
 .../apache/hadoop/fs/permission/AclEntry.java   |24 +-
 .../hadoop/fs/permission/AclEntryScope.java | 2 +-
 .../hadoop/fs/permission/AclEntryType.java  |23 +-
 .../apache/hadoop/fs/permission/AclStatus.java  | 2 +-
 .../org/apache/hadoop/fs/shell/AclCommands.java | 6 +-
 .../hadoop/fs/shell/CommandWithDestination.java | 5 +-
 .../org/apache/hadoop/fs/viewfs/ViewFs.java | 2 +-
 .../java/org/apache/hadoop/io/BloomMapFile.java |11 +-
 .../main/java/org/apache/hadoop/io/IOUtils.java | 9 +-
 .../main/java/org/apache/hadoop/io/MapFile.java |10 +-
 .../java/org/apache/hadoop/io/SequenceFile.java |16 +-
 .../apache/hadoop/io/compress/BZip2Codec.java   | 9 +-
 .../apache/hadoop/io/compress/DefaultCodec.java | 9 +-
 .../apache/hadoop/io/compress/GzipCodec.java| 9 +-
 .../hadoop/io/file/tfile/Compression.java   |14 +-
 .../org/apache/hadoop/ipc/ExternalCall.java |91 +
 .../main/java/org/apache/hadoop/ipc/Server.java |88 +-
 .../org/apache/hadoop/net/NetworkTopology.java  | 2 +-
 .../apache/hadoop/net/SocksSocketFactory.java   | 4 +-
 .../org/apache/hadoop/security/Credentials.java | 8 +-
 .../hadoop/security/KerberosAuthException.java  |   118 +
 .../hadoop/security/UGIExceptionMessages.java   |46 +
 .../hadoop/security/UserGroupInformation.java   |   105 +-
 .../org/apache/hadoop/security/token/Token.java |60 +-
 .../java/org/apache/hadoop/util/LineReader.java | 6 +-
 .../org/apache/hadoop/util/SysInfoWindows.java  |58 +-
 .../java/org/apache/hadoop/util/hash/Hash.java  | 6 +-
 .../src/main/resources/core-default.xml | 6 +-
 .../src/site/markdown/FileSystemShell.md| 3 +-
 .../src/site/markdown/filesystem/filesystem.md  |77 +-
 .../org/apache/hadoop/conf/TestConfServlet.java |   122 +-
 .../apache/hadoop/conf/TestConfiguration.java   |   140 +-
 .../apache/hadoop/fs/FileContextURIBase.java| 4 +-
 .../hadoop/fs/TestDFCachingGetSpaceUsed.java|75 +
 .../hadoop/fs/TestDelegationTokenRenewer.java   | 3 +-
 .../hadoop/fs/TestFileSystemInitialization.java |12 +-
 .../AbstractContractRootDirectoryTest.java  |34 +-
 .../fs/contract/AbstractFSContractTestBase.java | 2 +-
 .../hadoop/fs/contract/ContractTestUtils.java   |48 +-
 .../java/org/apache/hadoop/ipc/TestRPC.java |85 +
 .../org/apache/hadoop/net/ServerSocketUtil.java |23 +
 .../security/TestUserGroupInformation.java  |33 +-
 .../apache/hadoop/util/TestSysInfoWindows.java  | 7 +-
 .../hadoop/crypto/key/kms/server/KMS.java   |76 +-
 .../hadoop/crypto/key/kms/server/KMSWebApp.java | 2 +
 .../hadoop/crypto/key/kms/server/TestKMS.java   |76 +-
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml  | 4 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  | 9 +-
 .../org/apache/hadoop/hdfs/DataStreamer.java|   146 +-
 .../hadoop/hdfs/DistributedFileSystem.java  |30 +
 .../hdfs/client/CreateEncryptionZoneFlag.java   |70 +
 .../apache/hadoop/hdfs/client/HdfsAdmin.java|   536 +
 .../apache/hadoop/hdfs/client/HdfsUtils.java|86 +
 .../apache/hadoop/hdfs/client/package-info.java |27 +
 .../server/datanode/DiskBalancerWorkItem.java   | 2 +-
 .../hdfs/shortcircuit/ShortCircuitCache.java|88 +-
 .../hdfs/web/resources/AclPermissionParam.java  |23 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml  | 1 -
 .../jdiff/Apache_Hadoop_HDFS_2.7.2.xml  | 21704 +
 .../src/contrib/bkjournal/README.txt|66 -
 

[46/52] [abbrv] hadoop git commit: HDFS-10991. Export hdfsTruncateFile symbol in libhdfs. Contributed by Surendra Singh Lilhore.

2016-10-12 Thread cnauroth
HDFS-10991. Export hdfsTruncateFile symbol in libhdfs. Contributed by Surendra 
Singh Lilhore.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dacd3ec6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dacd3ec6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dacd3ec6

Branch: refs/heads/HADOOP-13037
Commit: dacd3ec66b111be24131957c986f0c748cf9ea26
Parents: 8a09bf7
Author: Andrew Wang 
Authored: Tue Oct 11 15:07:14 2016 -0700
Committer: Andrew Wang 
Committed: Tue Oct 11 15:07:14 2016 -0700

--
 .../src/main/native/libhdfs/include/hdfs/hdfs.h | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dacd3ec6/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h
index c856928..83c1c59 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h
@@ -493,6 +493,7 @@ extern  "C" {
  * complete before proceeding with further file updates.
  * -1 on error.
  */
+LIBHDFS_EXTERNAL
 int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength);
 
 /**





[51/52] [abbrv] hadoop git commit: YARN-4464. Lower the default max applications stored in the RM and store. (Daniel Templeton via kasha)

2016-10-12 Thread cnauroth
YARN-4464. Lower the default max applications stored in the RM and store. 
(Daniel Templeton via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6378845f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6378845f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6378845f

Branch: refs/heads/HADOOP-13037
Commit: 6378845f9ef789c3fda862c43bcd498aa3f35068
Parents: 7ba7092
Author: Karthik Kambatla 
Authored: Tue Oct 11 21:41:58 2016 -0700
Committer: Karthik Kambatla 
Committed: Tue Oct 11 21:42:08 2016 -0700

--
 .../hadoop/yarn/conf/YarnConfiguration.java | 20 
 .../src/main/resources/yarn-default.xml |  4 ++--
 .../server/resourcemanager/RMAppManager.java|  2 +-
 3 files changed, 19 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6378845f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 4d43357..3bd0dcc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -719,17 +719,29 @@ public class YarnConfiguration extends Configuration {
   + "leveldb-state-store.compaction-interval-secs";
   public static final long DEFAULT_RM_LEVELDB_COMPACTION_INTERVAL_SECS = 3600;
 
-  /** The maximum number of completed applications RM keeps. */ 
+  /**
+   * The maximum number of completed applications RM keeps. By default equals
+   * to {@link #DEFAULT_RM_MAX_COMPLETED_APPLICATIONS}.
+   */
   public static final String RM_MAX_COMPLETED_APPLICATIONS =
 RM_PREFIX + "max-completed-applications";
-  public static final int DEFAULT_RM_MAX_COMPLETED_APPLICATIONS = 10000;
+  public static final int DEFAULT_RM_MAX_COMPLETED_APPLICATIONS = 1000;
 
   /**
-   * The maximum number of completed applications RM state store keeps, by
-   * default equals to DEFAULT_RM_MAX_COMPLETED_APPLICATIONS
+   * The maximum number of completed applications RM state store keeps. By
+   * default equals to value of {@link #RM_MAX_COMPLETED_APPLICATIONS}.
*/
   public static final String RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS =
   RM_PREFIX + "state-store.max-completed-applications";
+  /**
+   * The default value for
+   * {@code yarn.resourcemanager.state-store.max-completed-applications}.
+   * @deprecated This default value is ignored and will be removed in a future
+   * release. The default value of
+   * {@code yarn.resourcemanager.state-store.max-completed-applications} is the
+   * value of {@link #RM_MAX_COMPLETED_APPLICATIONS}.
+   */
+  @Deprecated
   public static final int DEFAULT_RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS =
   DEFAULT_RM_MAX_COMPLETED_APPLICATIONS;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6378845f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 524afec..f37c689 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -417,7 +417,7 @@
 the applications remembered in RM memory.
 Any values larger than ${yarn.resourcemanager.max-completed-applications} 
will
 be reset to ${yarn.resourcemanager.max-completed-applications}.
-Note that this value impacts the RM recovery performance.Typically,
+Note that this value impacts the RM recovery performance. Typically,
 a smaller value indicates better performance on RM recovery.
 
 yarn.resourcemanager.state-store.max-completed-applications
@@ -687,7 +687,7 @@
   
 The maximum number of completed applications RM keeps. 

 yarn.resourcemanager.max-completed-applications
-10000
+1000
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6378845f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
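
Deployments that want the old retention depth back can override the lowered
default explicitly. A minimal sketch using the constants touched in this
change; 10000 is the previous default, used here only as an example value:

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    // Overrides the lowered retention defaults programmatically; the same
    // keys can of course be set in yarn-site.xml instead.
    public class CompletedAppsConfigSketch {
      public static void main(String[] args) {
        YarnConfiguration conf = new YarnConfiguration();
        conf.setInt(YarnConfiguration.RM_MAX_COMPLETED_APPLICATIONS, 10000);
        // The state-store limit now defaults to the value above; a smaller
        // value trades history depth for faster RM recovery.
        conf.setInt(
            YarnConfiguration.RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS, 1000);
        System.out.println(
            conf.getInt(YarnConfiguration.RM_MAX_COMPLETED_APPLICATIONS, -1));
      }
    }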

[13/52] [abbrv] hadoop git commit: YARN-5659. getPathFromYarnURL should use standard methods. Contributed by Sergey Shelukhin.

2016-10-12 Thread cnauroth
YARN-5659. getPathFromYarnURL should use standard methods. Contributed by 
Sergey Shelukhin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/459a4833
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/459a4833
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/459a4833

Branch: refs/heads/HADOOP-13037
Commit: 459a4833a90437a52787a41c2759a4b18cfe411c
Parents: ebd4f39
Author: Junping Du 
Authored: Fri Oct 7 07:46:08 2016 -0700
Committer: Junping Du 
Committed: Fri Oct 7 07:46:08 2016 -0700

--
 .../org/apache/hadoop/yarn/api/records/URL.java | 58 ++--
 .../apache/hadoop/yarn/api/records/TestURL.java | 99 
 2 files changed, 130 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/459a4833/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/URL.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/URL.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/URL.java
index aa28585..19bfc32 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/URL.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/URL.java
@@ -18,11 +18,15 @@
 
 package org.apache.hadoop.yarn.api.records;
 
+import com.google.common.annotations.VisibleForTesting;
+
 import java.net.URI;
 import java.net.URISyntaxException;
 
 import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
 import org.apache.hadoop.yarn.util.Records;
@@ -52,7 +56,7 @@ public abstract class URL {
   @Public
   @Stable
   public abstract String getScheme();
-  
+
   /**
* Set the scheme of the URL
* @param scheme scheme of the URL
@@ -68,7 +72,7 @@ public abstract class URL {
   @Public
   @Stable
   public abstract String getUserInfo();
-  
+
   /**
* Set the user info of the URL.
* @param userInfo user info of the URL
@@ -84,7 +88,7 @@ public abstract class URL {
   @Public
   @Stable
   public abstract String getHost();
-  
+
   /**
* Set the host of the URL.
* @param host host of the URL
@@ -100,7 +104,7 @@ public abstract class URL {
   @Public
   @Stable
   public abstract int getPort();
-  
+
   /**
* Set the port of the URL
* @param port port of the URL
@@ -116,7 +120,7 @@ public abstract class URL {
   @Public
   @Stable
   public abstract String getFile();
-  
+
   /**
* Set the file of the URL.
* @param file file of the URL
@@ -124,32 +128,20 @@ public abstract class URL {
   @Public
   @Stable
   public abstract void setFile(String file);
-  
+
   @Public
   @Stable
   public Path toPath() throws URISyntaxException {
-String scheme = getScheme() == null ? "" : getScheme();
-
-String authority = "";
-if (getHost() != null) {
-  authority = getHost();
-  if (getUserInfo() != null) {
-authority = getUserInfo() + "@" + authority;
-  }
-  if (getPort() > 0) {
-authority += ":" + getPort();
-  }
-}
-
-return new Path(
-(new URI(scheme, authority, getFile(), null, null)).normalize());
+return new Path(new URI(getScheme(), getUserInfo(),
+  getHost(), getPort(), getFile(), null, null));
   }
-  
-  @Public
-  @Stable
-  public static URL fromURI(URI uri) {
+
+
+  @Private
+  @VisibleForTesting
+  public static URL fromURI(URI uri, Configuration conf) {
 URL url =
-RecordFactoryProvider.getRecordFactory(null).newRecordInstance(
+RecordFactoryProvider.getRecordFactory(conf).newRecordInstance(
 URL.class);
 if (uri.getHost() != null) {
   url.setHost(uri.getHost());
@@ -162,7 +154,19 @@ public abstract class URL {
 url.setFile(uri.getPath());
 return url;
   }
-  
+
+  @Public
+  @Stable
+  public static URL fromURI(URI uri) {
+return fromURI(uri, null);
+  }
+
+  @Private
+  @VisibleForTesting
+  public static URL fromPath(Path path, Configuration conf) {
+return fromURI(path.toUri(), conf);
+  }
+
   @Public
   @Stable
   public static URL fromPath(Path path) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/459a4833/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/records/TestURL.java
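
The rewrite delegates to the seven-argument java.net.URI constructor, which
handles the userinfo/port assembly and escaping that the old hand-rolled
string concatenation only approximated. A sketch of the round trip through
the public fromPath/toPath pair; the HDFS URI is illustrative:

    import java.net.URISyntaxException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.api.records.URL;

    // Round-trips a Path through the YARN URL record.
    public class UrlRoundTripSketch {
      public static void main(String[] args) throws URISyntaxException {
        Path original =
            new Path("hdfs://nn.example.com:8020/user/test/job.jar");
        URL url = URL.fromPath(original);
        Path back = url.toPath();  // built via the standard URI constructor
        System.out.println(back.equals(original));  // expected: true
      }
    }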

[16/52] [abbrv] hadoop git commit: HDFS-10933. Refactor TestFsck. Contributed by Takanobu Asanuma.

2016-10-12 Thread cnauroth
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3059b251/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
index 4b7eebd..aa41e9b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
@@ -57,8 +57,11 @@ import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
 import com.google.common.base.Supplier;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.FileSystem;
@@ -74,7 +77,6 @@ import org.apache.hadoop.hdfs.DFSOutputStream;
 import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
-import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.StripedFileTestUtil;
@@ -116,44 +118,49 @@ import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
 import org.apache.log4j.PatternLayout;
 import org.apache.log4j.RollingFileAppender;
+import org.junit.After;
 import org.junit.Assert;
+import org.junit.Before;
 import org.junit.Test;
 
 import com.google.common.collect.Sets;
 
 /**
- * A JUnit test for doing fsck
+ * A JUnit test for doing fsck.
  */
 public class TestFsck {
+  private static final Log LOG =
+  LogFactory.getLog(TestFsck.class.getName());
+
   static final String AUDITLOG_FILE =
   GenericTestUtils.getTempPath("TestFsck-audit.log");
   
   // Pattern for: 
   // allowed=true ugi=name ip=/address cmd=FSCK src=/ dst=null perm=null
-  static final Pattern fsckPattern = Pattern.compile(
+  static final Pattern FSCK_PATTERN = Pattern.compile(
   "allowed=.*?\\s" +
   "ugi=.*?\\s" + 
   "ip=/\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\s" + 
   "cmd=fsck\\ssrc=\\/\\sdst=null\\s" + 
   "perm=null\\s" + "proto=.*");
-  static final Pattern getfileinfoPattern = Pattern.compile(
+  static final Pattern GET_FILE_INFO_PATTERN = Pattern.compile(
   "allowed=.*?\\s" +
   "ugi=.*?\\s" + 
   "ip=/\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\s" + 
   "cmd=getfileinfo\\ssrc=\\/\\sdst=null\\s" + 
   "perm=null\\s" + "proto=.*");
 
-  static final Pattern numMissingBlocksPattern = Pattern.compile(
+  static final Pattern NUM_MISSING_BLOCKS_PATTERN = Pattern.compile(
   ".*Missing blocks:\t\t([0123456789]*).*");
 
-  static final Pattern numCorruptBlocksPattern = Pattern.compile(
+  static final Pattern NUM_CORRUPT_BLOCKS_PATTERN = Pattern.compile(
   ".*Corrupt blocks:\t\t([0123456789]*).*");
   
   private static final String LINE_SEPARATOR =
-System.getProperty("line.separator");
+  System.getProperty("line.separator");
 
   static String runFsck(Configuration conf, int expectedErrCode, 
-boolean checkErrorCode,String... path)
+boolean checkErrorCode, String... path)
 throws Exception {
 ByteArrayOutputStream bStream = new ByteArrayOutputStream();
 PrintStream out = new PrintStream(bStream, true);
@@ -163,60 +170,72 @@ public class TestFsck {
   assertEquals(expectedErrCode, errCode);
 }
 GenericTestUtils.setLogLevel(FSPermissionChecker.LOG, Level.INFO);
-FSImage.LOG.info("OUTPUT = " + bStream.toString());
+LOG.info("OUTPUT = " + bStream.toString());
 return bStream.toString();
   }
 
-  /** do fsck */
+  private MiniDFSCluster cluster = null;
+  private Configuration conf = null;
+
+  @Before
+  public void setUp() throws Exception {
+conf = new Configuration();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+shutdownCluster();
+  }
+
+  private void shutdownCluster() throws Exception {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+  /** do fsck. */
   @Test
   public void testFsck() throws Exception {
 DFSTestUtil util = new DFSTestUtil.Builder().setName("TestFsck").
 setNumFiles(20).build();
-MiniDFSCluster cluster = null;
 FileSystem fs = null;
-try {
-  Configuration conf = new HdfsConfiguration();
-  final long precision = 1L;
-  conf.setLong(DFSConfigKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_KEY, precision);
-  conf.setLong(DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY, 
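
The hunk above is cut off by the digest, but the shape of the refactor is already visible: the per-test try/finally cluster boilerplate moves into the @Before/@After fixtures shown earlier. A hedged sketch of what a test body looks like after the change, as it would sit inside TestFsck and rely on the conf/cluster fields and runFsck helper above (the expected output marker "is HEALTHY" is an assumption, not quoted from the patch):

  @Test
  public void testFsckOnRoot() throws Exception {
    // conf comes from setUp(); cluster is shut down by tearDown().
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    cluster.waitActive();
    // runFsck asserts the exit code itself when checkErrorCode is true.
    String outStr = runFsck(conf, 0, true, "/");
    assertTrue(outStr.contains("is HEALTHY"));
  }
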

[29/52] [abbrv] hadoop git commit: HADOOP-13641. Update UGI#spawnAutoRenewalThreadForUserCreds to reduce indentation. Contributed by Huafeng Wang

2016-10-12 Thread cnauroth
HADOOP-13641. Update UGI#spawnAutoRenewalThreadForUserCreds to reduce 
indentation. Contributed by Huafeng Wang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3d59b18d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3d59b18d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3d59b18d

Branch: refs/heads/HADOOP-13037
Commit: 3d59b18d49d98a293ae14c5b89d515ef83cc4ff7
Parents: bea004e
Author: Kai Zheng 
Authored: Sun Oct 9 15:53:36 2016 +0600
Committer: Kai Zheng 
Committed: Sun Oct 9 15:53:36 2016 +0600

--
 .../hadoop/security/UserGroupInformation.java   | 98 ++--
 1 file changed, 49 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d59b18d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 329859d..e8711b0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -946,60 +946,60 @@ public class UserGroupInformation {
 
   /**Spawn a thread to do periodic renewals of kerberos credentials*/
   private void spawnAutoRenewalThreadForUserCreds() {
-if (isSecurityEnabled()) {
-  //spawn thread only if we have kerb credentials
-  if (user.getAuthenticationMethod() == AuthenticationMethod.KERBEROS &&
-  !isKeytab) {
-Thread t = new Thread(new Runnable() {
-  
-  @Override
-  public void run() {
-String cmd = conf.get("hadoop.kerberos.kinit.command",
-  "kinit");
-KerberosTicket tgt = getTGT();
+if (!isSecurityEnabled()
+|| user.getAuthenticationMethod() != AuthenticationMethod.KERBEROS
+|| isKeytab) {
+  return;
+}
+
+//spawn thread only if we have kerb credentials
+Thread t = new Thread(new Runnable() {
+
+  @Override
+  public void run() {
+String cmd = conf.get("hadoop.kerberos.kinit.command", "kinit");
+KerberosTicket tgt = getTGT();
+if (tgt == null) {
+  return;
+}
+long nextRefresh = getRefreshTime(tgt);
+while (true) {
+  try {
+long now = Time.now();
+if (LOG.isDebugEnabled()) {
+  LOG.debug("Current time is " + now);
+  LOG.debug("Next refresh is " + nextRefresh);
+}
+if (now < nextRefresh) {
+  Thread.sleep(nextRefresh - now);
+}
+Shell.execCommand(cmd, "-R");
+if (LOG.isDebugEnabled()) {
+  LOG.debug("renewed ticket");
+}
+reloginFromTicketCache();
+tgt = getTGT();
 if (tgt == null) {
+  LOG.warn("No TGT after renewal. Aborting renew thread for " +
+  getUserName());
   return;
 }
-long nextRefresh = getRefreshTime(tgt);
-while (true) {
-  try {
-long now = Time.now();
-if(LOG.isDebugEnabled()) {
-  LOG.debug("Current time is " + now);
-  LOG.debug("Next refresh is " + nextRefresh);
-}
-if (now < nextRefresh) {
-  Thread.sleep(nextRefresh - now);
-}
-Shell.execCommand(cmd, "-R");
-if(LOG.isDebugEnabled()) {
-  LOG.debug("renewed ticket");
-}
-reloginFromTicketCache();
-tgt = getTGT();
-if (tgt == null) {
-  LOG.warn("No TGT after renewal. Aborting renew thread for " +
-   getUserName());
-  return;
-}
-nextRefresh = Math.max(getRefreshTime(tgt),
-   now + kerberosMinSecondsBeforeRelogin);
-  } catch (InterruptedException ie) {
-LOG.warn("Terminating renewal thread");
-return;
-  } catch (IOException ie) {
-LOG.warn("Exception encountered while running the" +
-" renewal command. Aborting renew thread. " + ie);
-return;
-  }
-}
+nextRefresh = Math.max(getRefreshTime(tgt),
+  now + 
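
The message is truncated here, but the pattern of HADOOP-13641 is clear from the hunk: invert the enclosing condition and return early, so the renewal loop loses two levels of nesting. A self-contained toy version of the same transformation (the Ticket type is invented for illustration):

interface Ticket {
  boolean isKerberos();
  boolean fromKeytab();
}

class GuardClauseDemo {
  // Before: the whole body nests inside the positive check.
  void renewNested(Ticket t) {
    if (t != null && t.isKerberos() && !t.fromKeytab()) {
      System.out.println("renewing");
    }
  }

  // After: invert the condition and bail out early; the body moves to
  // the top indentation level, as in spawnAutoRenewalThreadForUserCreds.
  void renewEarlyReturn(Ticket t) {
    if (t == null || !t.isKerberos() || t.fromKeytab()) {
      return;
    }
    System.out.println("renewing");
  }
}
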

[40/52] [abbrv] hadoop git commit: HDFS-10637. Modifications to remove the assumption that FsVolumes are backed by java.io.File. (Virajith Jalaparti via lei)

2016-10-12 Thread cnauroth
HDFS-10637. Modifications to remove the assumption that FsVolumes are backed by 
java.io.File. (Virajith Jalaparti via lei)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/96b12662
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/96b12662
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/96b12662

Branch: refs/heads/HADOOP-13037
Commit: 96b12662ea76e3ded4ef13944fc8df206cfb4613
Parents: 0773ffd
Author: Lei Xu 
Authored: Mon Oct 10 15:28:19 2016 -0700
Committer: Lei Xu 
Committed: Mon Oct 10 15:30:03 2016 -0700

--
 .../hadoop/hdfs/server/common/Storage.java  |  22 ++
 .../server/datanode/BlockPoolSliceStorage.java  |  20 +-
 .../hdfs/server/datanode/BlockScanner.java  |   8 +-
 .../hadoop/hdfs/server/datanode/DataNode.java   |  34 +-
 .../hdfs/server/datanode/DataStorage.java   |  34 +-
 .../hdfs/server/datanode/DirectoryScanner.java  | 320 +--
 .../hdfs/server/datanode/DiskBalancer.java  |  25 +-
 .../hdfs/server/datanode/LocalReplica.java  |   2 +-
 .../hdfs/server/datanode/ReplicaInfo.java   |   2 +-
 .../hdfs/server/datanode/StorageLocation.java   |  32 +-
 .../hdfs/server/datanode/VolumeScanner.java |  27 +-
 .../server/datanode/fsdataset/FsDatasetSpi.java |   5 +-
 .../server/datanode/fsdataset/FsVolumeSpi.java  | 234 +-
 .../impl/FsDatasetAsyncDiskService.java |  40 ++-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 136 
 .../datanode/fsdataset/impl/FsVolumeImpl.java   | 233 --
 .../fsdataset/impl/FsVolumeImplBuilder.java |  65 
 .../datanode/fsdataset/impl/FsVolumeList.java   |  44 +--
 .../impl/RamDiskAsyncLazyPersistService.java|  79 +++--
 .../fsdataset/impl/VolumeFailureInfo.java   |  13 +-
 .../hdfs/server/namenode/FSNamesystem.java  |   2 +-
 .../TestNameNodePrunesMissingStorages.java  |  15 +-
 .../server/datanode/SimulatedFSDataset.java |  46 ++-
 .../hdfs/server/datanode/TestBlockScanner.java  |   3 +-
 .../datanode/TestDataNodeHotSwapVolumes.java|  15 +-
 .../datanode/TestDataNodeVolumeFailure.java |  12 +-
 .../TestDataNodeVolumeFailureReporting.java |  10 +
 .../server/datanode/TestDirectoryScanner.java   |  76 +++--
 .../hdfs/server/datanode/TestDiskError.java |   2 +-
 .../extdataset/ExternalDatasetImpl.java |  10 +-
 .../datanode/extdataset/ExternalVolumeImpl.java |  44 ++-
 .../fsdataset/impl/FsDatasetImplTestUtils.java  |   9 +-
 .../fsdataset/impl/TestFsDatasetImpl.java   |  69 ++--
 .../fsdataset/impl/TestFsVolumeList.java|  83 +++--
 .../TestDiskBalancerWithMockMover.java  |   4 +-
 35 files changed, 1062 insertions(+), 713 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/96b12662/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 9218e9d..e55de35 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
 import org.apache.hadoop.io.nativeio.NativeIO;
 import org.apache.hadoop.io.nativeio.NativeIOException;
 import org.apache.hadoop.util.ToolRunner;
@@ -269,11 +270,17 @@ public abstract class Storage extends StorageInfo {
 
 private String storageUuid = null;  // Storage directory identifier.
 
+private final StorageLocation location;
 public StorageDirectory(File dir) {
   // default dirType is null
   this(dir, null, false);
 }
 
+public StorageDirectory(StorageLocation location) {
+  // default dirType is null
+  this(location.getFile(), null, false, location);
+}
+
 public StorageDirectory(File dir, StorageDirType dirType) {
   this(dir, dirType, false);
 }
@@ -294,11 +301,22 @@ public abstract class Storage extends StorageInfo {
 *  disables locking on the storage directory, false enables locking
  */
public StorageDirectory(File dir, StorageDirType dirType, boolean isShared) {
+  this(dir, dirType, isShared, null);
+}
+
+public StorageDirectory(File dir, 
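
The constructor work above follows the usual telescoping pattern: existing overloads keep compiling and all delegate to one constructor that additionally records the new StorageLocation. A toy, self-contained version of that shape (class and field names invented):

class Dir {
  private final String path;
  private final String location;   // stands in for StorageLocation

  Dir(String path) {
    this(path, null);              // old callers: no location, as before
  }

  Dir(String path, String location) {
    this.path = path;
    this.location = location;
  }
}
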

[36/52] [abbrv] hadoop git commit: HDFS-10985. o.a.h.ha.TestZKFailoverController should not use fixed time sleep before assertions. Contributed by Mingliang Liu

2016-10-12 Thread cnauroth
HDFS-10985. o.a.h.ha.TestZKFailoverController should not use fixed time sleep 
before assertions. Contributed by Mingliang Liu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c874fa91
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c874fa91
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c874fa91

Branch: refs/heads/HADOOP-13037
Commit: c874fa914dfbf07d1731f5e87398607366675879
Parents: b963818
Author: Mingliang Liu 
Authored: Fri Oct 7 17:03:08 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Oct 10 13:33:07 2016 -0700

--
 .../hadoop/ha/TestZKFailoverController.java | 34 
 1 file changed, 21 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c874fa91/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverController.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverController.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverController.java
index 164167c..846c8ae 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverController.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverController.java
@@ -21,6 +21,7 @@ import static org.junit.Assert.*;
 
 import java.security.NoSuchAlgorithmException;
 
+import com.google.common.base.Supplier;
 import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
@@ -441,12 +442,16 @@ public class TestZKFailoverController extends ClientBaseWithFixes {
 cluster.getService(0).getZKFCProxy(conf, 5000).gracefulFailover();
 cluster.waitForActiveLockHolder(0);
 
-Thread.sleep(1); // allow to quiesce
+GenericTestUtils.waitFor(new Supplier<Boolean>() {
+  @Override
+  public Boolean get() {
+return cluster.getService(0).fenceCount == 0 &&
+cluster.getService(1).fenceCount == 0 &&
+cluster.getService(0).activeTransitionCount == 2 &&
+cluster.getService(1).activeTransitionCount == 1;
+  }
+}, 100, 60 * 1000);
 
-assertEquals(0, cluster.getService(0).fenceCount);
-assertEquals(0, cluster.getService(1).fenceCount);
-assertEquals(2, cluster.getService(0).activeTransitionCount);
-assertEquals(1, cluster.getService(1).activeTransitionCount);
   }
 
   @Test
@@ -590,14 +595,17 @@ public class TestZKFailoverController extends ClientBaseWithFixes {
 cluster.getService(0).getZKFCProxy(conf, 5000).gracefulFailover();
 cluster.waitForActiveLockHolder(0);
 
-Thread.sleep(1); // allow to quiesce
-
-assertEquals(0, cluster.getService(0).fenceCount);
-assertEquals(0, cluster.getService(1).fenceCount);
-assertEquals(0, cluster.getService(2).fenceCount);
-assertEquals(2, cluster.getService(0).activeTransitionCount);
-assertEquals(1, cluster.getService(1).activeTransitionCount);
-assertEquals(1, cluster.getService(2).activeTransitionCount);
+GenericTestUtils.waitFor(new Supplier<Boolean>() {
+  @Override
+  public Boolean get() {
+return cluster.getService(0).fenceCount == 0 &&
+cluster.getService(1).fenceCount == 0 &&
+cluster.getService(2).fenceCount == 0 &&
+cluster.getService(0).activeTransitionCount == 2 &&
+cluster.getService(1).activeTransitionCount == 1 &&
+cluster.getService(2).activeTransitionCount == 1;
+  }
+}, 100, 60 * 1000);
   }
 
   private int runFC(DummyHAService target, String ... args) throws Exception {
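
For readers who have not used GenericTestUtils.waitFor, the idea is to poll a condition at a short interval up to a deadline instead of sleeping a fixed time and hoping the cluster has quiesced. A self-contained sketch of the same idiom without the Hadoop test dependencies (all names invented; the 100 ms / 60 s parameters mirror the ones the patch passes):

import java.util.concurrent.atomic.AtomicBoolean;

class WaitForDemo {
  interface Check { boolean ready(); }

  static void waitFor(Check check, long everyMs, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!check.ready()) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("Timed out waiting for condition");
      }
      Thread.sleep(everyMs);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    AtomicBoolean done = new AtomicBoolean();
    new Thread(() -> done.set(true)).start();
    // Poll every 100 ms, give up after 60 s.
    waitFor(done::get, 100, 60_000);
    System.out.println("condition met");
  }
}
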





[32/52] [abbrv] hadoop git commit: HDFS-10972. Add unit test for HDFS command 'dfsadmin -getDatanodeInfo'. Contributed by Xiaobing Zhou

2016-10-12 Thread cnauroth
HDFS-10972. Add unit test for HDFS command 'dfsadmin -getDatanodeInfo'. 
Contributed by Xiaobing Zhou


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3441c746
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3441c746
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3441c746

Branch: refs/heads/HADOOP-13037
Commit: 3441c746b5f35c46fca5a0f252c86c8357fe932e
Parents: cef61d5
Author: Mingliang Liu 
Authored: Mon Oct 10 11:33:37 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Oct 10 11:33:37 2016 -0700

--
 .../apache/hadoop/hdfs/tools/TestDFSAdmin.java  | 124 +--
 1 file changed, 113 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3441c746/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
index e71c5cc..94ecb9e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
@@ -30,12 +30,14 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.ReconfigurationUtil;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.server.common.Storage;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.ToolRunner;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -68,6 +70,10 @@ public class TestDFSAdmin {
   private DFSAdmin admin;
   private DataNode datanode;
   private NameNode namenode;
+  private final ByteArrayOutputStream out = new ByteArrayOutputStream();
+  private final ByteArrayOutputStream err = new ByteArrayOutputStream();
+  private static final PrintStream OLD_OUT = System.out;
+  private static final PrintStream OLD_ERR = System.err;
 
   @Before
   public void setUp() throws Exception {
@@ -77,12 +83,32 @@ public class TestDFSAdmin {
 admin = new DFSAdmin();
   }
 
+  private void redirectStream() {
+System.setOut(new PrintStream(out));
+System.setErr(new PrintStream(err));
+  }
+
+  private void resetStream() {
+out.reset();
+err.reset();
+  }
+
   @After
   public void tearDown() throws Exception {
+try {
+  System.out.flush();
+  System.err.flush();
+} finally {
+  System.setOut(OLD_OUT);
+  System.setErr(OLD_ERR);
+}
+
 if (cluster != null) {
   cluster.shutdown();
   cluster = null;
 }
+
+resetStream();
   }
 
   private void restartCluster() throws IOException {
@@ -111,28 +137,104 @@ public class TestDFSAdmin {
   String nodeType, String address, final List<String> outs,
   final List<String> errs) throws IOException {
 ByteArrayOutputStream bufOut = new ByteArrayOutputStream();
-PrintStream out = new PrintStream(bufOut);
+PrintStream outStream = new PrintStream(bufOut);
 ByteArrayOutputStream bufErr = new ByteArrayOutputStream();
-PrintStream err = new PrintStream(bufErr);
+PrintStream errStream = new PrintStream(bufErr);
 
 if (methodName.equals("getReconfigurableProperties")) {
-  admin.getReconfigurableProperties(nodeType, address, out, err);
+  admin.getReconfigurableProperties(
+  nodeType,
+  address,
+  outStream,
+  errStream);
 } else if (methodName.equals("getReconfigurationStatus")) {
-  admin.getReconfigurationStatus(nodeType, address, out, err);
+  admin.getReconfigurationStatus(nodeType, address, outStream, errStream);
 } else if (methodName.equals("startReconfiguration")) {
-  admin.startReconfiguration(nodeType, address, out, err);
+  admin.startReconfiguration(nodeType, address, outStream, errStream);
 }
 
-Scanner scanner = new Scanner(bufOut.toString());
+scanIntoList(bufOut, outs);
+scanIntoList(bufErr, errs);
+  }
+
+  private static void scanIntoList(
+  final ByteArrayOutputStream baos,
+  final List<String> list) {
+final Scanner scanner = new Scanner(baos.toString());
 while (scanner.hasNextLine()) {
-  outs.add(scanner.nextLine());
+  list.add(scanner.nextLine());
 }
 scanner.close();
-
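
The message truncates here, but redirectStream/scanIntoList above already describe a complete idiom: swap System.out for a buffer, run the command, restore the stream, and split the captured bytes into lines. A standalone sketch of that capture-and-scan pattern (the printed strings are invented):

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

class CaptureDemo {
  public static void main(String[] args) {
    PrintStream oldOut = System.out;
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    System.setOut(new PrintStream(buf, true));
    try {
      System.out.println("line one");
      System.out.println("line two");
    } finally {
      System.out.flush();
      System.setOut(oldOut);  // always restore, as the @After method does
    }
    List<String> lines = new ArrayList<>();
    Scanner scanner = new Scanner(buf.toString());
    while (scanner.hasNextLine()) {
      lines.add(scanner.nextLine());
    }
    scanner.close();
    System.out.println(lines);  // [line one, line two]
  }
}
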

[18/52] [abbrv] hadoop git commit: HADOOP-13692. hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent classpath conflicts. Contributed by Chris Nauroth.

2016-10-12 Thread cnauroth
HADOOP-13692. hadoop-aws should declare explicit dependency on Jackson 2 jars 
to prevent classpath conflicts. Contributed by Chris Nauroth.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/69620f95
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/69620f95
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/69620f95

Branch: refs/heads/HADOOP-13037
Commit: 69620f955997250d1b543d86d4907ee50218152a
Parents: 3059b25
Author: Chris Nauroth 
Authored: Fri Oct 7 11:41:19 2016 -0700
Committer: Chris Nauroth 
Committed: Fri Oct 7 11:41:19 2016 -0700

--
 hadoop-tools/hadoop-aws/pom.xml | 12 
 1 file changed, 12 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/69620f95/hadoop-tools/hadoop-aws/pom.xml
--
diff --git a/hadoop-tools/hadoop-aws/pom.xml b/hadoop-tools/hadoop-aws/pom.xml
index 49b0379..1c1bb02 100644
--- a/hadoop-tools/hadoop-aws/pom.xml
+++ b/hadoop-tools/hadoop-aws/pom.xml
@@ -286,6 +286,18 @@
       <scope>compile</scope>
     </dependency>
     <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-core</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-databind</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-annotations</artifactId>
+    </dependency>
+    <dependency>
       <groupId>joda-time</groupId>
       <artifactId>joda-time</artifactId>
     </dependency>





[45/52] [abbrv] hadoop git commit: HADOOP-13705. Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code.

2016-10-12 Thread cnauroth
HADOOP-13705. Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and 
initialize code.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8a09bf7c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8a09bf7c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8a09bf7c

Branch: refs/heads/HADOOP-13037
Commit: 8a09bf7c19d9d2f6d6853d45e11b0d38c7c67f2a
Parents: 4b32b14
Author: Andrew Wang 
Authored: Tue Oct 11 13:46:07 2016 -0700
Committer: Andrew Wang 
Committed: Tue Oct 11 13:46:07 2016 -0700

--
 .../java/org/apache/hadoop/fs/TrashPolicy.java  | 30 
 .../apache/hadoop/fs/TrashPolicyDefault.java| 15 ++
 .../java/org/apache/hadoop/fs/TestTrash.java|  4 +++
 3 files changed, 49 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a09bf7c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
index bd99db4..157b9ab 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
@@ -38,6 +38,17 @@ public abstract class TrashPolicy extends Configured {
 
   /**
* Used to setup the trash policy. Must be implemented by all TrashPolicy
+   * implementations.
+   * @param conf the configuration to be used
+   * @param fs the filesystem to be used
+   * @param home the home directory
+   * @deprecated Use {@link #initialize(Configuration, FileSystem)} instead.
+   */
+  @Deprecated
+  public abstract void initialize(Configuration conf, FileSystem fs, Path home);
+
+  /**
+   * Used to setup the trash policy. Must be implemented by all TrashPolicy
* implementations. Different from initialize(conf, fs, home), this one does
* not assume trash always under /user/$USER due to HDFS encryption zone.
* @param conf the configuration to be used
@@ -105,6 +116,25 @@ public abstract class TrashPolicy extends Configured {
*
* @param conf the configuration to be used
* @param fs the file system to be used
+   * @param home the home directory
+   * @return an instance of TrashPolicy
+   * @deprecated Use {@link #getInstance(Configuration, FileSystem)} instead.
+   */
+  @Deprecated
+  public static TrashPolicy getInstance(Configuration conf, FileSystem fs, Path home) {
+Class<? extends TrashPolicy> trashClass = conf.getClass(
+"fs.trash.classname", TrashPolicyDefault.class, TrashPolicy.class);
+TrashPolicy trash = ReflectionUtils.newInstance(trashClass, conf);
+trash.initialize(conf, fs, home); // initialize TrashPolicy
+return trash;
+  }
+
+  /**
+   * Get an instance of the configured TrashPolicy based on the value
+   * of the configuration parameter fs.trash.classname.
+   *
+   * @param conf the configuration to be used
+   * @param fs the file system to be used
* @return an instance of TrashPolicy
*/
   public static TrashPolicy getInstance(Configuration conf, FileSystem fs)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a09bf7c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
index f4a825c..7be 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
@@ -75,6 +75,21 @@ public class TrashPolicyDefault extends TrashPolicy {
 initialize(conf, fs);
   }
 
+  /**
+   * @deprecated Use {@link #initialize(Configuration, FileSystem)} instead.
+   */
+  @Override
+  @Deprecated
+  public void initialize(Configuration conf, FileSystem fs, Path home) {
+this.fs = fs;
+this.deletionInterval = (long)(conf.getFloat(
+FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT)
+* MSECS_PER_MINUTE);
+this.emptierInterval = (long)(conf.getFloat(
+FS_TRASH_CHECKPOINT_INTERVAL_KEY, FS_TRASH_CHECKPOINT_INTERVAL_DEFAULT)
+* MSECS_PER_MINUTE);
+   }
+
   @Override
   public void initialize(Configuration conf, FileSystem fs) {
 this.fs = fs;
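
Taken together, the restored methods give callers two entry points. A hedged sketch of how both are used; the wrapper class is invented, and only the two getInstance overloads come from the diff above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.TrashPolicy;

class TrashPolicyDemo {
  static TrashPolicy pick(Configuration conf, FileSystem fs, Path home) {
    // Restored, deprecated form: kept for callers that still pass a
    // per-user home directory.
    @SuppressWarnings("deprecation")
    TrashPolicy legacy = TrashPolicy.getInstance(conf, fs, home);

    // Preferred form: does not assume trash lives under /user/$USER.
    TrashPolicy current = TrashPolicy.getInstance(conf, fs);
    return current;
  }
}
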


[48/52] [abbrv] hadoop git commit: HDFS-10903. Replace config key literal strings with config key names II: hadoop hdfs. Contributed by Chen Liang

2016-10-12 Thread cnauroth
HDFS-10903. Replace config key literal strings with config key names II: hadoop 
hdfs. Contributed by Chen Liang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c9a0106
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c9a0106
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c9a0106

Branch: refs/heads/HADOOP-13037
Commit: 3c9a01062e9097c2ed1db75318482543db2e382f
Parents: 61f0490
Author: Mingliang Liu 
Authored: Tue Oct 11 16:29:30 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Oct 11 16:29:30 2016 -0700

--
 .../java/org/apache/hadoop/fs/http/server/FSOperations.java | 9 +++--
 .../hadoop/lib/service/hadoop/FileSystemAccessService.java  | 6 --
 .../src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java | 3 +++
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml | 8 
 .../test/java/org/apache/hadoop/hdfs/TestFileAppend4.java   | 3 ++-
 .../hdfs/server/blockmanagement/TestBlockTokenWithDFS.java  | 3 ++-
 6 files changed, 26 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c9a0106/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 46948f9..001bc92 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -48,6 +48,9 @@ import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 
+import static org.apache.hadoop.hdfs.DFSConfigKeys.HTTPFS_BUFFER_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.HTTP_BUFFER_SIZE_DEFAULT;
+
 /**
  * FileSystem operation executors used by {@link HttpFSServer}.
  */
@@ -462,7 +465,8 @@ public class FSOperations {
 blockSize = fs.getDefaultBlockSize(path);
   }
   FsPermission fsPermission = new FsPermission(permission);
-  int bufferSize = fs.getConf().getInt("httpfs.buffer.size", 4096);
+  int bufferSize = fs.getConf().getInt(HTTPFS_BUFFER_SIZE_KEY,
+  HTTP_BUFFER_SIZE_DEFAULT);
   OutputStream os = fs.create(path, fsPermission, override, bufferSize, replication, blockSize, null);
   IOUtils.copyBytes(is, os, bufferSize, true);
   os.close();
@@ -752,7 +756,8 @@ public class FSOperations {
  */
 @Override
 public InputStream execute(FileSystem fs) throws IOException {
-  int bufferSize = HttpFSServerWebApp.get().getConfig().getInt("httpfs.buffer.size", 4096);
+  int bufferSize = HttpFSServerWebApp.get().getConfig().getInt(
+  HTTPFS_BUFFER_SIZE_KEY, HTTP_BUFFER_SIZE_DEFAULT);
   return fs.open(path, bufferSize);
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c9a0106/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
index 0b767be..61d3b45 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
@@ -50,6 +50,8 @@ import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
+
 @InterfaceAudience.Private
public class FileSystemAccessService extends BaseService implements FileSystemAccess {
  private static final Logger LOG = LoggerFactory.getLogger(FileSystemAccessService.class);
@@ -159,7 +161,7 @@ public class FileSystemAccessService extends BaseService implements FileSystemAc
throw new ServiceException(FileSystemAccessException.ERROR.H01, KERBEROS_PRINCIPAL);
   }
   Configuration conf = new Configuration();
-  conf.set("hadoop.security.authentication", "kerberos");
+  conf.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
   UserGroupInformation.setConfiguration(conf);
   try {
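
The message is truncated, but the pattern repeats across all three files in this change: replace a bare string literal with the named constant. A small sketch of the before/after shape using the constants introduced above (the wrapper class is invented):

import org.apache.hadoop.conf.Configuration;
import static org.apache.hadoop.hdfs.DFSConfigKeys.HTTPFS_BUFFER_SIZE_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.HTTP_BUFFER_SIZE_DEFAULT;

class ConfigKeyDemo {
  static int bufferSize(Configuration conf) {
    // Before: a bare literal, easy to mistype and invisible to tooling.
    int fragile = conf.getInt("httpfs.buffer.size", 4096);

    // After: the named constant; a typo becomes a compile error and
    // every reader of the key resolves to one definition.
    return conf.getInt(HTTPFS_BUFFER_SIZE_KEY, HTTP_BUFFER_SIZE_DEFAULT);
  }
}
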
 

[28/52] [abbrv] hadoop git commit: MAPREDUCE-6780. Add support for HDFS directory with erasure code policy to TeraGen and TeraSort. Contributed by Sammi Chen

2016-10-12 Thread cnauroth
MAPREDUCE-6780. Add support for HDFS directory with erasure code policy to 
TeraGen and TeraSort. Contributed by Sammi Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bea004ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bea004ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bea004ea

Branch: refs/heads/HADOOP-13037
Commit: bea004eaeb7ba33bf324ef3e7065cfdd614d8198
Parents: ec0b707
Author: Kai Zheng 
Authored: Sun Oct 9 15:33:26 2016 +0600
Committer: Kai Zheng 
Committed: Sun Oct 9 15:33:26 2016 +0600

--
 .../hadoop/examples/terasort/TeraGen.java   |  3 +++
 .../examples/terasort/TeraOutputFormat.java | 20 +---
 .../hadoop/examples/terasort/TeraSort.java  |  3 +++
 3 files changed, 23 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bea004ea/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java
index 22fe344..7fbb22a 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java
@@ -246,6 +246,9 @@ public class TeraGen extends Configured implements Tool {
 
   private static void usage() throws IOException {
 System.err.println("teragen  ");
+System.err.println("If you want to generate data and store them as " +
+"erasure code striping file, just make sure that the parent dir " +
+"of  has erasure code policy set");
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bea004ea/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraOutputFormat.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraOutputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraOutputFormat.java
index fd3ea78..73c446d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraOutputFormat.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraOutputFormat.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.examples.terasort;
 
 import java.io.IOException;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
@@ -40,6 +42,7 @@ import org.apache.hadoop.mapreduce.security.TokenCache;
  * An output format that writes the key and value appended together.
  */
public class TeraOutputFormat extends FileOutputFormat<Text,Text> {
+  private static final Log LOG = LogFactory.getLog(TeraOutputFormat.class);
   private OutputCommitter committer = null;
 
   /**
@@ -74,10 +77,22 @@ public class TeraOutputFormat extends FileOutputFormat<Text,Text> {
   out.write(key.getBytes(), 0, key.getLength());
   out.write(value.getBytes(), 0, value.getLength());
 }
-
+
 public void close(TaskAttemptContext context) throws IOException {
   if (finalSync) {
-out.hsync();
+try {
+  out.hsync();
+} catch (UnsupportedOperationException e) {
+  /*
+   * Currently, hsync operation on striping file with erasure code
+   * policy is not supported yet. So this is a workaround to make
+   * teragen and terasort to support directory with striping files. In
+   * future, if the hsync operation is supported on striping file, this
+   * workaround should be removed.
+   */
+  LOG.info("Operation hsync is not supported so far on path with " +
+  "erasure code policy set");
+}
   }
   out.close();
 }
@@ -135,5 +150,4 @@ public class TeraOutputFormat extends FileOutputFormat<Text,Text> {
 }
 return committer;
   }
-
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bea004ea/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java
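
To follow the usage hint above, the erasure coding policy is set on the parent directory before teragen runs. A hedged sketch of that setup step: the setErasureCodingPolicy(Path, String) call and the policy name "RS-6-3-1024k" are assumptions about the HDFS 3.x client API (not taken from this patch), and the paths are made up.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TeraGenEcSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path parent = new Path("/teragen");          // made-up path
    FileSystem fs = parent.getFileSystem(conf);
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      dfs.mkdirs(parent);
      // Assumed API and policy name: set the EC policy on the parent
      // directory so teragen output created under it is striped.
      dfs.setErasureCodingPolicy(parent, "RS-6-3-1024k");
    }
    // afterwards, e.g.:
    //   hadoop jar hadoop-mapreduce-examples.jar teragen 1000 /teragen/out
  }
}
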

[47/52] [abbrv] hadoop git commit: HDFS-10984. Expose nntop output as metrics. Contributed by Siddharth Wagle.

2016-10-12 Thread cnauroth
HDFS-10984. Expose nntop output as metrics. Contributed by Siddharth Wagle.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/61f0490a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/61f0490a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/61f0490a

Branch: refs/heads/HADOOP-13037
Commit: 61f0490a73085bbaf6639d9234277e59dc1145db
Parents: dacd3ec
Author: Xiaoyu Yao 
Authored: Tue Oct 11 15:55:02 2016 -0700
Committer: Xiaoyu Yao 
Committed: Tue Oct 11 15:55:02 2016 -0700

--
 .../hdfs/server/namenode/FSNamesystem.java  |  6 ++
 .../server/namenode/top/metrics/TopMetrics.java | 67 ++--
 .../server/namenode/metrics/TestTopMetrics.java | 63 ++
 3 files changed, 129 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/61f0490a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 2471dc8..b9b02ef 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -89,6 +89,7 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_KEY;
 import static org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.*;
 import static org.apache.hadoop.util.Time.now;
 import static org.apache.hadoop.util.Time.monotonicNow;
+import static org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics.TOPMETRICS_METRICS_SOURCE_NAME;
 
 import java.io.BufferedWriter;
 import java.io.ByteArrayInputStream;
@@ -989,6 +990,11 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
 // Add audit logger to calculate top users
 if (topConf.isEnabled) {
   topMetrics = new TopMetrics(conf, topConf.nntopReportingPeriodsMs);
+  if (DefaultMetricsSystem.instance().getSource(
+  TOPMETRICS_METRICS_SOURCE_NAME) == null) {
+  DefaultMetricsSystem.instance().register(TOPMETRICS_METRICS_SOURCE_NAME,
+"Top N operations by user", topMetrics);
+  }
   auditLoggers.add(new TopAuditLogger(topMetrics));
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/61f0490a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/metrics/TopMetrics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/metrics/TopMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/metrics/TopMetrics.java
index ab55392..2719c88 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/metrics/TopMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/metrics/TopMetrics.java
@@ -17,24 +17,32 @@
  */
 package org.apache.hadoop.hdfs.server.namenode.top.metrics;
 
-import java.net.InetAddress;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-
 import com.google.common.collect.Lists;
+import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.server.namenode.top.TopConf;
 import org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager;
+import org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager.Op;
+import org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager.User;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.lib.Interns;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.net.InetAddress;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+
import static org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager.TopWindow;
 
 /**
@@ -58,8 +66,11 @@ import static 
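
The FSNamesystem hunk above registers the metrics source defensively, presumably because a namesystem can be constructed more than once per JVM (for example in tests) and double registration would fail. A minimal sketch of that register-once guard, using only the calls visible in the diff (the wrapper class is invented):

import org.apache.hadoop.metrics2.MetricsSource;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

class RegisterOnce {
  static void registerTopMetrics(String name, MetricsSource source) {
    // Guard against double registration, mirroring the hunk above.
    if (DefaultMetricsSystem.instance().getSource(name) == null) {
      DefaultMetricsSystem.instance().register(name,
          "Top N operations by user", source);
    }
  }
}
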

[35/52] [abbrv] hadoop git commit: HDFS-10988. Refactor TestBalancerBandwidth. Contributed by Brahma Reddy Battula

2016-10-12 Thread cnauroth
HDFS-10988. Refactor TestBalancerBandwidth. Contributed by Brahma Reddy Battula


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b9638186
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b9638186
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b9638186

Branch: refs/heads/HADOOP-13037
Commit: b963818621c200160bb37624f177bdcb059de4eb
Parents: 65912e4
Author: Mingliang Liu 
Authored: Mon Oct 10 13:19:17 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Oct 10 13:19:17 2016 -0700

--
 .../hadoop/hdfs/TestBalancerBandwidth.java  | 57 +---
 1 file changed, 25 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9638186/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBalancerBandwidth.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBalancerBandwidth.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBalancerBandwidth.java
index 6e6bbee..6bbe3a1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBalancerBandwidth.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBalancerBandwidth.java
@@ -24,13 +24,15 @@ import java.io.ByteArrayOutputStream;
 import java.io.PrintStream;
 import java.nio.charset.Charset;
 import java.util.ArrayList;
+import java.util.concurrent.TimeoutException;
 
+import com.google.common.base.Supplier;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import org.apache.hadoop.hdfs.tools.DFSAdmin;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Test;
 
 /**
@@ -54,9 +56,8 @@ public class TestBalancerBandwidth {
 DEFAULT_BANDWIDTH);
 
 /* Create and start cluster */
-MiniDFSCluster cluster = 
-  new MiniDFSCluster.Builder(conf).numDataNodes(NUM_OF_DATANODES).build();
-try {
+try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+.numDataNodes(NUM_OF_DATANODES).build()) {
   cluster.waitActive();
 
   DistributedFileSystem fs = cluster.getFileSystem();
@@ -65,12 +66,6 @@ public class TestBalancerBandwidth {
   // Ensure value from the configuration is reflected in the datanodes.
   assertEquals(DEFAULT_BANDWIDTH, (long) datanodes.get(0).getBalancerBandwidth());
   assertEquals(DEFAULT_BANDWIDTH, (long) datanodes.get(1).getBalancerBandwidth());
-  ClientDatanodeProtocol dn1Proxy = DFSUtilClient
-  .createClientDatanodeProtocolProxy(datanodes.get(0).getDatanodeId(),
-  conf, 6, false);
-  ClientDatanodeProtocol dn2Proxy = DFSUtilClient
-  .createClientDatanodeProtocolProxy(datanodes.get(1).getDatanodeId(),
-  conf, 6, false);
   DFSAdmin admin = new DFSAdmin(conf);
   String dn1Address = datanodes.get(0).ipcServer.getListenerAddress()
   .getHostName() + ":" + datanodes.get(0).getIpcPort();
@@ -79,51 +74,49 @@ public class TestBalancerBandwidth {
 
   // verifies the dfsadmin command execution
   String[] args = new String[] { "-getBalancerBandwidth", dn1Address };
-  runGetBalancerBandwidthCmd(admin, args, dn1Proxy, DEFAULT_BANDWIDTH);
+  runGetBalancerBandwidthCmd(admin, args, DEFAULT_BANDWIDTH);
   args = new String[] { "-getBalancerBandwidth", dn2Address };
-  runGetBalancerBandwidthCmd(admin, args, dn2Proxy, DEFAULT_BANDWIDTH);
+  runGetBalancerBandwidthCmd(admin, args, DEFAULT_BANDWIDTH);
 
   // Dynamically change balancer bandwidth and ensure the updated value
   // is reflected on the datanodes.
   long newBandwidth = 12 * DEFAULT_BANDWIDTH; // 12M bps
   fs.setBalancerBandwidth(newBandwidth);
+  verifyBalancerBandwidth(datanodes, newBandwidth);
 
-  // Give it a few seconds to propogate new the value to the datanodes.
-  try {
-Thread.sleep(5000);
-  } catch (Exception e) {}
-
-  assertEquals(newBandwidth, (long) datanodes.get(0).getBalancerBandwidth());
-  assertEquals(newBandwidth, (long) datanodes.get(1).getBalancerBandwidth());
   // verifies the dfsadmin command execution
   args = new String[] { "-getBalancerBandwidth", dn1Address };
-  runGetBalancerBandwidthCmd(admin, args, dn1Proxy, newBandwidth);
+  runGetBalancerBandwidthCmd(admin, args, newBandwidth);
   args = new String[] { "-getBalancerBandwidth", dn2Address };
-  runGetBalancerBandwidthCmd(admin, args, dn2Proxy, 
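
The structural change in this test is the try (MiniDFSCluster cluster = ...) form, which leans on MiniDFSCluster being usable as a resource; the diff's try-with-resources implies it implements AutoCloseable, which this sketch assumes. Before/after lifecycle, with assertions elided:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

class ClusterLifecycleSketch {
  void runOldStyle(Configuration conf) throws Exception {
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
    try {
      cluster.waitActive();
      // ... assertions ...
    } finally {
      cluster.shutdown();   // easy to forget, and leaked on early returns
    }
  }

  void runNewStyle(Configuration conf) throws Exception {
    // try-with-resources, as the refactored test does: the cluster is
    // closed even when an assertion throws mid-test.
    try (MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(2).build()) {
      cluster.waitActive();
      // ... assertions ...
    }
  }
}
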

[42/52] [abbrv] hadoop git commit: HDFS-10916. Switch from "raw" to "system" xattr namespace for erasure coding policy. (Andrew Wang via lei)

2016-10-12 Thread cnauroth
HDFS-10916. Switch from "raw" to "system" xattr namespace for erasure coding 
policy. (Andrew Wang via lei)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/809cfd27
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/809cfd27
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/809cfd27

Branch: refs/heads/HADOOP-13037
Commit: 809cfd27a30900d2c0e0e133574de49d0b4538cf
Parents: ecb51b8
Author: Lei Xu 
Authored: Tue Oct 11 10:04:46 2016 -0700
Committer: Lei Xu 
Committed: Tue Oct 11 10:04:46 2016 -0700

--
 .../org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/809cfd27/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
index 3798394..d112a48 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
@@ -369,7 +369,7 @@ public interface HdfsServerConstants {
   String SECURITY_XATTR_UNREADABLE_BY_SUPERUSER =
   "security.hdfs.unreadable.by.superuser";
   String XATTR_ERASURECODING_POLICY =
-  "raw.hdfs.erasurecoding.policy";
+  "system.hdfs.erasurecoding.policy";
 
   long BLOCK_GROUP_INDEX_MASK = 15;
   byte MAX_BLOCKS_IN_GROUP = 16;





[31/52] [abbrv] hadoop git commit: HADOOP-13696. change hadoop-common dependency scope of jsch to provided. Contributed by Yuanbo Liu.

2016-10-12 Thread cnauroth
HADOOP-13696. change hadoop-common dependency scope of jsch to provided. 
Contributed by Yuanbo Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cef61d50
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cef61d50
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cef61d50

Branch: refs/heads/HADOOP-13037
Commit: cef61d505e289f074130cc3981c20f7692437cee
Parents: af50da3
Author: Steve Loughran 
Authored: Mon Oct 10 12:32:39 2016 +0100
Committer: Steve Loughran 
Committed: Mon Oct 10 12:32:39 2016 +0100

--
 hadoop-common-project/hadoop-common/pom.xml | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cef61d50/hadoop-common-project/hadoop-common/pom.xml
--
diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index 54d1cdd..92582ae 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -235,6 +235,7 @@
     <dependency>
       <groupId>com.jcraft</groupId>
       <artifactId>jsch</artifactId>
+      <scope>provided</scope>
     </dependency>
     <dependency>
       <groupId>org.apache.curator</groupId>





[23/52] [abbrv] hadoop git commit: HDFS-10797. Disk usage summary of snapshots causes renamed blocks to get counted twice. Contributed by Sean Mackrory.

2016-10-12 Thread cnauroth
HDFS-10797. Disk usage summary of snapshots causes renamed blocks to get 
counted twice. Contributed by Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6a38d118
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6a38d118
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6a38d118

Branch: refs/heads/HADOOP-13037
Commit: 6a38d118d86b7907009bcec34f1b788d076f1d1c
Parents: e57fa81
Author: Xiao Chen 
Authored: Fri Oct 7 17:30:30 2016 -0700
Committer: Xiao Chen 
Committed: Fri Oct 7 17:37:15 2016 -0700

--
 .../ContentSummaryComputationContext.java   |  94 -
 .../hadoop/hdfs/server/namenode/INode.java  |   1 +
 .../hdfs/server/namenode/INodeDirectory.java|  11 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  |   1 +
 .../hdfs/server/namenode/INodeReference.java|   2 +
 .../hdfs/server/namenode/INodeSymlink.java  |   1 +
 .../snapshot/DirectorySnapshottableFeature.java |   9 +-
 .../snapshot/DirectoryWithSnapshotFeature.java  |  14 +-
 .../hdfs/server/namenode/snapshot/Snapshot.java |   1 +
 .../snapshot/TestRenameWithSnapshots.java   | 199 +++
 10 files changed, 307 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a38d118/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
index 6df9e75..4208b53 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
@@ -21,6 +21,10 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
+import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
+
+import java.util.HashSet;
+import java.util.Set;
 
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
@@ -35,6 +39,8 @@ public class ContentSummaryComputationContext {
   private long yieldCount = 0;
   private long sleepMilliSec = 0;
   private int sleepNanoSec = 0;
+  private Set includedNodes = new HashSet<>();
+  private Set deletedSnapshottedNodes = new HashSet<>();
 
   /**
* Constructor
@@ -51,8 +57,8 @@ public class ContentSummaryComputationContext {
 this.fsn = fsn;
 this.limitPerRun = limitPerRun;
 this.nextCountLimit = limitPerRun;
-this.counts = new ContentCounts.Builder().build();
-this.snapshotCounts = new ContentCounts.Builder().build();
+setCounts(new ContentCounts.Builder().build());
+setSnapshotCounts(new ContentCounts.Builder().build());
 this.sleepMilliSec = sleepMicroSec/1000;
 this.sleepNanoSec = (int)((sleepMicroSec%1000)*1000);
   }
@@ -82,6 +88,7 @@ public class ContentSummaryComputationContext {
 }
 
 // Have we reached the limit?
+ContentCounts counts = getCounts();
 long currentCount = counts.getFileCount() +
 counts.getSymlinkCount() +
 counts.getDirectoryCount() +
@@ -123,14 +130,22 @@ public class ContentSummaryComputationContext {
   }
 
   /** Get the content counts */
-  public ContentCounts getCounts() {
+  public synchronized ContentCounts getCounts() {
 return counts;
   }
 
+  private synchronized void setCounts(ContentCounts counts) {
+this.counts = counts;
+  }
+
   public ContentCounts getSnapshotCounts() {
 return snapshotCounts;
   }
 
+  private void setSnapshotCounts(ContentCounts snapshotCounts) {
+this.snapshotCounts = snapshotCounts;
+  }
+
   public BlockStoragePolicySuite getBlockStoragePolicySuite() {
 Preconditions.checkState((bsps != null || fsn != null),
 "BlockStoragePolicySuite must be either initialized or available via" +
@@ -138,4 +153,77 @@ public class ContentSummaryComputationContext {
 return (bsps != null) ? bsps:
 fsn.getBlockManager().getStoragePolicySuite();
   }
+
+  /**
+   * If the node is an INodeReference, resolves it to the actual inode.
+   * Snapshot diffs represent renamed / moved files as different
+   * INodeReferences, but the underlying INode it refers to is consistent.
+   *
+   * @param node
+   * @return The referred INode if there is one, else returns the input
+   * unmodified.

[06/52] [abbrv] hadoop git commit: HADOOP-13323. Downgrade stack trace on FS load from Warn to debug. Contributed by Steve Loughran.

2016-10-12 Thread cnauroth
HADOOP-13323. Downgrade stack trace on FS load from Warn to debug. Contributed 
by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d46c3f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d46c3f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d46c3f6

Branch: refs/heads/HADOOP-13037
Commit: 2d46c3f6b7d55b6a2f124d07fe26d37359615df4
Parents: 2cc841f
Author: Chris Nauroth 
Authored: Thu Oct 6 10:57:01 2016 -0700
Committer: Chris Nauroth 
Committed: Thu Oct 6 10:57:01 2016 -0700

--
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java  | 10 +-
 .../apache/hadoop/fs/TestFileSystemInitialization.java  | 12 
 2 files changed, 13 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d46c3f6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index c36598f..cc062c4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -2858,7 +2858,15 @@ public abstract class FileSystem extends Configured implements Closeable {
   ClassUtil.findContainingJar(fs.getClass()), e);
 }
   } catch (ServiceConfigurationError ee) {
-LOG.warn("Cannot load filesystem", ee);
+LOG.warn("Cannot load filesystem: " + ee);
+Throwable cause = ee.getCause();
+// print all the nested exception messages
+while (cause != null) {
+  LOG.warn(cause.toString());
+  cause = cause.getCause();
+}
+// and at debug: the full stack
+LOG.debug("Stack Trace", ee);
   }
 }
 FILE_SYSTEMS_LOADED = true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d46c3f6/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
index 18e8b01..4d627a5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
@@ -47,16 +47,12 @@ public class TestFileSystemInitialization {
 
   @Test
   public void testMissingLibraries() {
-boolean catched = false;
 try {
   Configuration conf = new Configuration();
-  FileSystem.getFileSystemClass("s3a", conf);
-} catch (Exception e) {
-  catched = true;
-} catch (ServiceConfigurationError e) {
-  // S3A shouldn't find AWS SDK and fail
-  catched = true;
+  Class fs = FileSystem.getFileSystemClass("s3a",
+  conf);
+  fail("Expected an exception, got a filesystem: " + fs);
+} catch (Exception | ServiceConfigurationError expected) {
 }
-assertTrue(catched);
   }
 }
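
The rewritten test follows the standard JUnit idiom sketched below: make the
call, fail() if it unexpectedly succeeds, and treat the expected Exception
(or ServiceConfigurationError, which is an Error, not an Exception) as
success. The class and the failing method here are illustrative only:

    import static org.junit.Assert.fail;

    import java.util.ServiceConfigurationError;
    import org.junit.Test;

    public class ExpectedFailureIdiom {

      @Test
      public void callMustFail() {
        try {
          Object result = mightFail();
          // fail() throws AssertionError, which neither catch arm
          // swallows, so a successful call still fails the test.
          fail("Expected an exception, got: " + result);
        } catch (Exception | ServiceConfigurationError expected) {
          // Expected path: nothing further to assert.
        }
      }

      private Object mightFail() throws Exception {
        throw new Exception("simulated failure");
      }
    }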





[15/52] [abbrv] hadoop git commit: HDFS-10969. Fix typos in hdfs-default.xml. Contributed by Yiqun Lin.

2016-10-12 Thread cnauroth
HDFS-10969. Fix typos in hdfs-default.xml. Contributed by Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/be3cb10f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/be3cb10f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/be3cb10f

Branch: refs/heads/HADOOP-13037
Commit: be3cb10f5301c2d526d0ba37dbe82f426683a801
Parents: c183b9d
Author: Brahma Reddy Battula 
Authored: Fri Oct 7 22:18:40 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Oct 7 22:18:40 2016 +0530

--
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml| 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/be3cb10f/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index ebaefde..672b597 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -725,7 +725,7 @@
 
   Setting this limit to 1000 disables compiler thread throttling. Only
   values between 1 and 1000 are valid. Setting an invalid value will result
-  in the throttle being disbled and an error message being logged. 1000 is
+  in the throttle being disabled and an error message being logged. 1000 is
   the default setting.
   
 
@@ -2559,7 +2559,7 @@
   dfs.block.local-path-access.user
   
   
-Comma separated list of the users allowd to open block files
+Comma separated list of the users allowed to open block files
 on legacy short-circuit local read.
   
 
@@ -3650,7 +3650,7 @@
   dfs.datanode.transferTo.allowed
   true
   
-If false, break block tranfers on 32-bit machines greater than
+If false, break block transfers on 32-bit machines greater than
 or equal to 2GB into smaller chunks.
   
 





[03/52] [abbrv] hadoop git commit: HDFS-10957. Retire BKJM from trunk (Vinayakumar B)

2016-10-12 Thread cnauroth
HDFS-10957. Retire BKJM from trunk (Vinayakumar B)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/31195488
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/31195488
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/31195488

Branch: refs/heads/HADOOP-13037
Commit: 311954883f714973784432589896553eb320b597
Parents: 35b9d7d
Author: Vinayakumar B 
Authored: Thu Oct 6 19:28:25 2016 +0530
Committer: Vinayakumar B 
Committed: Thu Oct 6 19:28:25 2016 +0530

--
 .../src/contrib/bkjournal/README.txt|  66 --
 .../dev-support/findbugsExcludeFile.xml |   5 -
 .../hadoop-hdfs/src/contrib/bkjournal/pom.xml   | 175 
 .../bkjournal/BookKeeperEditLogInputStream.java | 264 -
 .../BookKeeperEditLogOutputStream.java  | 188 
 .../bkjournal/BookKeeperJournalManager.java | 893 -
 .../contrib/bkjournal/CurrentInprogress.java| 160 ---
 .../bkjournal/EditLogLedgerMetadata.java| 217 
 .../hadoop/contrib/bkjournal/MaxTxId.java   | 103 --
 .../bkjournal/src/main/proto/bkjournal.proto|  49 -
 .../hadoop/contrib/bkjournal/BKJMUtil.java  | 184 
 .../bkjournal/TestBookKeeperAsHASharedDir.java  | 414 
 .../bkjournal/TestBookKeeperConfiguration.java  | 174 
 .../bkjournal/TestBookKeeperEditLogStreams.java |  92 --
 .../bkjournal/TestBookKeeperHACheckpoints.java  | 109 --
 .../bkjournal/TestBookKeeperJournalManager.java | 984 ---
 .../TestBookKeeperSpeculativeRead.java  | 167 
 .../bkjournal/TestBootstrapStandbyWithBKJM.java | 170 
 .../bkjournal/TestCurrentInprogress.java| 160 ---
 .../hdfs/server/namenode/FSEditLogTestUtil.java |  40 -
 .../src/test/resources/log4j.properties |  55 --
 .../markdown/HDFSHighAvailabilityWithNFS.md | 114 ---
 hadoop-hdfs-project/pom.xml |   1 -
 hadoop-project/pom.xml  |   6 -
 24 files changed, 4790 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/31195488/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt
deleted file mode 100644
index 7f67226..000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/README.txt
+++ /dev/null
@@ -1,66 +0,0 @@
-This module provides a BookKeeper backend for HFDS Namenode write
-ahead logging.  
-
-BookKeeper is a highly available distributed write ahead logging
-system. For more details, see
-   
-http://zookeeper.apache.org/bookkeeper
-

-How do I build?
-
- To generate the distribution packages for BK journal, do the
- following.
-
-   $ mvn clean package -Pdist
-
- This will generate a jar with all the dependencies needed by the journal
- manager, 
-
- target/hadoop-hdfs-bkjournal-.jar
-
- Note that the -Pdist part of the build command is important, as otherwise
- the dependencies would not be packaged in the jar. 
-

-How do I use the BookKeeper Journal?
-
- To run a HDFS namenode using BookKeeper as a backend, copy the bkjournal
- jar, generated above, into the lib directory of hdfs. In the standard 
- distribution of HDFS, this is at $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/
-
-  cp target/hadoop-hdfs-bkjournal-.jar \
-$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/
-
- Then, in hdfs-site.xml, set the following properties.
-
-   
- dfs.namenode.edits.dir
- 
bookkeeper://localhost:2181/bkjournal,file:///path/for/edits
-   
-
-   
- dfs.namenode.edits.journal-plugin.bookkeeper
- 
org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager
-   
-
- In this example, the namenode is configured to use 2 write ahead
- logging devices. One writes to BookKeeper and the other to a local
- file system. At the moment is is not possible to only write to 
- BookKeeper, as the resource checker explicitly checked for local
- disks currently.
-
- The given example, configures the namenode to look for the journal
- metadata at the path /bkjournal on the a standalone zookeeper ensemble
- at localhost:2181. To configure a multiple host zookeeper ensemble,
- separate the hosts with semicolons. For example, if you have 3
- zookeeper servers, zk1, zk2 & zk3, each listening on port 2181, you
- would specify this with 
-  
-   bookkeeper://zk1:2181;zk2:2181;zk3:2181/bkjournal
-
- The final part /bkjournal specifies the znode in zookeeper where
- ledger metadata will be store. Administrators can 

[11/52] [abbrv] hadoop git commit: HADOOP-13689. Do not attach javadoc and sources jars during non-dist build.

2016-10-12 Thread cnauroth
HADOOP-13689. Do not attach javadoc and sources jars during non-dist build.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf372173
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf372173
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf372173

Branch: refs/heads/HADOOP-13037
Commit: bf372173d0f7cb97b62556cbd199a075254b96e6
Parents: 48b9d5f
Author: Andrew Wang 
Authored: Thu Oct 6 15:08:24 2016 -0700
Committer: Andrew Wang 
Committed: Thu Oct 6 15:08:24 2016 -0700

--
 hadoop-project-dist/pom.xml | 16 
 1 file changed, 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf372173/hadoop-project-dist/pom.xml
--
diff --git a/hadoop-project-dist/pom.xml b/hadoop-project-dist/pom.xml
index e64f173..4423d94 100644
--- a/hadoop-project-dist/pom.xml
+++ b/hadoop-project-dist/pom.xml
@@ -88,22 +88,6 @@
 
   
   
-org.apache.maven.plugins
-maven-source-plugin
-
-  
-prepare-package
-
-  jar
-  test-jar
-
-  
-
-
-  true
-
-  
-  
 org.codehaus.mojo
 findbugs-maven-plugin
 





[10/52] [abbrv] hadoop git commit: HDFS-10955. Pass IIP for FSDirAttr methods. Contributed by Daryn Sharp.

2016-10-12 Thread cnauroth
HDFS-10955. Pass IIP for FSDirAttr methods. Contributed by Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/48b9d5fd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/48b9d5fd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/48b9d5fd

Branch: refs/heads/HADOOP-13037
Commit: 48b9d5fd2a96728b1118be217ca597c4098e99ca
Parents: 1d330fb
Author: Kihwal Lee 
Authored: Thu Oct 6 16:33:46 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 6 16:33:46 2016 -0500

--
 .../hdfs/server/namenode/FSDirAttrOp.java   | 110 ---
 .../hdfs/server/namenode/FSEditLogLoader.java   |  62 +++
 .../hdfs/server/namenode/FSNamesystem.java  |   3 +-
 3 files changed, 83 insertions(+), 92 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/48b9d5fd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index 4c5ecb1d..91d9bce 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -50,9 +50,8 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_STORAGE_POLICY_ENABLED_KE
 
 public class FSDirAttrOp {
   static HdfsFileStatus setPermission(
-  FSDirectory fsd, final String srcArg, FsPermission permission)
+  FSDirectory fsd, final String src, FsPermission permission)
   throws IOException {
-String src = srcArg;
 if (FSDirectory.isExactReservedName(src)) {
   throw new InvalidPathException(src);
 }
@@ -61,13 +60,12 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   fsd.checkOwner(pc, iip);
-  unprotectedSetPermission(fsd, src, permission);
+  unprotectedSetPermission(fsd, iip, permission);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetPermissions(src, permission);
+fsd.getEditLog().logSetPermissions(iip.getPath(), permission);
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -82,7 +80,6 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
@@ -92,11 +89,11 @@ public class FSDirAttrOp {
   throw new AccessControlException("User does not belong to " + group);
 }
   }
-  unprotectedSetOwner(fsd, src, username, group);
+  unprotectedSetOwner(fsd, iip, username, group);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetOwner(src, username, group);
+fsd.getEditLog().logSetOwner(iip.getPath(), username, group);
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -109,20 +106,18 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   // Write access is required to set access and modification times
   if (fsd.isPermissionEnabled()) {
 fsd.checkPathAccess(pc, iip, FsAction.WRITE);
   }
   final INode inode = iip.getLastINode();
   if (inode == null) {
-throw new FileNotFoundException("File/Directory " + src +
+throw new FileNotFoundException("File/Directory " + iip.getPath() +
 " does not exist.");
   }
-  boolean changed = unprotectedSetTimes(fsd, inode, mtime, atime, true,
-  iip.getLatestSnapshotId());
+  boolean changed = unprotectedSetTimes(fsd, iip, mtime, atime, true);
   if (changed) {
-fsd.getEditLog().logTimes(src, mtime, atime);
+fsd.getEditLog().logTimes(iip.getPath(), mtime, atime);
   }
 } finally {
   fsd.writeUnlock();
@@ -139,16 +134,15 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   final INodesInPath iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   if (fsd.isPermissionEnabled()) {
 fsd.checkPathAccess(pc, iip, FsAction.WRITE);
   }
 
-  final BlockInfo[] blocks = unprotectedSetReplication(fsd, src,
+  final BlockInfo[] blocks = unprotectedSetReplication(fsd, iip,
replication);
   isFile = blocks != null;
   if 
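
Reduced to a generic sketch, the shape of this refactor is: resolve the raw
path to a handle exactly once under the write lock, pass the handle to every
helper, and read the canonical path back from the handle when journaling,
instead of re-resolving the string at each step. Every type below is
hypothetical; none of these interfaces are HDFS's:

    final class ResolveOnce {
      interface Handle {
        String path();
      }

      interface Namespace {
        Handle resolveForWrite(String rawPath);
        void lock();
        void unlock();
        void applyPermission(Handle h, int perm);
        void journalSetPermission(String path, int perm);
      }

      static void setPermission(Namespace ns, String rawPath, int perm) {
        ns.lock();
        final Handle h;
        try {
          h = ns.resolveForWrite(rawPath); // single resolution
          ns.applyPermission(h, perm);     // helpers take the handle
        } finally {
          ns.unlock();
        }
        // Journal outside the lock, using the path the handle resolved to.
        ns.journalSetPermission(h.path(), perm);
      }
    }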

[01/52] [abbrv] hadoop git commit: HDFS-10957. Retire BKJM from trunk (Vinayakumar B)

2016-10-12 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-13037 846ada2de -> 6476934ae


http://git-wip-us.apache.org/repos/asf/hadoop/blob/31195488/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/TestCurrentInprogress.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/TestCurrentInprogress.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/TestCurrentInprogress.java
deleted file mode 100644
index 169a8a8..000
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/TestCurrentInprogress.java
+++ /dev/null
@@ -1,160 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.contrib.bkjournal;
-
-import static org.junit.Assert.assertEquals;
-
-import java.io.File;
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.util.concurrent.CountDownLatch;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.bookkeeper.util.LocalBookKeeper;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.zookeeper.KeeperException;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.apache.zookeeper.ZooKeeper;
-import org.apache.zookeeper.server.NIOServerCnxnFactory;
-import org.apache.zookeeper.server.ZooKeeperServer;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-/**
- * Tests that read, update, clear api from CurrentInprogress
- */
-public class TestCurrentInprogress {
-  private static final Log LOG = 
LogFactory.getLog(TestCurrentInprogress.class);
-  private static final String CURRENT_NODE_PATH = "/test";
-  private static final String HOSTPORT = "127.0.0.1:2181";
-  private static final int CONNECTION_TIMEOUT = 3;
-  private static NIOServerCnxnFactory serverFactory;
-  private static ZooKeeperServer zks;
-  private static ZooKeeper zkc;
-  private static int ZooKeeperDefaultPort = 2181;
-  private static File zkTmpDir;
-
-  private static ZooKeeper connectZooKeeper(String ensemble)
-  throws IOException, KeeperException, InterruptedException {
-final CountDownLatch latch = new CountDownLatch(1);
-
-ZooKeeper zkc = new ZooKeeper(HOSTPORT, 3600, new Watcher() {
-  public void process(WatchedEvent event) {
-if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
-  latch.countDown();
-}
-  }
-});
-if (!latch.await(10, TimeUnit.SECONDS)) {
-  throw new IOException("Zookeeper took too long to connect");
-}
-return zkc;
-  }
-
-  @BeforeClass
-  public static void setupZooKeeper() throws Exception {
-LOG.info("Starting ZK server");
-zkTmpDir = File.createTempFile("zookeeper", "test");
-zkTmpDir.delete();
-zkTmpDir.mkdir();
-try {
-  zks = new ZooKeeperServer(zkTmpDir, zkTmpDir, ZooKeeperDefaultPort);
-  serverFactory = new NIOServerCnxnFactory();
-  serverFactory.configure(new InetSocketAddress(ZooKeeperDefaultPort), 10);
-  serverFactory.startup(zks);
-} catch (Exception e) {
-  LOG.error("Exception while instantiating ZooKeeper", e);
-}
-boolean b = LocalBookKeeper.waitForServerUp(HOSTPORT, CONNECTION_TIMEOUT);
-LOG.debug("ZooKeeper server up: " + b);
-  }
-
-  @AfterClass
-  public static void shutDownServer() {
-if (null != zks) {
-  zks.shutdown();
-}
-zkTmpDir.delete();
-  }
-
-  @Before
-  public void setup() throws Exception {
-zkc = connectZooKeeper(HOSTPORT);
-  }
-
-  @After
-  public void teardown() throws Exception {
-if (null != zkc) {
-  zkc.close();
-}
-
-  }
-
-  /**
-   * Tests that read should be able to read the data which updated with update
-   * api
-   */
-  @Test
-  public void testReadShouldReturnTheZnodePathAfterUpdate() throws Exception {
-String data = "inprogressNode";
-CurrentInprogress ci = new 

[24/52] [abbrv] hadoop git commit: HDFS-10968. BlockManager#isInNewRack should consider decommissioning nodes. Contributed by Jing Zhao.

2016-10-12 Thread cnauroth
HDFS-10968. BlockManager#isInNewRack should consider decommissioning nodes. 
Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4d106213
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4d106213
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4d106213

Branch: refs/heads/HADOOP-13037
Commit: 4d106213c0f4835b723c9a50bd8080a9017122d7
Parents: 6a38d11
Author: Jing Zhao 
Authored: Fri Oct 7 22:44:54 2016 -0700
Committer: Jing Zhao 
Committed: Fri Oct 7 22:44:54 2016 -0700

--
 .../server/blockmanagement/BlockManager.java|   6 +-
 ...constructStripedBlocksWithRackAwareness.java | 158 +++
 2 files changed, 130 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d106213/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 8b74609..7949439 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1781,8 +1781,12 @@ public class BlockManager implements BlockStatsMXBean {
 
   private boolean isInNewRack(DatanodeDescriptor[] srcs,
   DatanodeDescriptor target) {
+LOG.debug("check if target {} increases racks, srcs={}", target,
+Arrays.asList(srcs));
 for (DatanodeDescriptor src : srcs) {
-  if (src.getNetworkLocation().equals(target.getNetworkLocation())) {
+  if (!src.isDecommissionInProgress() &&
+  src.getNetworkLocation().equals(target.getNetworkLocation())) {
+LOG.debug("the target {} is in the same rack with src {}", target, 
src);
 return false;
   }
 }
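
The corrected predicate, isolated as a standalone sketch (the Node type is a
stand-in for DatanodeDescriptor): a reconstruction target fails the "new
rack" test only if it shares a rack with a source that will remain in
service, since the replicas on decommissioning sources are on their way out:

    import java.util.List;

    public final class NewRackCheck {

      public static final class Node {
        final String rack;
        final boolean decommissioning;

        Node(String rack, boolean decommissioning) {
          this.rack = rack;
          this.decommissioning = decommissioning;
        }
      }

      static boolean addsNewRack(List<Node> sources, Node target) {
        for (Node src : sources) {
          if (!src.decommissioning && src.rack.equals(target.rack)) {
            return false; // a staying replica already covers this rack
          }
        }
        return true;
      }
    }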

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d106213/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReconstructStripedBlocksWithRackAwareness.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReconstructStripedBlocksWithRackAwareness.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReconstructStripedBlocksWithRackAwareness.java
index 152e153..3bc13a8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReconstructStripedBlocksWithRackAwareness.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReconstructStripedBlocksWithRackAwareness.java
@@ -35,12 +35,14 @@ import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.log4j.Level;
 import org.junit.After;
 import org.junit.Assert;
-import org.junit.Before;
+import org.junit.BeforeClass;
 import org.junit.Test;
+import org.mockito.internal.util.reflection.Whitebox;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.util.Arrays;
 import java.util.HashSet;
 import java.util.Set;
 
@@ -58,57 +60,44 @@ public class TestReconstructStripedBlocksWithRackAwareness {
 GenericTestUtils.setLogLevel(BlockManager.LOG, Level.ALL);
   }
 
-  private static final String[] hosts = getHosts();
-  private static final String[] racks = getRacks();
+  private static final String[] hosts =
+  getHosts(NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS + 1);
+  private static final String[] racks =
+  getRacks(NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS + 1, NUM_DATA_BLOCKS);
 
-  private static String[] getHosts() {
-String[] hosts = new String[NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS + 1];
+  private static String[] getHosts(int numHosts) {
+String[] hosts = new String[numHosts];
 for (int i = 0; i < hosts.length; i++) {
   hosts[i] = "host" + (i + 1);
 }
 return hosts;
   }
 
-  private static String[] getRacks() {
-String[] racks = new String[NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS + 1];
-int numHostEachRack = (NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS - 1) /
-(NUM_DATA_BLOCKS - 1) + 1;
+  private static String[] getRacks(int numHosts, int numRacks) {
+String[] racks = new String[numHosts];
+int numHostEachRack = numHosts / numRacks;
+int residue = numHosts % numRacks;
 int j = 0;
-// we have NUM_DATA_BLOCKS racks
-for (int i = 1; i 

[04/52] [abbrv] hadoop git commit: YARN-5101. YARN_APPLICATION_UPDATED event is parsed in ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport in reverse order. Contributed by Sunil G.

2016-10-12 Thread cnauroth
YARN-5101. YARN_APPLICATION_UPDATED event is parsed in
ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport in
reverse order. Contributed by Sunil G.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4d2f380d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4d2f380d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4d2f380d

Branch: refs/heads/HADOOP-13037
Commit: 4d2f380d787a6145f45c87ba663079fedbf645b8
Parents: 3119548
Author: Rohith Sharma K S 
Authored: Thu Oct 6 18:16:48 2016 +0530
Committer: Rohith Sharma K S 
Committed: Thu Oct 6 20:42:36 2016 +0530

--
 .../ApplicationHistoryManagerOnTimelineStore.java | 14 +++---
 .../TestApplicationHistoryManagerOnTimelineStore.java | 14 +-
 .../yarn/server/resourcemanager/rmapp/RMAppImpl.java  |  2 +-
 3 files changed, 21 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d2f380d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
index 84d4543..feeafdd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
@@ -351,6 +351,7 @@ public class ApplicationHistoryManagerOnTimelineStore 
extends AbstractService
   }
 }
 List events = entity.getEvents();
+long updatedTimeStamp = 0L;
 if (events != null) {
   for (TimelineEvent event : events) {
 if (event.getEventType().equals(
@@ -358,9 +359,16 @@ public class ApplicationHistoryManagerOnTimelineStore 
extends AbstractService
   createdTime = event.getTimestamp();
 } else if (event.getEventType().equals(
 ApplicationMetricsConstants.UPDATED_EVENT_TYPE)) {
-  // TODO: YARN-5101. This type of events are parsed in
-  // time-stamp descending order which means the previous event
-  // could override the information from the later same type of event.
+  // This type of events are parsed in time-stamp descending order
+  // which means the previous event could override the information
+  // from the later same type of event. Hence compare timestamp
+  // before over writing.
+  if (event.getTimestamp() > updatedTimeStamp) {
+updatedTimeStamp = event.getTimestamp();
+  } else {
+continue;
+  }
+
   Map eventInfo = event.getEventInfo();
   if (eventInfo == null) {
 continue;
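
The guard above, reduced to a self-contained sketch (the Event class is
hypothetical, not the Timeline API): because events are iterated
newest-first, only the UPDATED event with the largest timestamp seen so far
may populate the report; older UPDATED events are skipped rather than
allowed to overwrite newer data:

    import java.util.List;

    public final class LatestUpdateWins {

      public static final class Event {
        final String type;
        final long timestamp;
        final String info;

        Event(String type, long timestamp, String info) {
          this.type = type;
          this.timestamp = timestamp;
          this.info = info;
        }
      }

      static String latestUpdateInfo(List<Event> newestFirst) {
        long latestSeen = 0L;
        String result = null;
        for (Event e : newestFirst) {
          if (!"UPDATED".equals(e.type)) {
            continue;
          }
          if (e.timestamp <= latestSeen) {
            continue; // an older update must not overwrite a newer one
          }
          latestSeen = e.timestamp;
          result = e.info;
        }
        return result;
      }
    }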

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d2f380d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
index b65b22b..dd1a453 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
@@ 

[08/50] [abbrv] hadoop git commit: HADOOP-13684. Snappy may complain Hadoop is built without snappy if libhadoop is not found. Contributed by Wei-Chiu Chuang.

2016-10-12 Thread sunilg
HADOOP-13684. Snappy may complain Hadoop is built without snappy if libhadoop 
is not found. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b32b142
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b32b142
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b32b142

Branch: refs/heads/YARN-3368
Commit: 4b32b1420d98ea23460d05ae94f2698109b3d6f7
Parents: 2fb392a
Author: Wei-Chiu Chuang 
Authored: Tue Oct 11 13:21:33 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Tue Oct 11 13:21:33 2016 -0700

--
 .../apache/hadoop/io/compress/SnappyCodec.java  | 30 +++-
 1 file changed, 16 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b32b142/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
index 2a9c5d0..20a4cd6 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
@@ -60,20 +60,22 @@ public class SnappyCodec implements Configurable, 
CompressionCodec, DirectDecomp
* Are the native snappy libraries loaded & initialized?
*/
   public static void checkNativeCodeLoaded() {
-  if (!NativeCodeLoader.isNativeCodeLoaded() ||
-  !NativeCodeLoader.buildSupportsSnappy()) {
-throw new RuntimeException("native snappy library not available: " +
-"this version of libhadoop was built without " +
-"snappy support.");
-  }
-  if (!SnappyCompressor.isNativeCodeLoaded()) {
-throw new RuntimeException("native snappy library not available: " +
-"SnappyCompressor has not been loaded.");
-  }
-  if (!SnappyDecompressor.isNativeCodeLoaded()) {
-throw new RuntimeException("native snappy library not available: " +
-"SnappyDecompressor has not been loaded.");
-  }
+if (!NativeCodeLoader.buildSupportsSnappy()) {
+  throw new RuntimeException("native snappy library not available: " +
+  "this version of libhadoop was built without " +
+  "snappy support.");
+}
+if (!NativeCodeLoader.isNativeCodeLoaded()) {
+  throw new RuntimeException("Failed to load libhadoop.");
+}
+if (!SnappyCompressor.isNativeCodeLoaded()) {
+  throw new RuntimeException("native snappy library not available: " +
+  "SnappyCompressor has not been loaded.");
+}
+if (!SnappyDecompressor.isNativeCodeLoaded()) {
+  throw new RuntimeException("native snappy library not available: " +
+  "SnappyDecompressor has not been loaded.");
+}
   }
   
   public static boolean isNativeCodeLoaded() {
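
The point of the reordering, as a minimal sketch with the two probes reduced
to booleans (the real checks live on NativeCodeLoader): test "built without
snappy" and "libhadoop missing" separately and in that order, as the diff
does, so each failure mode produces the message that actually describes it:

    public final class NativeCheckOrder {

      static void checkLoaded(boolean buildSupportsSnappy,
                              boolean nativeCodeLoaded) {
        if (!buildSupportsSnappy) {
          throw new RuntimeException("native snappy library not available: "
              + "this version of libhadoop was built without snappy support.");
        }
        if (!nativeCodeLoaded) {
          // Before this change, a missing libhadoop was misreported
          // as a snappy-less build.
          throw new RuntimeException("Failed to load libhadoop.");
        }
        // Further checks (compressor/decompressor loaded) would follow here.
      }
    }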





[37/50] [abbrv] hadoop git commit: YARN-5321. [YARN-3368] Add resource usage for application by node managers (Wangda Tan via Sunil G) YARN-5320. [YARN-3368] Add resource usage by applications and que

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a570f734/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
index ff49403..b945451 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
@@ -20,7 +20,9 @@ import Ember from 'ember';
 
 export default Ember.Route.extend({
   model() {
-var apps = this.store.findAll('yarn-app');
-return apps;
+return Ember.RSVP.hash({
+  apps: this.store.findAll('yarn-app'),
+  clusterMetrics: this.store.findAll('ClusterMetric'),
+});
   }
 });

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a570f734/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/apps.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/apps.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/apps.js
new file mode 100644
index 000..8719170
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/apps.js
@@ -0,0 +1,22 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Route.extend({
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a570f734/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
new file mode 100644
index 000..8719170
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
@@ -0,0 +1,22 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Route.extend({
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a570f734/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
index 6e57388..64a1b3e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
@@ -22,6 +22,7 @@ export default Ember.Route.extend({
   model(param) {
 // Fetches data from both NM and RM. RM is queried to get node usage info.
 return Ember.RSVP.hash({
+  nodeInfo: { id: param.node_id, addr: param.node_addr },
   node: this.store.findRecord('yarn-node', param.node_addr),
   rmNode: this.store.findRecord('yarn-rm-node', param.node_id)
 });


[19/50] [abbrv] hadoop git commit: YARN-3334. [YARN-3368] Introduce REFRESH button in various UI pages (Sreenath Somarajapuram via Sunil G)

2016-10-12 Thread sunilg
YARN-3334. [YARN-3368] Introduce REFRESH button in various UI pages (Sreenath 
Somarajapuram via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/57e7b9e2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/57e7b9e2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/57e7b9e2

Branch: refs/heads/YARN-3368
Commit: 57e7b9e2c5ddfa0fb0477bbcbc141a9107975bae
Parents: a570f73
Author: sunilg 
Authored: Wed Aug 10 06:53:13 2016 +0530
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../app/components/app-usage-donut-chart.js |  5 ---
 .../src/main/webapp/app/components/bar-chart.js |  4 +-
 .../webapp/app/components/breadcrumb-bar.js | 31 ++
 .../main/webapp/app/components/donut-chart.js   |  8 ++--
 .../app/components/queue-usage-donut-chart.js   |  2 +-
 .../app/controllers/yarn-container-log.js   | 40 ++
 .../webapp/app/controllers/yarn-node-app.js | 36 
 .../src/main/webapp/app/routes/abstract.js  | 32 +++
 .../main/webapp/app/routes/cluster-overview.js  | 12 +-
 .../main/webapp/app/routes/yarn-app-attempt.js  |  9 +++-
 .../main/webapp/app/routes/yarn-app-attempts.js |  8 +++-
 .../src/main/webapp/app/routes/yarn-app.js  | 11 -
 .../src/main/webapp/app/routes/yarn-apps.js |  9 +++-
 .../webapp/app/routes/yarn-container-log.js | 10 -
 .../src/main/webapp/app/routes/yarn-node-app.js |  8 +++-
 .../main/webapp/app/routes/yarn-node-apps.js|  8 +++-
 .../webapp/app/routes/yarn-node-container.js|  8 +++-
 .../webapp/app/routes/yarn-node-containers.js   |  8 +++-
 .../src/main/webapp/app/routes/yarn-node.js |  9 +++-
 .../src/main/webapp/app/routes/yarn-nodes.js|  9 +++-
 .../main/webapp/app/routes/yarn-queue-apps.js   | 12 --
 .../src/main/webapp/app/routes/yarn-queue.js| 14 ---
 .../src/main/webapp/app/routes/yarn-queues.js   | 14 ---
 .../src/main/webapp/app/styles/app.css  |  6 +++
 .../webapp/app/templates/cluster-overview.hbs   |  4 +-
 .../app/templates/components/breadcrumb-bar.hbs | 22 ++
 .../webapp/app/templates/yarn-app-attempt.hbs   |  4 +-
 .../webapp/app/templates/yarn-app-attempts.hbs  |  4 +-
 .../src/main/webapp/app/templates/yarn-app.hbs  |  4 +-
 .../src/main/webapp/app/templates/yarn-apps.hbs |  4 +-
 .../webapp/app/templates/yarn-container-log.hbs |  2 +
 .../main/webapp/app/templates/yarn-node-app.hbs |  2 +
 .../webapp/app/templates/yarn-node-apps.hbs |  4 +-
 .../app/templates/yarn-node-container.hbs   |  4 +-
 .../app/templates/yarn-node-containers.hbs  |  4 +-
 .../src/main/webapp/app/templates/yarn-node.hbs |  4 +-
 .../main/webapp/app/templates/yarn-nodes.hbs|  4 +-
 .../webapp/app/templates/yarn-queue-apps.hbs|  4 +-
 .../main/webapp/app/templates/yarn-queue.hbs|  4 +-
 .../main/webapp/app/templates/yarn-queues.hbs   |  4 +-
 .../components/breadcrumb-bar-test.js   | 43 
 .../unit/controllers/yarn-container-log-test.js | 30 ++
 .../unit/controllers/yarn-node-app-test.js  | 30 ++
 43 files changed, 417 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/57e7b9e2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
index 0baf630..90f41fc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
@@ -26,7 +26,6 @@ export default BaseUsageDonutChart.extend({
   colors: d3.scale.category20().range(),
 
   draw: function() {
-this.initChart();
 var usageByApps = [];
 var avail = 100;
 
@@ -60,8 +59,4 @@ export default BaseUsageDonutChart.extend({
 this.renderDonutChart(usageByApps, this.get("title"), 
this.get("showLabels"),
   this.get("middleLabel"), "100%", "%");
   },
-
-  didInsertElement: function() {
-this.draw();
-  },
 })
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/57e7b9e2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/bar-chart.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/bar-chart.js
 

[39/50] [abbrv] hadoop git commit: YARN-5019. [YARN-3368] Change URLs in the new YARN UI from camel casing to hyphens. (Sunil G via wangda)

2016-10-12 Thread sunilg
YARN-5019. [YARN-3368] Change URLs in the new YARN UI from camel casing to
hyphens. (Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ad52bce7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ad52bce7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ad52bce7

Branch: refs/heads/YARN-3368
Commit: ad52bce73372c43f306393b57609e7bc96d79d26
Parents: 58f6023
Author: Wangda Tan 
Authored: Mon May 9 11:29:59 2016 -0700
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../main/webapp/app/components/tree-selector.js |  4 +--
 .../main/webapp/app/controllers/application.js  | 16 +-
 .../main/webapp/app/helpers/log-files-comma.js  |  2 +-
 .../src/main/webapp/app/helpers/node-link.js|  2 +-
 .../src/main/webapp/app/helpers/node-menu.js| 12 
 .../main/webapp/app/models/yarn-app-attempt.js  |  2 +-
 .../src/main/webapp/app/router.js   | 32 ++--
 .../src/main/webapp/app/routes/index.js |  2 +-
 .../main/webapp/app/routes/yarn-app-attempt.js  |  6 ++--
 .../src/main/webapp/app/routes/yarn-app.js  |  4 +--
 .../src/main/webapp/app/routes/yarn-apps.js |  2 +-
 .../webapp/app/routes/yarn-container-log.js |  2 +-
 .../src/main/webapp/app/routes/yarn-node-app.js |  2 +-
 .../main/webapp/app/routes/yarn-node-apps.js|  2 +-
 .../webapp/app/routes/yarn-node-container.js|  2 +-
 .../webapp/app/routes/yarn-node-containers.js   |  2 +-
 .../src/main/webapp/app/routes/yarn-node.js |  4 +--
 .../src/main/webapp/app/routes/yarn-nodes.js|  2 +-
 .../src/main/webapp/app/routes/yarn-queue.js|  6 ++--
 .../main/webapp/app/routes/yarn-queues/index.js |  2 +-
 .../app/routes/yarn-queues/queues-selector.js   |  2 +-
 .../app/templates/components/app-table.hbs  |  4 +--
 .../webapp/app/templates/yarn-container-log.hbs |  2 +-
 .../main/webapp/app/templates/yarn-node-app.hbs |  4 +--
 .../webapp/app/templates/yarn-node-apps.hbs |  4 +--
 .../app/templates/yarn-node-container.hbs   |  2 +-
 .../app/templates/yarn-node-containers.hbs  |  4 +--
 .../src/main/webapp/app/templates/yarn-node.hbs |  2 +-
 28 files changed, 66 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ad52bce7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
index f7ec020..698c253 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
@@ -126,7 +126,7 @@ export default Ember.Component.extend({
   .attr("transform", function(d) { return "translate(" + source.y0 + "," + 
source.x0 + ")"; })
   .on("click", function(d,i){
 if (d.queueData.get("name") != this.get("selected")) {
-document.location.href = "yarnQueue/" + d.queueData.get("name");
+document.location.href = "yarn-queue/" + d.queueData.get("name");
 }
   }.bind(this));
   // .on("click", click);
@@ -176,7 +176,7 @@ export default Ember.Component.extend({
   .attr("r", 20)
   .attr("href", 
 function(d) {
-  return "yarnQueues/" + d.queueData.get("name");
+  return "yarn-queues/" + d.queueData.get("name");
 })
   .style("stroke", function(d) {
 if (d.queueData.get("name") == this.get("selected")) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ad52bce7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
index 3c68365..2effb13 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
@@ -29,25 +29,25 @@ export default Ember.Controller.extend({
   outputMainMenu: function(){
 var path = this.get('currentPath');
 var html = 'Queues' +
+html = html + '>Queues' +
 '(current)

[26/50] [abbrv] hadoop git commit: YARN-4514. [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS addresses. (Sunil G via wangda)

2016-10-12 Thread sunilg
YARN-4514. [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS 
addresses. (Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/58f6023d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/58f6023d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/58f6023d

Branch: refs/heads/YARN-3368
Commit: 58f6023dfa41a6dfca22ef6bc965adc59f1f6ac6
Parents: b9b9397
Author: Wangda Tan 
Authored: Sat Apr 16 23:04:45 2016 -0700
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../src/main/webapp/app/adapters/abstract.js| 48 +
 .../main/webapp/app/adapters/cluster-info.js| 22 ++
 .../main/webapp/app/adapters/cluster-metric.js  | 22 ++
 .../webapp/app/adapters/yarn-app-attempt.js | 24 ++-
 .../src/main/webapp/app/adapters/yarn-app.js| 27 ++-
 .../webapp/app/adapters/yarn-container-log.js   | 10 ++-
 .../main/webapp/app/adapters/yarn-container.js  | 20 +++---
 .../main/webapp/app/adapters/yarn-node-app.js   | 24 +++
 .../webapp/app/adapters/yarn-node-container.js  | 24 +++
 .../src/main/webapp/app/adapters/yarn-node.js   | 23 +++---
 .../src/main/webapp/app/adapters/yarn-queue.js  | 22 ++
 .../main/webapp/app/adapters/yarn-rm-node.js| 21 ++
 .../hadoop-yarn-ui/src/main/webapp/app/app.js   |  4 +-
 .../src/main/webapp/app/config.js   |  5 +-
 .../src/main/webapp/app/index.html  |  1 +
 .../src/main/webapp/app/initializers/env.js | 29 
 .../src/main/webapp/app/initializers/hosts.js   | 28 
 .../src/main/webapp/app/services/env.js | 59 
 .../src/main/webapp/app/services/hosts.js   | 74 
 .../hadoop-yarn-ui/src/main/webapp/bower.json   | 25 +++
 .../src/main/webapp/config/configs.env  | 48 +
 .../src/main/webapp/config/default-config.js| 32 +
 .../src/main/webapp/config/environment.js   | 11 ++-
 .../src/main/webapp/ember-cli-build.js  | 10 ++-
 .../hadoop-yarn-ui/src/main/webapp/package.json | 35 -
 .../webapp/tests/unit/initializers/env-test.js  | 41 +++
 .../tests/unit/initializers/hosts-test.js   | 41 +++
 .../tests/unit/initializers/jquery-test.js  | 41 +++
 .../main/webapp/tests/unit/services/env-test.js | 30 
 .../webapp/tests/unit/services/hosts-test.js| 30 
 30 files changed, 637 insertions(+), 194 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/58f6023d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
new file mode 100644
index 000..c7e5c36
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+import Ember from 'ember';
+
+export default DS.JSONAPIAdapter.extend({
+  address: null, //Must be set by inheriting classes
+  restNameSpace: null, //Must be set by inheriting classes
+  serverName: null, //Must be set by inheriting classes
+
+  headers: {
+Accept: 'application/json'
+  },
+
+  host: Ember.computed("address", function () {
+var address = this.get("address");
+return this.get(`hosts.${address}`);
+  }),
+
+  namespace: Ember.computed("restNameSpace", function () {
+var serverName = this.get("restNameSpace");
+return this.get(`env.app.namespaces.${serverName}`);
+  }),
+
+  ajax: function(url, method, options) {
+options = options || {};
+options.crossDomain = true;
+options.xhrFields = {
+  withCredentials: true
+};
+options.targetServer = this.get('serverName');
+return this._super(url, method, options);
+  

[03/50] [abbrv] hadoop git commit: HDFS-10637. Modifications to remove the assumption that FsVolumes are backed by java.io.File. (Virajith Jalaparti via lei)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/96b12662/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
index 57fab66..76af724 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
@@ -23,11 +23,13 @@ import java.io.FileOutputStream;
 import java.io.FilenameFilter;
 import java.io.IOException;
 import java.io.OutputStreamWriter;
+import java.net.URI;
 import java.nio.channels.ClosedChannelException;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 import java.nio.file.StandardCopyOption;
 import java.util.Collections;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
@@ -56,13 +58,18 @@ import org.apache.hadoop.hdfs.server.datanode.DatanodeUtil;
 import org.apache.hadoop.hdfs.server.datanode.LocalReplica;
 import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
+import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
 import org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException;
 import org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder;
 import org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline;
 import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
+import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.BlockDirFilter;
+import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.ReportCompiler;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker.RamDiskReplica;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.util.CloseableReferenceCount;
@@ -102,8 +109,14 @@ public class FsVolumeImpl implements FsVolumeSpi {
   private final StorageType storageType;
   private final Map bpSlices
   = new ConcurrentHashMap();
+
+  // Refers to the base StorageLocation used to construct this volume
+  // (i.e., does not include STORAGE_DIR_CURRENT in
+  // /STORAGE_DIR_CURRENT/)
+  private final StorageLocation storageLocation;
+
   private final File currentDir;// /current
-  private final DF usage;   
+  private final DF usage;
   private final long reserved;
   private CloseableReferenceCount reference = new CloseableReferenceCount();
 
@@ -124,19 +137,25 @@ public class FsVolumeImpl implements FsVolumeSpi {
*/
   protected ThreadPoolExecutor cacheExecutor;
   
-  FsVolumeImpl(FsDatasetImpl dataset, String storageID, File currentDir,
-  Configuration conf, StorageType storageType) throws IOException {
+  FsVolumeImpl(FsDatasetImpl dataset, String storageID, StorageDirectory sd,
+  Configuration conf) throws IOException {
+
+if (sd.getStorageLocation() == null) {
+  throw new IOException("StorageLocation specified for storage directory " 
+
+  sd + " is null");
+}
 this.dataset = dataset;
 this.storageID = storageID;
+this.reservedForReplicas = new AtomicLong(0L);
+this.storageLocation = sd.getStorageLocation();
+this.currentDir = sd.getCurrentDir();
+File parent = currentDir.getParentFile();
+this.usage = new DF(parent, conf);
+this.storageType = storageLocation.getStorageType();
 this.reserved = conf.getLong(DFSConfigKeys.DFS_DATANODE_DU_RESERVED_KEY
 + "." + StringUtils.toLowerCase(storageType.toString()), conf.getLong(
 DFSConfigKeys.DFS_DATANODE_DU_RESERVED_KEY,
 DFSConfigKeys.DFS_DATANODE_DU_RESERVED_DEFAULT));
-this.reservedForReplicas = new AtomicLong(0L);
-this.currentDir = currentDir;
-File parent = currentDir.getParentFile();
-this.usage = new DF(parent, conf);
-this.storageType = storageType;
 this.configuredCapacity = -1;
 this.conf = conf;
 cacheExecutor = initializeCacheExecutor(parent);
@@ -285,19 +304,20 @@ public class FsVolumeImpl implements FsVolumeSpi {
 return true;
   }
 
+  @VisibleForTesting
   File getCurrentDir() {
 return currentDir;
   }
   
-  File getRbwDir(String bpid) throws IOException {
+  protected File 
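
The constructor change above, reduced to its shape with hypothetical types
(these interfaces are stand-ins, not the HDFS classes): fail fast if the
storage directory carries no location, then derive the volume's directory
and storage type from that single validated source instead of accepting
them as separate constructor parameters:

    import java.io.File;
    import java.io.IOException;

    final class VolumeSketch {

      interface StorageDir {
        Location location(); // may be null
        File currentDir();
      }

      interface Location {
        String storageType();
      }

      private final File currentDir;
      private final String storageType;

      VolumeSketch(StorageDir sd) throws IOException {
        Location loc = sd.location();
        if (loc == null) {
          throw new IOException(
              "StorageLocation specified for storage directory is null");
        }
        // Every derived field comes from the one validated source.
        this.currentDir = sd.currentDir();
        this.storageType = loc.storageType();
      }
    }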

[07/50] [abbrv] hadoop git commit: HADOOP-13697. LogLevel#main should not throw exception if no arguments. Contributed by Mingliang Liu

2016-10-12 Thread sunilg
HADOOP-13697. LogLevel#main should not throw exception if no arguments. 
Contributed by Mingliang Liu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2fb392a5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2fb392a5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2fb392a5

Branch: refs/heads/YARN-3368
Commit: 2fb392a587d288b628936ca6d18fabad04afc585
Parents: 809cfd2
Author: Mingliang Liu 
Authored: Fri Oct 7 14:05:40 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Oct 11 10:57:08 2016 -0700

--
 .../src/main/java/org/apache/hadoop/log/LogLevel.java   | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2fb392a5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
index 4fa839f..79eae12 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
@@ -47,15 +47,17 @@ import org.apache.hadoop.http.HttpServer2;
 import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
 import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
 import org.apache.hadoop.security.ssl.SSLFactory;
+import org.apache.hadoop.util.GenericOptionsParser;
 import org.apache.hadoop.util.ServletUtil;
 import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
 
 /**
  * Change log level in runtime.
  */
 @InterfaceStability.Evolving
 public class LogLevel {
-  public static final String USAGES = "\nUsage: General options are:\n"
+  public static final String USAGES = "\nUsage: Command options are:\n"
   + "\t[-getlevel   [-protocol (http|https)]\n"
   + "\t[-setlevel"
   + "[-protocol (http|https)]\n";
@@ -67,7 +69,7 @@ public class LogLevel {
*/
   public static void main(String[] args) throws Exception {
 CLI cli = new CLI(new Configuration());
-System.exit(cli.run(args));
+System.exit(ToolRunner.run(cli, args));
   }
 
   /**
@@ -81,6 +83,7 @@ public class LogLevel {
 
   private static void printUsage() {
 System.err.println(USAGES);
+GenericOptionsParser.printGenericCommandUsage(System.err);
   }
 
   public static boolean isValidProtocol(String protocol) {
@@ -107,7 +110,7 @@ public class LogLevel {
 sendLogLevelRequest();
   } catch (HadoopIllegalArgumentException e) {
 printUsage();
-throw e;
+return -1;
   }
   return 0;
 }
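
A minimal sketch of the pattern adopted above: route main() through
ToolRunner so the generic options (-conf, -D, and friends) are parsed and
shown in usage, and return a non-zero exit code on bad arguments instead of
rethrowing. The Tool below is illustrative, not Hadoop's LogLevel.CLI:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class ExampleCli extends Configured implements Tool {

      @Override
      public int run(String[] args) {
        if (args.length == 0) {
          System.err.println("Usage: ExampleCli <arg>");
          return -1; // usage error: report it, do not throw
        }
        System.out.println("arg = " + args[0]);
        return 0;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new ExampleCli(), args));
      }
    }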





[50/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] Clean up the code base, integrate the web UI build into mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to 
mvn, and fix licenses. (wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b9b93975
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b9b93975
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b9b93975

Branch: refs/heads/YARN-3368
Commit: b9b93975198e7be27dec17d96334273d299ab05d
Parents: 6738eca
Author: Wangda Tan 
Authored: Mon Mar 21 14:03:13 2016 -0700
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .gitignore  |  13 +
 BUILDING.txt|   4 +-
 LICENSE.txt |  80 +
 dev-support/create-release.sh   | 144 +
 dev-support/docker/Dockerfile   |   5 +
 .../src/site/markdown/YarnUI2.md|  43 +++
 .../hadoop-yarn/hadoop-yarn-ui/.bowerrc |   4 -
 .../hadoop-yarn/hadoop-yarn-ui/.editorconfig|  34 ---
 .../hadoop-yarn/hadoop-yarn-ui/.ember-cli   |  11 -
 .../hadoop-yarn/hadoop-yarn-ui/.gitignore   |  17 --
 .../hadoop-yarn/hadoop-yarn-ui/.jshintrc|  32 --
 .../hadoop-yarn/hadoop-yarn-ui/.travis.yml  |  23 --
 .../hadoop-yarn/hadoop-yarn-ui/.watchmanconfig  |   3 -
 .../hadoop-yarn/hadoop-yarn-ui/README.md|  24 --
 .../hadoop-yarn-ui/app/adapters/cluster-info.js |  20 --
 .../app/adapters/cluster-metric.js  |  20 --
 .../app/adapters/yarn-app-attempt.js|  32 --
 .../hadoop-yarn-ui/app/adapters/yarn-app.js |  26 --
 .../app/adapters/yarn-container-log.js  |  74 -
 .../app/adapters/yarn-container.js  |  43 ---
 .../app/adapters/yarn-node-app.js   |  63 
 .../app/adapters/yarn-node-container.js |  64 
 .../hadoop-yarn-ui/app/adapters/yarn-node.js|  40 ---
 .../hadoop-yarn-ui/app/adapters/yarn-queue.js   |  20 --
 .../hadoop-yarn-ui/app/adapters/yarn-rm-node.js |  45 ---
 .../hadoop-yarn/hadoop-yarn-ui/app/app.js   |  20 --
 .../hadoop-yarn-ui/app/components/.gitkeep  |   0
 .../app/components/app-attempt-table.js |   4 -
 .../hadoop-yarn-ui/app/components/app-table.js  |   4 -
 .../hadoop-yarn-ui/app/components/bar-chart.js  | 104 ---
 .../app/components/base-chart-component.js  | 109 ---
 .../app/components/container-table.js   |   4 -
 .../app/components/donut-chart.js   | 148 --
 .../app/components/item-selector.js |  21 --
 .../app/components/queue-configuration-table.js |   4 -
 .../app/components/queue-navigator.js   |   4 -
 .../hadoop-yarn-ui/app/components/queue-view.js | 272 -
 .../app/components/simple-table.js  |  58 
 .../app/components/timeline-view.js | 250 
 .../app/components/tree-selector.js | 257 
 .../hadoop-yarn/hadoop-yarn-ui/app/config.js|  27 --
 .../hadoop-yarn/hadoop-yarn-ui/app/constants.js |  24 --
 .../hadoop-yarn-ui/app/controllers/.gitkeep |   0
 .../app/controllers/application.js  |  55 
 .../app/controllers/cluster-overview.js |   5 -
 .../hadoop-yarn-ui/app/controllers/yarn-apps.js |   4 -
 .../app/controllers/yarn-queue.js   |   6 -
 .../hadoop-yarn-ui/app/helpers/.gitkeep |   0
 .../hadoop-yarn-ui/app/helpers/divide.js|  31 --
 .../app/helpers/log-files-comma.js  |  48 ---
 .../hadoop-yarn-ui/app/helpers/node-link.js |  37 ---
 .../hadoop-yarn-ui/app/helpers/node-menu.js |  66 -
 .../hadoop-yarn/hadoop-yarn-ui/app/index.html   |  25 --
 .../hadoop-yarn-ui/app/models/.gitkeep  |   0
 .../hadoop-yarn-ui/app/models/cluster-info.js   |  13 -
 .../hadoop-yarn-ui/app/models/cluster-metric.js | 115 
 .../app/models/yarn-app-attempt.js  |  44 ---
 .../hadoop-yarn-ui/app/models/yarn-app.js   |  65 -
 .../app/models/yarn-container-log.js|  25 --
 .../hadoop-yarn-ui/app/models/yarn-container.js |  39 ---
 .../hadoop-yarn-ui/app/models/yarn-node-app.js  |  44 ---
 .../app/models/yarn-node-container.js   |  57 
 .../hadoop-yarn-ui/app/models/yarn-node.js  |  33 ---
 .../hadoop-yarn-ui/app/models/yarn-queue.js |  76 -
 .../hadoop-yarn-ui/app/models/yarn-rm-node.js   |  92 --
 .../hadoop-yarn-ui/app/models/yarn-user.js  |   8 -
 .../hadoop-yarn/hadoop-yarn-ui/app/router.js|  29 --
 .../hadoop-yarn-ui/app/routes/.gitkeep  |   0
 .../hadoop-yarn-ui/app/routes/application.js|  38 ---
 .../app/routes/cluster-overview.js  |  11 -
 .../hadoop-yarn-ui/app/routes/index.js  |  29 --
 .../app/routes/yarn-app-attempt.js  |  21 --
 

[43/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-container-log-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-container-log-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-container-log-test.js
new file mode 100644
index 000..4e68da0
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-container-log-test.js
@@ -0,0 +1,120 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import { moduleFor, test } from 'ember-qunit';
+import Constants from 'yarn-ui/constants';
+
+moduleFor('route:yarn-container-log', 'Unit | Route | ContainerLog', {
+});
+
+test('Basic creation test', function(assert) {
+  let route = this.subject();
+  assert.ok(route);
+  assert.ok(route.model);
+});
+
+test('Test getting container log', function(assert) {
+  var response = {
+  logs: "This is syslog",
+  containerID: "container_e32_1456000363780_0002_01_01",
+  logFileName: "syslog"};
+  var store = {
+findRecord: function(type) {
+  return new Ember.RSVP.Promise(function(resolve) {
+resolve(response);
+  }
+)}
+  };
+  assert.expect(6);
+  var route = this.subject();
+  route.set('store', store);
+  var model = route.model({node_id: "localhost:64318",
+  node_addr: "localhost:8042",
+  container_id: "container_e32_1456000363780_0002_01_01",
+  filename: "syslog"});
+   model.then(function(value) {
+ assert.ok(value);
+ assert.ok(value.containerLog);
+ assert.deepEqual(value.containerLog, response);
+ assert.ok(value.nodeInfo);
+ assert.equal(value.nodeInfo.addr, 'localhost:8042');
+ assert.equal(value.nodeInfo.id, 'localhost:64318');
+   });
+});
+
+/**
+ * This can happen when an empty response is sent from server
+ */
+test('Test non HTTP error while getting container log', function(assert) {
+  var error = {};
+  var response = {
+  logs: "",
+  containerID: "container_e32_1456000363780_0002_01_01",
+  logFileName: "syslog"};
+  var store = {
+findRecord: function(type) {
+  return new Ember.RSVP.Promise(function(resolve, reject) {
+reject(error);
+  }
+)}
+  };
+  assert.expect(6);
+  var route = this.subject();
+  route.set('store', store);
+  var model = route.model({node_id: "localhost:64318",
+  node_addr: "localhost:8042",
+  container_id: "container_e32_1456000363780_0002_01_01",
+  filename: "syslog"});
+   model.then(function(value) {
+ assert.ok(value);
+ assert.ok(value.containerLog);
+ assert.deepEqual(value.containerLog, response);
+ assert.ok(value.nodeInfo);
+ assert.equal(value.nodeInfo.addr, 'localhost:8042');
+ assert.equal(value.nodeInfo.id, 'localhost:64318');
+   });
+});
+
+test('Test HTTP error while getting container log', function(assert) {
+  var error = {errors: [{status: 404, responseText: 'Not Found'}]};
+  var response = {
+  logs: "",
+  containerID: "container_e32_1456000363780_0002_01_01",
+  logFileName: "syslog"};
+  var store = {
+findRecord: function(type) {
+  return new Ember.RSVP.Promise(function(resolve, reject) {
+reject(error);
+  }
+)}
+  };
+  assert.expect(5);
+  var route = this.subject();
+  route.set('store', store);
+  var model = route.model({node_id: "localhost:64318",
+  node_addr: "localhost:8042",
+  container_id: "container_e32_1456000363780_0002_01_01",
+  filename: "syslog"});
+   model.then(function(value) {
+ assert.ok(value);
+ assert.ok(value.errors);
+ assert.equal(value.errors.length, 1);
+ assert.equal(value.errors[0].status, 404);
+ assert.equal(value.errors[0].responseText, 'Not Found');
+   });
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-node-app-test.js
--
diff 

[40/50] [abbrv] hadoop git commit: YARN-4849. Addendum patch to fix javadocs. (Sunil G via wangda)

2016-10-12 Thread sunilg
 YARN-4849. Addendum patch to fix javadocs. (Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f93c4f01
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f93c4f01
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f93c4f01

Branch: refs/heads/YARN-3368
Commit: f93c4f011a37f0b090825245364f9a8ea02fe96a
Parents: b17
Author: Wangda Tan 
Authored: Fri Sep 9 10:54:37 2016 -0700
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../hadoop/yarn/server/resourcemanager/ResourceManager.java| 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f93c4f01/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
index d32f649..f739e31 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
@@ -916,6 +916,12 @@ public class ResourceManager extends CompositeService 
implements Recoverable {
* Return a HttpServer.Builder that the journalnode / namenode / secondary
* namenode can use to initialize their HTTP / HTTPS server.
*
+   * @param conf configuration object
+   * @param httpAddr HTTP address
+   * @param httpsAddr HTTPS address
+   * @param name  Name of the server
+   * @throws IOException from Builder
+   * @return builder object
*/
   public static HttpServer2.Builder httpServerTemplateForRM(Configuration conf,
   final InetSocketAddress httpAddr, final InetSocketAddress httpsAddr,





[18/50] [abbrv] hadoop git commit: YARN-5504. [YARN-3368] Fix YARN UI build pom.xml (Sreenath Somarajapuram via Sunil G)

2016-10-12 Thread sunilg
YARN-5504. [YARN-3368] Fix YARN UI build pom.xml (Sreenath Somarajapuram via 
Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/30fe1b5e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/30fe1b5e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/30fe1b5e

Branch: refs/heads/YARN-3368
Commit: 30fe1b5e2e031b84e87b960fb4739a23a909b382
Parents: 06f2246
Author: sunilg 
Authored: Thu Aug 25 23:21:29 2016 +0530
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../hadoop-yarn/hadoop-yarn-ui/pom.xml  | 59 +---
 .../src/main/webapp/ember-cli-build.js  |  2 +-
 .../hadoop-yarn-ui/src/main/webapp/package.json |  3 +-
 3 files changed, 17 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/30fe1b5e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index 2933a76..fca8d30 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -35,7 +35,7 @@
 node
 v0.12.2
 2.10.0
-false
+false
   
 
   
@@ -60,19 +60,20 @@
   
 
   
- maven-clean-plugin
- 3.0.0
- 
-false
-
-   
-  
${basedir}/src/main/webapp/bower_components
-   
-   
-  
${basedir}/src/main/webapp/node_modules
-   
-
- 
+maven-clean-plugin
+3.0.0
+
+  ${keep-ui-build-cache}
+  false
+  
+
+  
${basedir}/src/main/webapp/bower_components
+
+
+  ${basedir}/src/main/webapp/node_modules
+
+  
+
   
 
   
@@ -126,21 +127,6 @@
 
   
   
-generate-sources
-bower --allow-root install
-
-  exec
-
-
-  ${webappDir}
-  bower
-  
---allow-root
-install
-  
-
-  
-  
 ember build
 generate-sources
 
@@ -158,21 +144,6 @@
 
   
   
-ember test
-generate-resources
-
-  exec
-
-
-  ${skipTests}
-  ${webappDir}
-  ember
-  
-test
-  
-
-  
-  
 cleanup tmp
 generate-sources
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/30fe1b5e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
index d21cc3e..7736c75 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
@@ -22,7 +22,7 @@ var EmberApp = require('ember-cli/lib/broccoli/ember-app');
 
 module.exports = function(defaults) {
   var app = new EmberApp(defaults, {
-// Add options here
+hinting: false
   });
 
   
app.import("bower_components/datatables/media/css/jquery.dataTables.min.css");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/30fe1b5e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
index baa473a..6a4eb16 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
@@ -9,8 +9,7 @@
   },
   "scripts": {
 "build": "ember build",
-"start": "ember server",
-"test": "ember test"
+"start": "ember server"
   },
   "repository": "",
   "engines": {



[20/50] [abbrv] hadoop git commit: YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena via wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6738eca4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
new file mode 100644
index 000..21a715c
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
@@ -0,0 +1,102 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import { moduleFor, test } from 'ember-qunit';
+
+moduleFor('serializer:yarn-node-app', 'Unit | Serializer | NodeApp', {
+});
+
+test('Basic creation test', function(assert) {
+  let serializer = this.subject();
+
+  assert.ok(serializer);
+  assert.ok(serializer.normalizeSingleResponse);
+  assert.ok(serializer.normalizeArrayResponse);
+  assert.ok(serializer.internalNormalizeSingleResponse);
+});
+
+test('normalizeArrayResponse test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = {
+apps: {
+  app: [{
+id:"application_1456251210105_0001", state:"FINISHED", user:"root"
+  },{
+id:"application_1456251210105_0002", state:"RUNNING",user:"root",
+containerids:["container_e38_1456251210105_0002_01_01",
+"container_e38_1456251210105_0002_01_02"]
+  }]
+}
+  };
+  assert.expect(15);
+  var response =
+  serializer.normalizeArrayResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(response.data.length, 2);
+  assert.equal(response.data[0].attributes.containers, undefined);
+  assert.equal(response.data[1].attributes.containers.length, 2);
+  assert.deepEqual(response.data[1].attributes.containers,
+  payload.apps.app[1].containerids);
+  for (var i = 0; i < 2; i++) {
+assert.equal(response.data[i].type, modelClass.modelName);
+assert.equal(response.data[i].id, payload.apps.app[i].id);
+assert.equal(response.data[i].attributes.appId, payload.apps.app[i].id);
+assert.equal(response.data[i].attributes.state, payload.apps.app[i].state);
+assert.equal(response.data[i].attributes.user, payload.apps.app[i].user);
+  }
+});
+
+test('normalizeArrayResponse no apps test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = { apps: null };
+  assert.expect(5);
+  var response =
+  serializer.normalizeArrayResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(response.data.length, 1);
+  assert.equal(response.data[0].type, modelClass.modelName);
+  assert.equal(response.data[0].id, "dummy");
+  assert.equal(response.data[0].attributes.appId, undefined);
+});
+
+test('normalizeSingleResponse test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = {
+app: {id:"application_1456251210105_0001", state:"FINISHED", user:"root"}
+  };
+  assert.expect(7);
+  var response =
+  serializer.normalizeSingleResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(payload.app.id, response.data.id);
+  assert.equal(modelClass.modelName, response.data.type);
+  assert.equal(payload.app.id, response.data.attributes.appId);
+  assert.equal(payload.app.state, response.data.attributes.state);
+  assert.equal(payload.app.user, response.data.attributes.user);
+  assert.equal(response.data.attributes.containers, undefined);
+});
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6738eca4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
new file mode 100644
index 000..1f08467
--- 

[42/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
deleted file mode 100644
index 5877589..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import { moduleForModel, test } from 'ember-qunit';
-
-moduleForModel('yarn-node', 'Unit | Model | Node', {
-  // Specify the other units that are required for this test.
-  needs: []
-});
-
-test('Basic creation test', function(assert) {
-  let model = this.subject();
-
-  assert.ok(model);
-  assert.ok(model._notifyProperties);
-  assert.ok(model.didLoad);
-  assert.ok(model.totalVmemAllocatedContainersMB);
-  assert.ok(model.vmemCheckEnabled);
-  assert.ok(model.pmemCheckEnabled);
-  assert.ok(model.nodeHealthy);
-  assert.ok(model.lastNodeUpdateTime);
-  assert.ok(model.healthReport);
-  assert.ok(model.nmStartupTime);
-  assert.ok(model.nodeManagerBuildVersion);
-  assert.ok(model.hadoopBuildVersion);
-});
-
-test('test fields', function(assert) {
-  let model = this.subject();
-
-  assert.expect(4);
-  Ember.run(function () {
-model.set("totalVmemAllocatedContainersMB", 4096);
-model.set("totalPmemAllocatedContainersMB", 2048);
-model.set("totalVCoresAllocatedContainers", 4);
-model.set("hadoopBuildVersion", "3.0.0-SNAPSHOT");
-assert.equal(model.get("totalVmemAllocatedContainersMB"), 4096);
-assert.equal(model.get("totalPmemAllocatedContainersMB"), 2048);
-assert.equal(model.get("totalVCoresAllocatedContainers"), 4);
-assert.equal(model.get("hadoopBuildVersion"), "3.0.0-SNAPSHOT");
-  });
-});
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
deleted file mode 100644
index 4fd2517..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
+++ /dev/null
@@ -1,95 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import { moduleForModel, test } from 'ember-qunit';
-
-moduleForModel('yarn-rm-node', 'Unit | Model | RMNode', {
-  // Specify the other units that are required for this test.
-  needs: []
-});
-
-test('Basic creation test', function(assert) {
-  let model = this.subject();
-
-  assert.ok(model);
-  assert.ok(model._notifyProperties);
-  assert.ok(model.didLoad);
-  assert.ok(model.rack);
-  assert.ok(model.state);
-  assert.ok(model.nodeHostName);
-  assert.ok(model.nodeHTTPAddress);
-  assert.ok(model.lastHealthUpdate);
-  assert.ok(model.healthReport);
-  assert.ok(model.numContainers);
-  assert.ok(model.usedMemoryMB);
-  assert.ok(model.availMemoryMB);
-  assert.ok(model.usedVirtualCores);
-  assert.ok(model.availableVirtualCores);
-  assert.ok(model.version);
-  assert.ok(model.nodeLabels);
-  

[46/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
new file mode 100644
index 000..f7ec020
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
@@ -0,0 +1,275 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Component.extend({
+  // Map: <queueName, queue>
+  map : undefined,
+
+  // Normalized data for d3
+  treeData: undefined,
+
+  // folded queues, folded[<queueName>] == true means <queueName> is folded
+  foldedQueues: { },
+
+  // maxDepth
+  maxDepth: 0,
+
+  // num of leaf queue, folded queue is treated as leaf queue
+  numOfLeafQueue: 0,
+
+  // mainSvg
+  mainSvg: undefined,
+
+  // Init data
+  initData: function() {
+this.map = { };
+this.treeData = { };
+this.maxDepth = 0;
+this.numOfLeafQueue = 0;
+
+this.get("model")
+  .forEach(function(o) {
+this.map[o.id] = o;
+  }.bind(this));
+
+var selected = this.get("selected");
+
+this.initQueue("root", 1, this.treeData);
+  },
+
+  // get Children array of given queue
+  getChildrenNamesArray: function(q) {
+var namesArr = [];
+
+// Folded queue's children is empty
+if (this.foldedQueues[q.get("name")]) {
+  return namesArr;
+}
+
+var names = q.get("children");
+if (names) {
+  names.forEach(function(name) {
+namesArr.push(name);
+  });
+}
+
+return namesArr;
+  },
+
+  // Init queues
+  initQueue: function(queueName, depth, node) {
+if ((!queueName) || (!this.map[queueName])) {
+  // Queue is not existed
+  return;
+}
+
+if (depth > this.maxDepth) {
+  this.maxDepth = this.maxDepth + 1;
+}
+
+var queue = this.map[queueName];
+
+var names = this.getChildrenNamesArray(queue);
+
+node.name = queueName;
+node.parent = queue.get("parent");
+node.queueData = queue;
+
+if (names.length > 0) {
+  node.children = [];
+
+  names.forEach(function(name) {
+var childQueueData = {};
+node.children.push(childQueueData);
+this.initQueue(name, depth + 1, childQueueData);
+  }.bind(this));
+} else {
+  this.numOfLeafQueue = this.numOfLeafQueue + 1;
+}
+  },
+
+  update: function(source, root, tree, diagonal) {
+var duration = 300;
+var i = 0;
+
+// Compute the new tree layout.
+var nodes = tree.nodes(root).reverse();
+var links = tree.links(nodes);
+
+// Normalize for fixed-depth.
+nodes.forEach(function(d) { d.y = d.depth * 200; });
+
+// Update the nodes…
+var node = this.mainSvg.selectAll("g.node")
+  .data(nodes, function(d) { return d.id || (d.id = ++i); });
+
+// Enter any new nodes at the parent's previous position.
+var nodeEnter = node.enter().append("g")
+  .attr("class", "node")
+  .attr("transform", function(d) { return "translate(" + source.y0 + "," + 
source.x0 + ")"; })
+  .on("click", function(d,i){
+if (d.queueData.get("name") != this.get("selected")) {
+document.location.href = "yarnQueue/" + d.queueData.get("name");
+}
+  }.bind(this));
+  // .on("click", click);
+
+nodeEnter.append("circle")
+  .attr("r", 1e-6)
+  .style("fill", function(d) {
+var usedCap = d.queueData.get("usedCapacity");
+if (usedCap <= 60.0) {
+  return "LimeGreen";
+} else if (usedCap <= 100.0) {
+  return "DarkOrange";
+} else {
+  return "LightCoral";
+}
+  });
+
+// append percentage
+nodeEnter.append("text")
+  .attr("x", function(d) { return 0; })
+  .attr("dy", ".35em")
+  .attr("text-anchor", function(d) { return "middle"; })
+  .text(function(d) {
+var usedCap = d.queueData.get("usedCapacity");
+if (usedCap >= 100.0) {
+

[22/50] [abbrv] hadoop git commit: YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena via wangda)

2016-10-12 Thread sunilg
YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena via 
wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6738eca4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6738eca4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6738eca4

Branch: refs/heads/YARN-3368
Commit: 6738eca4041eda8dfe4aafae4a1536b4efeb6f44
Parents: 80b5173
Author: Wangda Tan 
Authored: Mon Mar 21 13:13:02 2016 -0700
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../hadoop-yarn-ui/app/adapters/cluster-info.js |   5 +-
 .../app/adapters/cluster-metric.js  |   5 +-
 .../app/adapters/yarn-app-attempt.js|   3 +-
 .../hadoop-yarn-ui/app/adapters/yarn-app.js |   3 +-
 .../app/adapters/yarn-container-log.js  |  74 +
 .../app/adapters/yarn-container.js  |   5 +-
 .../app/adapters/yarn-node-app.js   |  63 
 .../app/adapters/yarn-node-container.js |  64 
 .../hadoop-yarn-ui/app/adapters/yarn-node.js|  40 +
 .../hadoop-yarn-ui/app/adapters/yarn-queue.js   |   3 +-
 .../hadoop-yarn-ui/app/adapters/yarn-rm-node.js |  45 ++
 .../app/components/simple-table.js  |  38 -
 .../hadoop-yarn/hadoop-yarn-ui/app/config.js|  27 
 .../hadoop-yarn/hadoop-yarn-ui/app/constants.js |  24 +++
 .../app/controllers/application.js  |  55 +++
 .../hadoop-yarn-ui/app/helpers/divide.js|  31 
 .../app/helpers/log-files-comma.js  |  48 ++
 .../hadoop-yarn-ui/app/helpers/node-link.js |  37 +
 .../hadoop-yarn-ui/app/helpers/node-menu.js |  66 
 .../hadoop-yarn-ui/app/models/yarn-app.js   |  14 +-
 .../app/models/yarn-container-log.js|  25 +++
 .../hadoop-yarn-ui/app/models/yarn-node-app.js  |  44 ++
 .../app/models/yarn-node-container.js   |  57 +++
 .../hadoop-yarn-ui/app/models/yarn-node.js  |  33 
 .../hadoop-yarn-ui/app/models/yarn-rm-node.js   |  92 +++
 .../hadoop-yarn/hadoop-yarn-ui/app/router.js|  13 ++
 .../hadoop-yarn-ui/app/routes/application.js|  38 +
 .../hadoop-yarn-ui/app/routes/index.js  |  29 
 .../hadoop-yarn-ui/app/routes/yarn-apps.js  |   4 +-
 .../app/routes/yarn-container-log.js|  55 +++
 .../hadoop-yarn-ui/app/routes/yarn-node-app.js  |  29 
 .../hadoop-yarn-ui/app/routes/yarn-node-apps.js |  29 
 .../app/routes/yarn-node-container.js   |  30 
 .../app/routes/yarn-node-containers.js  |  28 
 .../hadoop-yarn-ui/app/routes/yarn-node.js  |  29 
 .../hadoop-yarn-ui/app/routes/yarn-nodes.js |  25 +++
 .../app/serializers/yarn-container-log.js   |  39 +
 .../app/serializers/yarn-node-app.js|  86 +++
 .../app/serializers/yarn-node-container.js  |  74 +
 .../hadoop-yarn-ui/app/serializers/yarn-node.js |  56 +++
 .../app/serializers/yarn-rm-node.js |  77 ++
 .../app/templates/application.hbs   |   4 +-
 .../hadoop-yarn-ui/app/templates/error.hbs  |  19 +++
 .../hadoop-yarn-ui/app/templates/notfound.hbs   |  20 +++
 .../hadoop-yarn-ui/app/templates/yarn-apps.hbs  |   4 +-
 .../app/templates/yarn-container-log.hbs|  36 +
 .../app/templates/yarn-node-app.hbs |  60 
 .../app/templates/yarn-node-apps.hbs|  51 +++
 .../app/templates/yarn-node-container.hbs   |  70 +
 .../app/templates/yarn-node-containers.hbs  |  58 +++
 .../hadoop-yarn-ui/app/templates/yarn-node.hbs  |  94 
 .../hadoop-yarn-ui/app/templates/yarn-nodes.hbs |  65 
 .../hadoop-yarn-ui/app/utils/converter.js   |  21 ++-
 .../hadoop-yarn-ui/app/utils/sorter.js  |  42 -
 .../hadoop-yarn/hadoop-yarn-ui/bower.json   |   2 +-
 .../hadoop-yarn-ui/config/environment.js|   1 -
 .../unit/adapters/yarn-container-log-test.js|  73 +
 .../tests/unit/adapters/yarn-node-app-test.js   |  93 +++
 .../unit/adapters/yarn-node-container-test.js   |  93 +++
 .../tests/unit/adapters/yarn-node-test.js   |  42 +
 .../tests/unit/adapters/yarn-rm-node-test.js|  44 ++
 .../unit/models/yarn-container-log-test.js  |  48 ++
 .../tests/unit/models/yarn-node-app-test.js |  65 
 .../unit/models/yarn-node-container-test.js |  78 ++
 .../tests/unit/models/yarn-node-test.js |  58 +++
 .../tests/unit/models/yarn-rm-node-test.js  |  95 
 .../unit/routes/yarn-container-log-test.js  | 120 +++
 .../tests/unit/routes/yarn-node-app-test.js |  56 +++
 .../tests/unit/routes/yarn-node-apps-test.js|  60 
 .../unit/routes/yarn-node-container-test.js |  61 

[14/50] [abbrv] hadoop git commit: HDFS-10965. Add unit test for HDFS command 'dfsadmin -printTopology'. Contributed by Xiaobing Zhou

2016-10-12 Thread sunilg
HDFS-10965. Add unit test for HDFS command 'dfsadmin -printTopology'. 
Contributed by Xiaobing Zhou


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7ba7092b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7ba7092b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7ba7092b

Branch: refs/heads/YARN-3368
Commit: 7ba7092bbcbbccfa24b672414d315656e600096c
Parents: b84c489
Author: Mingliang Liu 
Authored: Tue Oct 11 16:47:39 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Oct 11 17:23:54 2016 -0700

--
 .../apache/hadoop/hdfs/tools/TestDFSAdmin.java  | 50 
 1 file changed, 50 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7ba7092b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
index 94ecb9e..b49f73d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
@@ -37,6 +37,7 @@ import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.PathUtils;
 import org.apache.hadoop.util.ToolRunner;
 import org.junit.After;
 import org.junit.Before;
@@ -364,6 +365,55 @@ public class TestDFSAdmin {
   }
 
   @Test(timeout = 3)
+  public void testPrintTopology() throws Exception {
+redirectStream();
+
+/* init conf */
+final Configuration dfsConf = new HdfsConfiguration();
+final File baseDir = new File(
+PathUtils.getTestDir(getClass()),
+GenericTestUtils.getMethodName());
+dfsConf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, 
baseDir.getAbsolutePath());
+
+final int numDn = 4;
+final String[] racks = {
+"/d1/r1", "/d1/r2",
+"/d2/r1", "/d2/r2"};
+
+/* init cluster using topology */
+try (MiniDFSCluster miniCluster = new MiniDFSCluster.Builder(dfsConf)
+.numDataNodes(numDn).racks(racks).build()) {
+
+  miniCluster.waitActive();
+  assertEquals(numDn, miniCluster.getDataNodes().size());
+  final DFSAdmin dfsAdmin = new DFSAdmin(dfsConf);
+
+  resetStream();
+  final int ret = ToolRunner.run(dfsAdmin, new String[] 
{"-printTopology"});
+
+  /* collect outputs */
+  final List outs = Lists.newArrayList();
+  scanIntoList(out, outs);
+
+  /* verify results */
+  assertEquals(0, ret);
+  assertEquals(
+  "There should be three lines per Datanode: the 1st line is"
+  + " rack info, 2nd node info, 3rd empty line. The total"
+  + " should be as a result of 3 * numDn.",
+  12, outs.size());
+  assertThat(outs.get(0),
+  is(allOf(containsString("Rack:"), containsString("/d1/r1";
+  assertThat(outs.get(3),
+  is(allOf(containsString("Rack:"), containsString("/d1/r2";
+  assertThat(outs.get(6),
+  is(allOf(containsString("Rack:"), containsString("/d2/r1";
+  assertThat(outs.get(9),
+  is(allOf(containsString("Rack:"), containsString("/d2/r2";
+}
+  }
+
+  @Test(timeout = 3)
   public void testNameNodeGetReconfigurationStatus() throws IOException,
   InterruptedException, TimeoutException {
 ReconfigurationUtil ru = mock(ReconfigurationUtil.class);
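
Note that the new test leans on redirectStream(), resetStream(), and
scanIntoList(...) helpers that live elsewhere in TestDFSAdmin and are not part
of this hunk. A rough sketch of what such helpers can look like -- the names
match the calls above, but the bodies here are assumptions:

    import java.io.ByteArrayOutputStream;
    import java.io.PrintStream;
    import java.util.List;
    import java.util.Scanner;

    // Inside the test class:
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    private PrintStream oldOut;

    /** Assumed helper: capture System.out into an in-memory buffer. */
    private void redirectStream() {
      oldOut = System.out;
      System.setOut(new PrintStream(out, true));
    }

    /** Assumed helper: discard anything captured so far. */
    private void resetStream() {
      out.reset();
    }

    /** Assumed helper: split the captured output into one entry per line. */
    private static void scanIntoList(ByteArrayOutputStream baos, List<String> target) {
      try (Scanner scanner = new Scanner(baos.toString())) {
        while (scanner.hasNextLine()) {
          target.add(scanner.nextLine());
        }
      }
    }

This keeps the assertions readable: the test checks the count (three lines per
datanode: rack header, node line, blank line) and then samples lines 0, 3, 6,
and 9, which are the rack headers for the four configured racks.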





[01/50] [abbrv] hadoop git commit: Merge branch 'HADOOP-12756' into trunk [Forced Update!]

2016-10-12 Thread sunilg
Repository: hadoop
Updated Branches:
  refs/heads/YARN-3368 424117bf3 -> 1e4751815 (forced update)


Merge branch 'HADOOP-12756' into trunk


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/669d6f13
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/669d6f13
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/669d6f13

Branch: refs/heads/YARN-3368
Commit: 669d6f13ec48a90d4ba7e4ed1dd0e9687580f8f3
Parents: c874fa9 c31b5e6
Author: Kai Zheng 
Authored: Tue Oct 11 03:22:11 2016 +0600
Committer: Kai Zheng 
Committed: Tue Oct 11 03:22:11 2016 +0600

--
 .gitignore  |   2 +
 hadoop-project/pom.xml  |  22 +
 .../dev-support/findbugs-exclude.xml|  18 +
 hadoop-tools/hadoop-aliyun/pom.xml  | 154 +
 .../aliyun/oss/AliyunCredentialsProvider.java   |  87 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  | 580 +++
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 516 +
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 260 +
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 167 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  | 113 
 .../hadoop/fs/aliyun/oss/package-info.java  |  22 +
 .../site/markdown/tools/hadoop-aliyun/index.md  | 294 ++
 .../fs/aliyun/oss/AliyunOSSTestUtils.java   |  77 +++
 .../fs/aliyun/oss/TestAliyunCredentials.java|  78 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 239 
 .../oss/TestAliyunOSSFileSystemStore.java   | 125 
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java | 145 +
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 +++
 .../aliyun/oss/contract/AliyunOSSContract.java  |  49 ++
 .../contract/TestAliyunOSSContractCreate.java   |  35 ++
 .../contract/TestAliyunOSSContractDelete.java   |  34 ++
 .../contract/TestAliyunOSSContractDistCp.java   |  44 ++
 .../TestAliyunOSSContractGetFileStatus.java |  35 ++
 .../contract/TestAliyunOSSContractMkdir.java|  34 ++
 .../oss/contract/TestAliyunOSSContractOpen.java |  34 ++
 .../contract/TestAliyunOSSContractRename.java   |  35 ++
 .../contract/TestAliyunOSSContractRootDir.java  |  69 +++
 .../oss/contract/TestAliyunOSSContractSeek.java |  34 ++
 .../src/test/resources/contract/aliyun-oss.xml  | 115 
 .../src/test/resources/core-site.xml|  46 ++
 .../src/test/resources/log4j.properties |  23 +
 hadoop-tools/hadoop-tools-dist/pom.xml  |   6 +
 hadoop-tools/pom.xml|   1 +
 34 files changed, 3695 insertions(+)
--






[44/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
new file mode 100644
index 000..ca80ccd
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
@@ -0,0 +1,58 @@
+{{!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--}}
+
+
+  
+{{node-menu path="yarnNodeContainers" nodeAddr=model.nodeInfo.addr 
nodeId=model.nodeInfo.id}}
+
+  
+
+  
+Container ID
+Container State
+User
+Logs
+  
+
+
+  {{#if model.containers}}
+{{#each model.containers as |container|}}
+  {{#if container.isDummyContainer}}
+No containers found on this 
node
+  {{else}}
+
+  {{container.containerId}}
+  {{container.state}}
+  {{container.user}}
+  
+{{log-files-comma nodeId=model.nodeInfo.id
+nodeAddr=model.nodeInfo.addr
+containerId=container.containerId
+logFiles=container.containerLogFiles}}
+  
+
+  {{/if}}
+{{/each}}
+  {{/if}}
+
+  
+  {{simple-table table-id="node-containers-table" bFilter=true 
colsOrder="0,desc" colTypes="natural" colTargets="0"}}
+
+  
+
+{{outlet}}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
new file mode 100644
index 000..a036076
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
@@ -0,0 +1,94 @@
+{{!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--}}
+
+
+  
+{{node-menu path="yarnNode" nodeId=model.rmNode.id nodeAddr=model.node.id}}
+
+  
+Node Information
+  
+
+  
+Total Vmem allocated for Containers
+{{divide num=model.node.totalVmemAllocatedContainersMB 
den=1024}} GB
+  
+  
+Vmem enforcement enabled
+{{model.node.vmemCheckEnabled}}
+  
+  
+Total Pmem allocated for Containers
+{{divide num=model.node.totalPmemAllocatedContainersMB 
den=1024}} GB
+  
+  
+Pmem enforcement enabled
+{{model.node.pmemCheckEnabled}}
+  
+  
+Total VCores allocated for Containers
+{{model.node.totalVCoresAllocatedContainers}}
+  
+  
+Node Healthy Status
+

[48/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
deleted file mode 100644
index c5394d0..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
+++ /dev/null
@@ -1,49 +0,0 @@
-import DS from 'ember-data';
-import Converter from 'yarn-ui/utils/converter';
-
-export default DS.JSONAPISerializer.extend({
-internalNormalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  
-  if (payload.appAttempt) {
-payload = payload.appAttempt;  
-  }
-  
-  var fixedPayload = {
-id: payload.appAttemptId,
-type: primaryModelClass.modelName, // yarn-app
-attributes: {
-  startTime: Converter.timeStampToDate(payload.startTime),
-  finishedTime: Converter.timeStampToDate(payload.finishedTime),
-  containerId: payload.containerId,
-  nodeHttpAddress: payload.nodeHttpAddress,
-  nodeId: payload.nodeId,
-  state: payload.nodeId,
-  logsLink: payload.logsLink
-}
-  };
-
-  return fixedPayload;
-},
-
-normalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  var p = this.internalNormalizeSingleResponse(store, 
-primaryModelClass, payload, id, requestType);
-  return { data: p };
-},
-
-normalizeArrayResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  // return expected is { data: [ {}, {} ] }
-  var normalizedArrayResponse = {};
-
-  // payload has apps : { app: [ {},{},{} ]  }
-  // need some error handling for ex apps or app may not be defined.
-  normalizedArrayResponse.data = 
payload.appAttempts.appAttempt.map(singleApp => {
-return this.internalNormalizeSingleResponse(store, primaryModelClass,
-  singleApp, singleApp.id, requestType);
-  }, this);
-  return normalizedArrayResponse;
-}
-});
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
deleted file mode 100644
index a038fff..000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
+++ /dev/null
@@ -1,66 +0,0 @@
-import DS from 'ember-data';
-import Converter from 'yarn-ui/utils/converter';
-
-export default DS.JSONAPISerializer.extend({
-internalNormalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  if (payload.app) {
-payload = payload.app;  
-  }
-  
-  var fixedPayload = {
-id: id,
-type: primaryModelClass.modelName, // yarn-app
-attributes: {
-  appName: payload.name,
-  user: payload.user,
-  queue: payload.queue,
-  state: payload.state,
-  startTime: Converter.timeStampToDate(payload.startedTime),
-  elapsedTime: Converter.msToElapsedTime(payload.elapsedTime),
-  finishedTime: Converter.timeStampToDate(payload.finishedTime),
-  finalStatus: payload.finalStatus,
-  progress: payload.progress,
-  diagnostics: payload.diagnostics,
-  amContainerLogs: payload.amContainerLogs,
-  amHostHttpAddress: payload.amHostHttpAddress,
-  logAggregationStatus: payload.logAggregationStatus,
-  unmanagedApplication: payload.unmanagedApplication,
-  amNodeLabelExpression: payload.amNodeLabelExpression,
-  priority: payload.priority,
-  allocatedMB: payload.allocatedMB,
-  allocatedVCores: payload.allocatedVCores,
-  runningContainers: payload.runningContainers,
-  memorySeconds: payload.memorySeconds,
-  vcoreSeconds: payload.vcoreSeconds,
-  preemptedResourceMB: payload.preemptedResourceMB,
-  preemptedResourceVCores: payload.preemptedResourceVCores,
-  numNonAMContainerPreempted: payload.numNonAMContainerPreempted,
-  numAMContainerPreempted: payload.numAMContainerPreempted
-}
-  };
-
-  return fixedPayload;
-},
-
-normalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  var p = this.internalNormalizeSingleResponse(store, 
-primaryModelClass, payload, id, requestType);
-  return { data: p };
-},
-
-normalizeArrayResponse(store, primaryModelClass, payload, id,
-  requestType) {

[29/50] [abbrv] hadoop git commit: YARN-4849. Addendum patch to fix ASF warnings. (Wangda Tan via Sunil G)

2016-10-12 Thread sunilg
YARN-4849. Addendum patch to fix ASF warnings. (Wangda Tan via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8e537433
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8e537433
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8e537433

Branch: refs/heads/YARN-3368
Commit: 8e5374330ce6608d5bc763527c7ee9e5f6a373c9
Parents: 3d4edb4
Author: sunilg 
Authored: Wed Aug 31 23:43:02 2016 +0530
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../assets/images/datatables/Sorting icons.psd | Bin 27490 -> 0 bytes
 .../public/assets/images/datatables/favicon.ico| Bin 894 -> 0 bytes
 .../public/assets/images/datatables/sort_asc.png   | Bin 160 -> 0 bytes
 .../assets/images/datatables/sort_asc_disabled.png | Bin 148 -> 0 bytes
 .../public/assets/images/datatables/sort_both.png  | Bin 201 -> 0 bytes
 .../public/assets/images/datatables/sort_desc.png  | Bin 158 -> 0 bytes
 .../images/datatables/sort_desc_disabled.png   | Bin 146 -> 0 bytes
 7 files changed, 0 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e537433/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/Sorting
 icons.psd
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/Sorting
 icons.psd 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/Sorting
 icons.psd
deleted file mode 100644
index 53b2e06..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/Sorting
 icons.psd and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e537433/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/favicon.ico
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/favicon.ico
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/favicon.ico
deleted file mode 100644
index 6eeaa2a..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/favicon.ico
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e537433/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc.png
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc.png
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc.png
deleted file mode 100644
index e1ba61a..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc.png
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e537433/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc_disabled.png
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc_disabled.png
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc_disabled.png
deleted file mode 100644
index fb11dfe..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_asc_disabled.png
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e537433/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_both.png
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_both.png
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_both.png
deleted file mode 100644
index af5bc7c..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_both.png
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e537433/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_desc.png
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_desc.png
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_desc.png
deleted file mode 100644
index 0e156de..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/assets/images/datatables/sort_desc.png
 and /dev/null differ


[38/50] [abbrv] hadoop git commit: YARN-5321. [YARN-3368] Add resource usage for application by node managers (Wangda Tan via Sunil G) YARN-5320. [YARN-3368] Add resource usage by applications and queues to cluster overview page (Wangda Tan via Sunil G)

2016-10-12 Thread sunilg
YARN-5321. [YARN-3368] Add resource usage for application by node managers 
(Wangda Tan via Sunil G)
YARN-5320. [YARN-3368] Add resource usage by applications and queues to cluster 
overview page  (Wangda Tan via Sunil G)
YARN-5322. [YARN-3368] Add a node heat chart map (Wangda Tan via Sunil G)
YARN-5347. [YARN-3368] Applications page improvements (Sreenath Somarajapuram 
via Sunil G)
YARN-5348. [YARN-3368] Node details page improvements (Sreenath Somarajapuram 
via Sunil G)
YARN-5346. [YARN-3368] Queues page improvements (Sreenath Somarajapuram via 
Sunil G)
YARN-5345. [YARN-3368] Cluster overview page improvements (Sreenath 
Somarajapuram via Sunil G)
YARN-5344. [YARN-3368] Generic UI improvements (Sreenath Somarajapuram via 
Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a570f734
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a570f734
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a570f734

Branch: refs/heads/YARN-3368
Commit: a570f73495fb87eca55e9cfcb3f7f741fd968275
Parents: 58d6c69
Author: Sunil 
Authored: Fri Jul 15 21:16:06 2016 +0530
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../src/main/webapp/app/adapters/yarn-app.js|  14 +
 .../app/components/app-usage-donut-chart.js |  67 
 .../src/main/webapp/app/components/bar-chart.js |   5 +
 .../app/components/base-chart-component.js  |  55 ++-
 .../app/components/base-usage-donut-chart.js|  43 +++
 .../main/webapp/app/components/donut-chart.js   |  55 ++-
 .../main/webapp/app/components/nodes-heatmap.js | 209 +++
 ...er-app-memusage-by-nodes-stacked-barchart.js |  88 +
 ...app-ncontainers-by-nodes-stacked-barchart.js |  67 
 .../app/components/queue-usage-donut-chart.js   |  69 
 .../main/webapp/app/components/queue-view.js|   3 +-
 .../main/webapp/app/components/simple-table.js  |   9 +-
 .../webapp/app/components/stacked-barchart.js   | 198 +++
 .../main/webapp/app/components/timeline-view.js |   2 +-
 .../main/webapp/app/components/tree-selector.js |  43 ++-
 .../webapp/app/controllers/cluster-overview.js  |   9 +
 .../webapp/app/controllers/yarn-app-attempt.js  |  40 +++
 .../webapp/app/controllers/yarn-app-attempts.js |  40 +++
 .../src/main/webapp/app/controllers/yarn-app.js |  38 ++
 .../main/webapp/app/controllers/yarn-apps.js|   9 +
 .../webapp/app/controllers/yarn-node-apps.js|  39 +++
 .../app/controllers/yarn-node-containers.js |  39 +++
 .../main/webapp/app/controllers/yarn-node.js|  37 ++
 .../app/controllers/yarn-nodes-heatmap.js   |  36 ++
 .../main/webapp/app/controllers/yarn-nodes.js   |  33 ++
 .../webapp/app/controllers/yarn-queue-apps.js   |  46 +++
 .../main/webapp/app/controllers/yarn-queue.js   |  20 ++
 .../main/webapp/app/controllers/yarn-queues.js  |  34 ++
 .../webapp/app/controllers/yarn-services.js |  34 ++
 .../main/webapp/app/models/cluster-metric.js|   2 +-
 .../main/webapp/app/models/yarn-app-attempt.js  |  11 +
 .../src/main/webapp/app/models/yarn-app.js  |   4 +
 .../src/main/webapp/app/models/yarn-rm-node.js  |   7 +
 .../src/main/webapp/app/router.js   |  15 +-
 .../src/main/webapp/app/routes/application.js   |   2 +
 .../main/webapp/app/routes/cluster-overview.js  |   9 +-
 .../main/webapp/app/routes/yarn-app-attempts.js |  30 ++
 .../src/main/webapp/app/routes/yarn-app.js  |  17 +-
 .../src/main/webapp/app/routes/yarn-apps.js |   6 +-
 .../main/webapp/app/routes/yarn-apps/apps.js|  22 ++
 .../webapp/app/routes/yarn-apps/services.js |  22 ++
 .../src/main/webapp/app/routes/yarn-node.js |   1 +
 .../src/main/webapp/app/routes/yarn-nodes.js|   5 +-
 .../webapp/app/routes/yarn-nodes/heatmap.js |  22 ++
 .../main/webapp/app/routes/yarn-nodes/table.js  |  22 ++
 .../main/webapp/app/routes/yarn-queue-apps.js   |  36 ++
 .../src/main/webapp/app/routes/yarn-queues.js   |  38 ++
 .../webapp/app/serializers/yarn-app-attempt.js  |  19 +-
 .../src/main/webapp/app/serializers/yarn-app.js |   8 +-
 .../webapp/app/serializers/yarn-container.js|  20 +-
 .../src/main/webapp/app/styles/app.css  | 139 ++--
 .../main/webapp/app/templates/application.hbs   |  99 --
 .../webapp/app/templates/cluster-overview.hbs   | 168 ++---
 .../app/templates/components/app-table.hbs  |  10 +-
 .../templates/components/node-menu-panel.hbs|   2 +-
 .../app/templates/components/nodes-heatmap.hbs  |  27 ++
 .../components/queue-configuration-table.hbs|   4 -
 .../templates/components/queue-navigator.hbs|  14 +-
 .../app/templates/components/timeline-view.hbs  |   3 +-
 .../webapp/app/templates/yarn-app-attempt.hbs   |  13 +-
 .../webapp/app/templates/yarn-app-attempts.hbs  |  57 +++
 .../src/main/webapp/app/templates/yarn-app.hbs  | 346 ---
 

[31/50] [abbrv] hadoop git commit: YARN-5583. [YARN-3368] Fix wrong paths in .gitignore (Sreenath Somarajapuram via Sunil G)

2016-10-12 Thread sunilg
YARN-5583. [YARN-3368] Fix wrong paths in .gitignore (Sreenath Somarajapuram 
via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d74fd598
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d74fd598
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d74fd598

Branch: refs/heads/YARN-3368
Commit: d74fd59892ec5bda47533f359e8be9ff03f41d45
Parents: 30fe1b5
Author: sunilg 
Authored: Tue Aug 30 20:27:59 2016 +0530
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .gitignore | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d74fd598/.gitignore
--
diff --git a/.gitignore b/.gitignore
index 677bde6..f9a7163 100644
--- a/.gitignore
+++ b/.gitignore
@@ -35,8 +35,8 @@ 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.sass-cache
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/connect.lock
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/coverage/*
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/libpeerconnection.log
-hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webappnpm-debug.log
-hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapptestem.log
+hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/npm-debug.log
+hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/testem.log
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/dist
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tmp
 yarnregistry.pdf





[13/50] [abbrv] hadoop git commit: HADOOP-13698. Document caveat for KeyShell when underlying KeyProvider does not delete a key.

2016-10-12 Thread sunilg
HADOOP-13698. Document caveat for KeyShell when underlying KeyProvider does not 
delete a key.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b84c4891
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b84c4891
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b84c4891

Branch: refs/heads/YARN-3368
Commit: b84c4891f9eca8d56593e48e9df88be42e24220d
Parents: 3c9a010
Author: Xiao Chen 
Authored: Tue Oct 11 17:05:00 2016 -0700
Committer: Xiao Chen 
Committed: Tue Oct 11 17:05:00 2016 -0700

--
 .../hadoop-common/src/site/markdown/CommandsManual.md| 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b84c4891/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
index 4d7d504..2ece71a 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
@@ -202,7 +202,9 @@ Manage keys via the KeyProvider. For details on 
KeyProviders, see the [Transpare
 
 Providers frequently require that a password or other secret is supplied. If 
the provider requires a password and is unable to find one, it will use a 
default password and emit a warning message that the default password is being 
used. If the `-strict` flag is supplied, the warning message becomes an error 
message and the command returns immediately with an error status.
 
-NOTE: Some KeyProviders (e.g. 
org.apache.hadoop.crypto.key.JavaKeyStoreProvider) does not support uppercase 
key names.
+NOTE: Some KeyProviders (e.g. 
org.apache.hadoop.crypto.key.JavaKeyStoreProvider) do not support uppercase key 
names.
+
+NOTE: Some KeyProviders do not directly execute a key deletion (e.g. they may 
perform a soft delete instead, or delay the actual deletion to prevent 
mistakes). In these cases, one may encounter errors when creating or deleting 
a key with the same name after deleting it. Please check the underlying 
KeyProvider for details.
 
 ### `trace`
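
Returning to the key-deletion caveat above: it is easiest to observe through
the KeyProvider API that the `hadoop key` shell drives. A minimal sketch, not
part of the patch; the single-provider assumption and the key name demoKey are
illustrative only:

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KeyDeleteProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumes exactly one provider is configured via
    // hadoop.security.key.provider.path.
    List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
    KeyProvider provider = providers.get(0);

    provider.createKey("demoKey", KeyProvider.options(conf));
    provider.flush();
    provider.deleteKey("demoKey");
    provider.flush();

    // A provider that soft-deletes or defers the deletion may still report
    // metadata here, and an immediate re-create of "demoKey" can fail.
    if (provider.getMetadata("demoKey") != null) {
      System.out.println("demoKey still visible; deletion not yet effective");
    }
  }
}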
 





[33/50] [abbrv] hadoop git commit: YARN-4733. [YARN-3368] Initial commit of new YARN web UI. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/80b51737/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/cluster-metric.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/cluster-metric.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/cluster-metric.js
new file mode 100644
index 000..d39885e
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/cluster-metric.js
@@ -0,0 +1,29 @@
+import DS from 'ember-data';
+
+export default DS.JSONAPISerializer.extend({
+normalizeSingleResponse(store, primaryModelClass, payload, id,
+  requestType) {
+  var fixedPayload = {
+id: id,
+type: primaryModelClass.modelName,
+attributes: payload
+  };
+
+  return this._super(store, primaryModelClass, fixedPayload, id,
+requestType);
+},
+
+normalizeArrayResponse(store, primaryModelClass, payload, id,
+  requestType) {
+  // return expected is { data: [ {}, {} ] }
+  var normalizedArrayResponse = {};
+
+  // payload has apps : { app: [ {},{},{} ]  }
+  // need some error handling for ex apps or app may not be defined.
+  normalizedArrayResponse.data = [
+this.normalizeSingleResponse(store, primaryModelClass,
+  payload.clusterMetrics, 1, requestType)
+  ];
+  return normalizedArrayResponse;
+}
+});
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/80b51737/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
new file mode 100644
index 000..c5394d0
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
@@ -0,0 +1,49 @@
+import DS from 'ember-data';
+import Converter from 'yarn-ui/utils/converter';
+
+export default DS.JSONAPISerializer.extend({
+internalNormalizeSingleResponse(store, primaryModelClass, payload, id,
+  requestType) {
+  
+  if (payload.appAttempt) {
+payload = payload.appAttempt;  
+  }
+  
+  var fixedPayload = {
+id: payload.appAttemptId,
+type: primaryModelClass.modelName, // yarn-app
+attributes: {
+  startTime: Converter.timeStampToDate(payload.startTime),
+  finishedTime: Converter.timeStampToDate(payload.finishedTime),
+  containerId: payload.containerId,
+  nodeHttpAddress: payload.nodeHttpAddress,
+  nodeId: payload.nodeId,
+  state: payload.nodeId,
+  logsLink: payload.logsLink
+}
+  };
+
+  return fixedPayload;
+},
+
+normalizeSingleResponse(store, primaryModelClass, payload, id,
+  requestType) {
+  var p = this.internalNormalizeSingleResponse(store, 
+primaryModelClass, payload, id, requestType);
+  return { data: p };
+},
+
+normalizeArrayResponse(store, primaryModelClass, payload, id,
+  requestType) {
+  // return expected is { data: [ {}, {} ] }
+  var normalizedArrayResponse = {};
+
+  // payload has apps : { app: [ {},{},{} ]  }
+  // need some error handling for ex apps or app may not be defined.
+  normalizedArrayResponse.data = 
payload.appAttempts.appAttempt.map(singleApp => {
+return this.internalNormalizeSingleResponse(store, primaryModelClass,
+  singleApp, singleApp.id, requestType);
+  }, this);
+  return normalizedArrayResponse;
+}
+});
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/80b51737/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
new file mode 100644
index 000..a038fff
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
@@ -0,0 +1,66 @@
+import DS from 'ember-data';
+import Converter from 'yarn-ui/utils/converter';
+
+export default DS.JSONAPISerializer.extend({
+internalNormalizeSingleResponse(store, primaryModelClass, payload, id,
+  requestType) {
+  if (payload.app) {
+payload = payload.app;  
+  }
+  
+  var fixedPayload = {
+id: id,
+type: primaryModelClass.modelName, // yarn-app
+attributes: {
+  appName: payload.name,
+  user: payload.user,
+  queue: payload.queue,
+  state: payload.state,
+  startTime: Converter.timeStampToDate(payload.startedTime),
+  elapsedTime: 

[25/50] [abbrv] hadoop git commit: YARN-4515. [YARN-3368] Support hosting web UI framework inside YARN RM. (Sunil G via wangda) YARN-5000. [YARN-3368] App attempt page is not loading when timeline ser

2016-10-12 Thread sunilg
YARN-4515. [YARN-3368] Support hosting web UI framework inside YARN RM. (Sunil 
G via wangda)
YARN-5000. [YARN-3368] App attempt page is not loading when timeline server is 
not started (Sunil G via wangda)
YARN-5038. [YARN-3368] Application and Container pages shows wrong values when 
RM is stopped. (Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/639606b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/639606b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/639606b0

Branch: refs/heads/YARN-3368
Commit: 639606b0fad5d45fa243a240087c2ee3ffc82d34
Parents: ad52bce
Author: Wangda Tan 
Authored: Tue May 17 22:28:24 2016 -0700
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 LICENSE.txt |  2 +
 .../resources/assemblies/hadoop-yarn-dist.xml   |  7 ++
 .../hadoop/yarn/conf/YarnConfiguration.java | 23 ++
 .../src/main/resources/yarn-default.xml | 26 +++
 .../server/resourcemanager/ResourceManager.java | 76 +---
 .../hadoop-yarn/hadoop-yarn-ui/pom.xml  |  4 +-
 .../webapp/app/adapters/yarn-app-attempt.js |  4 +-
 .../webapp/app/adapters/yarn-container-log.js   |  2 +-
 .../main/webapp/app/adapters/yarn-node-app.js   | 10 ++-
 .../webapp/app/adapters/yarn-node-container.js  | 10 ++-
 .../src/main/webapp/app/adapters/yarn-node.js   |  5 +-
 .../main/webapp/app/components/timeline-view.js | 17 +++--
 .../main/webapp/app/components/tree-selector.js |  4 +-
 .../main/webapp/app/helpers/log-files-comma.js  |  2 +-
 .../src/main/webapp/app/helpers/node-link.js|  2 +-
 .../src/main/webapp/app/helpers/node-menu.js|  6 +-
 .../src/main/webapp/app/helpers/node-name.js| 46 
 .../main/webapp/app/models/yarn-app-attempt.js  | 72 ++-
 .../src/main/webapp/app/models/yarn-app.js  | 14 
 .../main/webapp/app/models/yarn-container.js|  7 ++
 .../main/webapp/app/routes/yarn-app-attempt.js  |  6 +-
 .../webapp/app/serializers/yarn-app-attempt.js  |  5 +-
 .../src/main/webapp/app/serializers/yarn-app.js | 11 ++-
 .../webapp/app/serializers/yarn-container.js|  3 +-
 .../webapp/app/serializers/yarn-node-app.js |  5 +-
 .../app/serializers/yarn-node-container.js  |  5 +-
 .../main/webapp/app/serializers/yarn-rm-node.js |  5 +-
 .../main/webapp/app/templates/application.hbs   | 21 +-
 .../templates/components/app-attempt-table.hbs  | 22 +-
 .../app/templates/components/app-table.hbs  |  8 +--
 .../templates/components/container-table.hbs|  4 +-
 .../templates/components/node-menu-panel.hbs| 44 
 .../app/templates/components/timeline-view.hbs  |  2 +-
 .../src/main/webapp/app/templates/error.hbs |  2 +-
 .../webapp/app/templates/yarn-app-attempt.hbs   |  4 ++
 .../src/main/webapp/app/templates/yarn-app.hbs  |  2 +-
 .../src/main/webapp/app/templates/yarn-apps.hbs |  9 ++-
 .../main/webapp/app/templates/yarn-node-app.hbs |  4 +-
 .../webapp/app/templates/yarn-node-apps.hbs | 12 ++--
 .../app/templates/yarn-node-container.hbs   |  2 +-
 .../app/templates/yarn-node-containers.hbs  | 12 ++--
 .../src/main/webapp/app/templates/yarn-node.hbs |  2 +-
 .../main/webapp/app/templates/yarn-nodes.hbs| 10 ++-
 .../main/webapp/app/templates/yarn-queue.hbs|  8 ++-
 .../src/main/webapp/config/environment.js   |  2 +-
 .../hadoop-yarn-ui/src/main/webapp/package.json |  2 +
 .../webapp/tests/unit/helpers/node-name-test.js | 28 
 47 files changed, 486 insertions(+), 93 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/639606b0/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 45b6cdf..5efbd14 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1882,6 +1882,7 @@ The Apache Hadoop YARN Web UI component bundles the 
following files under the MI
  - datatables v1.10.8 (https://datatables.net/)
  - moment v2.10.6 (http://momentjs.com/) - Copyright (c) 2011-2015 Tim Wood, 
Iskren Chernev, Moment.js contributors
  - em-helpers v0.5.8 (https://github.com/sreenaths/em-helpers)
+ - ember-array-contains-helper v1.0.2 
(https://github.com/bmeurant/ember-array-contains-helper)
  - ember-cli-app-version v0.5.8 
(https://github.com/EmberSherpa/ember-cli-app-version) - Authored by Taras 
Mankovski 
  - ember-cli-babel v5.1.6 (https://github.com/babel/ember-cli-babel) - 
Authored by Stefan Penner 
  - ember-cli-content-security-policy v0.4.0 
(https://github.com/rwjblue/ember-cli-content-security-policy)
@@ -1895,6 +1896,7 @@ The Apache Hadoop YARN Web UI component bundles the 
following files under the MI
  - ember-cli-sri v1.2.1 

[06/50] [abbrv] hadoop git commit: HDFS-10916. Switch from "raw" to "system" xattr namespace for erasure coding policy. (Andrew Wang via lei)

2016-10-12 Thread sunilg
HDFS-10916. Switch from "raw" to "system" xattr namespace for erasure coding 
policy. (Andrew Wang via lei)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/809cfd27
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/809cfd27
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/809cfd27

Branch: refs/heads/YARN-3368
Commit: 809cfd27a30900d2c0e0e133574de49d0b4538cf
Parents: ecb51b8
Author: Lei Xu 
Authored: Tue Oct 11 10:04:46 2016 -0700
Committer: Lei Xu 
Committed: Tue Oct 11 10:04:46 2016 -0700

--
 .../org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/809cfd27/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
index 3798394..d112a48 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
@@ -369,7 +369,7 @@ public interface HdfsServerConstants {
   String SECURITY_XATTR_UNREADABLE_BY_SUPERUSER =
   "security.hdfs.unreadable.by.superuser";
   String XATTR_ERASURECODING_POLICY =
-  "raw.hdfs.erasurecoding.policy";
+  "system.hdfs.erasurecoding.policy";
 
   long BLOCK_GROUP_INDEX_MASK = 15;
   byte MAX_BLOCKS_IN_GROUP = 16;





[32/50] [abbrv] hadoop git commit: YARN-5509. Build error due to preparing 3.0.0-alpha2 deployment. (Kai Sasaki via wangda)

2016-10-12 Thread sunilg
YARN-5509. Build error due to preparing 3.0.0-alpha2 deployment. (Kai Sasaki 
via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/55b1afa7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/55b1afa7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/55b1afa7

Branch: refs/heads/YARN-3368
Commit: 55b1afa7590cca42954da50a37db8b5847c4bb0d
Parents: 57e7b9e
Author: Wangda Tan 
Authored: Thu Aug 11 14:59:14 2016 -0700
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/55b1afa7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index 6d46fda..2933a76 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -20,12 +20,12 @@
   
 hadoop-yarn
 org.apache.hadoop
-3.0.0-alpha1-SNAPSHOT
+3.0.0-alpha2-SNAPSHOT
   
   4.0.0
   org.apache.hadoop
   hadoop-yarn-ui
-  3.0.0-alpha1-SNAPSHOT
+  3.0.0-alpha2-SNAPSHOT
   Apache Hadoop YARN UI
   ${packaging.type}
 





[04/50] [abbrv] hadoop git commit: HDFS-10637. Modifications to remove the assumption that FsVolumes are backed by java.io.File. (Virajith Jalaparti via lei)

2016-10-12 Thread sunilg
HDFS-10637. Modifications to remove the assumption that FsVolumes are backed by 
java.io.File. (Virajith Jalaparti via lei)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/96b12662
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/96b12662
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/96b12662

Branch: refs/heads/YARN-3368
Commit: 96b12662ea76e3ded4ef13944fc8df206cfb4613
Parents: 0773ffd
Author: Lei Xu 
Authored: Mon Oct 10 15:28:19 2016 -0700
Committer: Lei Xu 
Committed: Mon Oct 10 15:30:03 2016 -0700

--
 .../hadoop/hdfs/server/common/Storage.java  |  22 ++
 .../server/datanode/BlockPoolSliceStorage.java  |  20 +-
 .../hdfs/server/datanode/BlockScanner.java  |   8 +-
 .../hadoop/hdfs/server/datanode/DataNode.java   |  34 +-
 .../hdfs/server/datanode/DataStorage.java   |  34 +-
 .../hdfs/server/datanode/DirectoryScanner.java  | 320 +--
 .../hdfs/server/datanode/DiskBalancer.java  |  25 +-
 .../hdfs/server/datanode/LocalReplica.java  |   2 +-
 .../hdfs/server/datanode/ReplicaInfo.java   |   2 +-
 .../hdfs/server/datanode/StorageLocation.java   |  32 +-
 .../hdfs/server/datanode/VolumeScanner.java |  27 +-
 .../server/datanode/fsdataset/FsDatasetSpi.java |   5 +-
 .../server/datanode/fsdataset/FsVolumeSpi.java  | 234 +-
 .../impl/FsDatasetAsyncDiskService.java |  40 ++-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 136 
 .../datanode/fsdataset/impl/FsVolumeImpl.java   | 233 --
 .../fsdataset/impl/FsVolumeImplBuilder.java |  65 
 .../datanode/fsdataset/impl/FsVolumeList.java   |  44 +--
 .../impl/RamDiskAsyncLazyPersistService.java|  79 +++--
 .../fsdataset/impl/VolumeFailureInfo.java   |  13 +-
 .../hdfs/server/namenode/FSNamesystem.java  |   2 +-
 .../TestNameNodePrunesMissingStorages.java  |  15 +-
 .../server/datanode/SimulatedFSDataset.java |  46 ++-
 .../hdfs/server/datanode/TestBlockScanner.java  |   3 +-
 .../datanode/TestDataNodeHotSwapVolumes.java|  15 +-
 .../datanode/TestDataNodeVolumeFailure.java |  12 +-
 .../TestDataNodeVolumeFailureReporting.java |  10 +
 .../server/datanode/TestDirectoryScanner.java   |  76 +++--
 .../hdfs/server/datanode/TestDiskError.java |   2 +-
 .../extdataset/ExternalDatasetImpl.java |  10 +-
 .../datanode/extdataset/ExternalVolumeImpl.java |  44 ++-
 .../fsdataset/impl/FsDatasetImplTestUtils.java  |   9 +-
 .../fsdataset/impl/TestFsDatasetImpl.java   |  69 ++--
 .../fsdataset/impl/TestFsVolumeList.java|  83 +++--
 .../TestDiskBalancerWithMockMover.java  |   4 +-
 35 files changed, 1062 insertions(+), 713 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/96b12662/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 9218e9d..e55de35 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
 import org.apache.hadoop.io.nativeio.NativeIO;
 import org.apache.hadoop.io.nativeio.NativeIOException;
 import org.apache.hadoop.util.ToolRunner;
@@ -269,11 +270,17 @@ public abstract class Storage extends StorageInfo {
 
 private String storageUuid = null;  // Storage directory identifier.
 
+private final StorageLocation location;
 public StorageDirectory(File dir) {
   // default dirType is null
   this(dir, null, false);
 }
 
+public StorageDirectory(StorageLocation location) {
+  // default dirType is null
+  this(location.getFile(), null, false, location);
+}
+
 public StorageDirectory(File dir, StorageDirType dirType) {
   this(dir, dirType, false);
 }
@@ -294,11 +301,22 @@ public abstract class Storage extends StorageInfo {
  *  disables locking on the storage directory, false enables 
locking
  */
 public StorageDirectory(File dir, StorageDirType dirType, boolean 
isShared) {
+  this(dir, dirType, isShared, null);
+}
+
+public StorageDirectory(File dir, 

[23/50] [abbrv] hadoop git commit: YARN-5488. [YARN-3368] Applications table overflows beyond the page boundary(Harish Jaiprakash via Sunil G)

2016-10-12 Thread sunilg
YARN-5488. [YARN-3368] Applications table overflows beyond the page 
boundary(Harish Jaiprakash via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3e26433a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3e26433a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3e26433a

Branch: refs/heads/YARN-3368
Commit: 3e26433a2c80385967e7b2c4fb899f807d355ba5
Parents: 55b1afa
Author: sunilg 
Authored: Fri Aug 12 14:51:03 2016 +0530
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../src/main/webapp/app/styles/app.css  |  4 +
 .../src/main/webapp/app/templates/yarn-app.hbs  | 98 ++--
 2 files changed, 54 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e26433a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
index a68a0ac..da5b4bf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
@@ -273,3 +273,7 @@ li a.navigation-link.ember-view {
   right: 20px;
   top: 3px;
 }
+
+.x-scroll {
+  overflow-x: scroll;
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e26433a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
index 49c4bfd..9e92fc1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
@@ -49,55 +49,57 @@
 
   
 Basic Info
-
-  
-
-  Application ID
-  Name
-  User
-  Queue
-  State
-  Final Status
-  Start Time
-  Elapsed Time
-  Finished Time
-  Priority
-  Progress
-  Is Unmanaged AM
-
-  
+
+  
+
+  
+Application ID
+Name
+User
+Queue
+State
+Final Status
+Start Time
+Elapsed Time
+Finished Time
+Priority
+Progress
+Is Unmanaged AM
+  
+
 
-  
-
-  {{model.app.id}}
-  {{model.app.appName}}
-  {{model.app.user}}
-  {{model.app.queue}}
-  {{model.app.state}}
-  
-
-  {{model.app.finalStatus}}
-
-  
-  {{model.app.startTime}}
-  {{model.app.elapsedTime}}
-  {{model.app.validatedFinishedTs}}
-  {{model.app.priority}}
-  
-
-  
-{{model.app.progress}}%
+
+  
+{{model.app.id}}
+{{model.app.appName}}
+{{model.app.user}}
+{{model.app.queue}}
+{{model.app.state}}
+
+  
+{{model.app.finalStatus}}
+  
+
+{{model.app.startTime}}
+{{model.app.elapsedTime}}
+{{model.app.validatedFinishedTs}}
+{{model.app.priority}}
+
+  
+
+  {{model.app.progress}}%
+
   
-
-  
-  {{model.app.unmanagedApplication}}
-
-  
-
+
+{{model.app.unmanagedApplication}}
+  
+
+  
+ 

[16/50] [abbrv] hadoop git commit: YARN-5677. RM should transition to standby when connection is lost for an extended period. (Daniel Templeton via kasha)

2016-10-12 Thread sunilg
YARN-5677. RM should transition to standby when connection is lost for an 
extended period. (Daniel Templeton via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6476934a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6476934a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6476934a

Branch: refs/heads/YARN-3368
Commit: 6476934ae5de1be7988ab198b673d82fe0f006e3
Parents: 6378845
Author: Karthik Kambatla 
Authored: Tue Oct 11 22:07:10 2016 -0700
Committer: Karthik Kambatla 
Committed: Tue Oct 11 22:07:10 2016 -0700

--
 .../resourcemanager/EmbeddedElectorService.java |  59 +-
 .../resourcemanager/TestRMEmbeddedElector.java  | 191 +++
 2 files changed, 244 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6476934a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
index 72327e8..88d2e10 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/EmbeddedElectorService.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.yarn.server.resourcemanager;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.protobuf.InvalidProtocolBufferException;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -39,6 +40,8 @@ import org.apache.zookeeper.data.ACL;
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Timer;
+import java.util.TimerTask;
 
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
@@ -54,6 +57,10 @@ public class EmbeddedElectorService extends AbstractService
 
   private byte[] localActiveNodeInfo;
   private ActiveStandbyElector elector;
+  private long zkSessionTimeout;
+  private Timer zkDisconnectTimer;
+  @VisibleForTesting
+  final Object zkDisconnectLock = new Object();
 
   EmbeddedElectorService(RMContext rmContext) {
 super(EmbeddedElectorService.class.getName());
@@ -80,7 +87,7 @@ public class EmbeddedElectorService extends AbstractService
 YarnConfiguration.DEFAULT_AUTO_FAILOVER_ZK_BASE_PATH);
 String electionZNode = zkBasePath + "/" + clusterId;
 
-long zkSessionTimeout = conf.getLong(YarnConfiguration.RM_ZK_TIMEOUT_MS,
+zkSessionTimeout = conf.getLong(YarnConfiguration.RM_ZK_TIMEOUT_MS,
 YarnConfiguration.DEFAULT_RM_ZK_TIMEOUT_MS);
 
 List zkAcls = RMZKUtils.getZKAcls(conf);
@@ -123,6 +130,8 @@ public class EmbeddedElectorService extends AbstractService
 
   @Override
   public void becomeActive() throws ServiceFailedException {
+cancelDisconnectTimer();
+
 try {
   rmContext.getRMAdminService().transitionToActive(req);
 } catch (Exception e) {
@@ -132,6 +141,8 @@ public class EmbeddedElectorService extends AbstractService
 
   @Override
   public void becomeStandby() {
+cancelDisconnectTimer();
+
 try {
   rmContext.getRMAdminService().transitionToStandby(req);
 } catch (Exception e) {
@@ -139,13 +150,49 @@ public class EmbeddedElectorService extends 
AbstractService
 }
   }
 
+  /**
+   * Stop the disconnect timer.  Any running tasks will be allowed to complete.
+   */
+  private void cancelDisconnectTimer() {
+synchronized (zkDisconnectLock) {
+  if (zkDisconnectTimer != null) {
+zkDisconnectTimer.cancel();
+zkDisconnectTimer = null;
+  }
+}
+  }
+
+  /**
+   * When the ZK client loses contact with ZK, this method will be called to
+   * allow the RM to react. Because the loss of connection can be noticed
+   * before the session timeout happens, it is undesirable to transition
+   * immediately. Instead the method starts a timer that will wait
+   * {@link YarnConfiguration#RM_ZK_TIMEOUT_MS} milliseconds before
+   * initiating the transition into standby state.
+   */
   @Override
   public void enterNeutralMode() {
-/**
- * Possibly due to transient connection issues. Do nothing.
- * TODO: Might want to keep track of how long in 
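
The added Javadoc above spells out the idea even though the diff is cut off
here: a lost ZK connection arms a timer for the session timeout, and becoming
active or standby cancels it before the transition fires. A self-contained
sketch of that debounce pattern, with illustrative names rather than the
patch's own code:

import java.util.Timer;
import java.util.TimerTask;

class DisconnectDebouncer {
  private final long timeoutMs;  // e.g. yarn.resourcemanager.zk-timeout-ms
  private final Object lock = new Object();
  private Timer timer;

  DisconnectDebouncer(long timeoutMs) { this.timeoutMs = timeoutMs; }

  void onDisconnect(Runnable transitionToStandby) {
    synchronized (lock) {
      if (timer != null) {
        return;  // a countdown is already running
      }
      timer = new Timer("zk-disconnect", true);
      timer.schedule(new TimerTask() {
        @Override
        public void run() {
          // Connection stayed down for the full timeout: give up leadership.
          transitionToStandby.run();
        }
      }, timeoutMs);
    }
  }

  void cancel() {  // called from becomeActive()/becomeStandby()
    synchronized (lock) {
      if (timer != null) {
        timer.cancel();
        timer = null;
      }
    }
  }
}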

[45/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
new file mode 100644
index 000..89858bf
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Route.extend({
+  model(param) {
+return Ember.RSVP.hash({
+  selected : param.queue_name,
+  queues: this.store.findAll('yarnQueue'),
+  selectedQueue : undefined,
+  apps: undefined, // apps of selected queue
+});
+  },
+
+  afterModel(model) {
+model.selectedQueue = this.store.peekRecord('yarnQueue', model.selected);
+model.apps = this.store.findAll('yarnApp');
+model.apps.forEach(function(o) {
+  console.log(o);
+})
+  }
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
new file mode 100644
index 000..7da6f6d
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
@@ -0,0 +1,23 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+export default Ember.Route.extend({
+  beforeModel() {
+this.transitionTo('yarnQueues.root');
+  }
+});
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
new file mode 100644
index 000..3686c83
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
@@ -0,0 +1,25 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+

[41/50] [abbrv] hadoop git commit: YARN-5503. [YARN-3368] Add missing hidden files in webapp folder for deployment (Sreenath Somarajapuram via Sunil G)

2016-10-12 Thread sunilg
YARN-5503. [YARN-3368] Add missing hidden files in webapp folder for deployment 
(Sreenath Somarajapuram via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3d4edb49
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3d4edb49
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3d4edb49

Branch: refs/heads/YARN-3368
Commit: 3d4edb497991c2f785faecbf4ce955e6773e3d8a
Parents: d74fd59
Author: sunilg 
Authored: Tue Aug 30 20:58:35 2016 +0530
Committer: sunilg 
Committed: Wed Oct 12 20:36:11 2016 +0530

--
 .../hadoop-yarn/hadoop-yarn-ui/pom.xml  | 19 ++-
 .../hadoop-yarn-ui/src/main/webapp/.bowerrc |  4 +++
 .../src/main/webapp/.editorconfig   | 34 
 .../hadoop-yarn-ui/src/main/webapp/.ember-cli   |  9 ++
 .../hadoop-yarn-ui/src/main/webapp/.jshintrc| 32 ++
 .../src/main/webapp/.watchmanconfig |  3 ++
 6 files changed, 100 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d4edb49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index fca8d30..b750a73 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -30,7 +30,7 @@
   ${packaging.type}
 
   
-jar
+war
 src/main/webapp
 node
 v0.12.2
@@ -52,9 +52,26 @@
 src/main/webapp/bower.json
 src/main/webapp/package.json
 src/main/webapp/testem.json
+
+src/main/webapp/dist/**/*
+src/main/webapp/tmp/**/*
 src/main/webapp/public/assets/images/**/*
+src/main/webapp/public/assets/images/*
 src/main/webapp/public/robots.txt
+
+public/assets/images/**/*
 public/crossdomain.xml
+
+src/main/webapp/.tmp/**/*
+src/main/webapp/.bowerrc
+src/main/webapp/.editorconfig
+src/main/webapp/.ember-cli
+src/main/webapp/.gitignore
+src/main/webapp/.jshintrc
+src/main/webapp/.travis.yml
+src/main/webapp/.watchmanconfig
+src/main/webapp/tests/.jshintrc
+src/main/webapp/blueprints/.jshintrc
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d4edb49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc
new file mode 100644
index 000..959e169
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc
@@ -0,0 +1,4 @@
+{
+  "directory": "bower_components",
+  "analytics": false
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d4edb49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
new file mode 100644
index 000..47c5438
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
@@ -0,0 +1,34 @@
+# EditorConfig helps developers define and maintain consistent
+# coding styles between different editors and IDEs
+# editorconfig.org
+
+root = true
+
+
+[*]
+end_of_line = lf
+charset = utf-8
+trim_trailing_whitespace = true
+insert_final_newline = true
+indent_style = space
+indent_size = 2
+
+[*.js]
+indent_style = space
+indent_size = 2
+
+[*.hbs]
+insert_final_newline = false
+indent_style = space
+indent_size = 2
+
+[*.css]
+indent_style = space
+indent_size = 2
+
+[*.html]
+indent_style = space
+indent_size = 2
+
+[*.{diff,md}]
+trim_trailing_whitespace = false

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d4edb49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli
new file mode 100644
index 000..ee64cfe
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli
@@ -0,0 +1,9 @@
+{
+  /**
+Ember CLI sends analytics information by default. The data is 

[36/50] [abbrv] hadoop git commit: YARN-5321. [YARN-3368] Add resource usage for application by node managers (Wangda Tan via Sunil G) YARN-5320. [YARN-3368] Add resource usage by applications and que

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a570f734/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
index 8ce4ffa..aae4177 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
@@ -16,55 +16,95 @@
  * limitations under the License.
 }}
 
-
-  {{queue-navigator model=model.queues selected=model.selected}}
+
+  {{em-breadcrumbs items=breadcrumbs}}
 
 
-
-  
-{{queue-configuration-table queue=model.selectedQueue}}
-  
+
+  
 
-  
-{{bar-chart data=model.selectedQueue.capacitiesBarChartData 
-title="Queue Capacities" 
-parentId="capacity-bar-chart"
-textWidth=150
-ratio=0.5
-maxHeight=350}}
-  
+
+  
+
+  Application
+
+
+  
+
+  {{#link-to 'yarn-queue' tagName="li"}}
+{{#link-to 'yarn-queue' model.selected}}Information
+{{/link-to}}
+  {{/link-to}}
+  {{#link-to 'yarn-queue-apps' tagName="li"}}
+{{#link-to 'yarn-queue-apps' model.selected}}Applications List
+{{/link-to}}
+  {{/link-to}}
+
+  
+
+  
+
 
-{{#if model.selectedQueue.hasUserUsages}}
-  
-{{donut-chart data=model.selectedQueue.userUsagesDonutChartData 
-title="User Usages" 
-showLabels=true
-parentId="userusage-donut-chart"
-maxHeight=350}}
-  
-{{/if}}
+
+  
+  
 
-  
-{{donut-chart data=model.selectedQueue.numOfApplicationsDonutChartData 
-title="Running Apps" 
-showLabels=true
-parentId="numapplications-donut-chart"
-ratio=0.5
-maxHeight=350}}
-  
-
+
+  
+
+  Queue Information
+
+{{queue-configuration-table queue=model.selectedQueue}}
+  
+
 
-
+
+  
+
+  Queue Capacities
+
+
+  
+  {{bar-chart data=model.selectedQueue.capacitiesBarChartData
+  title=""
+  parentId="capacity-bar-chart"
+  textWidth=170
+  ratio=0.55
+  maxHeight=350}}
+
+  
+
+
+{{#if model.selectedQueue.hasUserUsages}}
+  
+{{donut-chart data=model.selectedQueue.userUsagesDonutChartData
+title="User Usages"
+showLabels=true
+parentId="userusage-donut-chart"
+type="memory"
+ratio=0.6
+maxHeight=350}}
+  
+{{/if}}
+
+
+  
+
+  Running Apps
+
+
+  {{donut-chart 
data=model.selectedQueue.numOfApplicationsDonutChartData
+  showLabels=true
+  parentId="numapplications-donut-chart"
+  ratio=0.6
+  maxHeight=350}}
+
+  
+
+
+  
+
 
-
-  
-{{#if model.apps}}
-  {{app-table table-id="apps-table" arr=model.apps}}
-  {{simple-table table-id="apps-table" bFilter=true 
colTypes="elapsed-time" colTargets="7"}}
-{{else}}
-  Could not find any applications from this 
cluster
-{{/if}}
   
 
-
-{{outlet}}
\ No newline at end of file
+{{outlet}}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a570f734/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
new file mode 100644
index 000..e27341b
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
@@ -0,0 +1,72 @@
+{{!
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on 

[47/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
new file mode 100644
index 000..66bf54a
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -0,0 +1,207 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd">
+  
+hadoop-yarn
+org.apache.hadoop
+3.0.0-SNAPSHOT
+  
+  4.0.0
+  org.apache.hadoop
+  hadoop-yarn-ui
+  3.0.0-SNAPSHOT
+  Apache Hadoop YARN UI
+  ${packaging.type}
+
+  
+jar
+src/main/webapp
+node
+v0.12.2
+2.10.0
+false
+  
+
+  
+
+  
+  
+org.apache.rat
+apache-rat-plugin
+
+  
+src/main/webapp/node_modules/**/*
+src/main/webapp/bower_components/**/*
+src/main/webapp/jsconfig.json
+src/main/webapp/bower.json
+src/main/webapp/package.json
+src/main/webapp/testem.json
+src/main/webapp/public/assets/images/**/*
+src/main/webapp/public/robots.txt
+public/crossdomain.xml
+  
+
+  
+
+  
+ maven-clean-plugin
+ 3.0.0
+ 
+false
+
+   
+  
${basedir}/src/main/webapp/bower_components
+   
+   
+  
${basedir}/src/main/webapp/node_modules
+   
+
+ 
+  
+
+  
+
+  
+
+  yarn-ui
+
+  
+false
+  
+
+  
+war
+  
+
+  
+
+  
+  
+exec-maven-plugin
+org.codehaus.mojo
+
+  
+generate-sources
+npm install
+
+  exec
+
+
+  ${webappDir}
+  npm
+  
+install
+  
+
+  
+  
+generate-sources
+bower install
+
+  exec
+
+
+  ${webappDir}
+  bower
+  
+--allow-root
+install
+  
+
+  
+  
+generate-sources
+bower --allow-root install
+
+  exec
+
+
+  ${webappDir}
+  bower
+  
+--allow-root
+install
+  
+
+  
+  
+ember build
+generate-sources
+
+  exec
+
+
+  ${webappDir}
+  ember
+  
+build
+-prod
+--output-path
+${basedir}/target/dist
+  
+
+  
+  
+ember test
+generate-resources
+
+  exec
+
+
+  ${skipTests}
+  ${webappDir}
+  ember
+  
+test
+  
+
+  
+  
+cleanup tmp
+generate-sources
+
+  exec
+
+
+  ${webappDir}
+  rm
+  
+-rf
+tmp
+  
+
+  
+
+  
+
+  
+  
+org.apache.maven.plugins
+maven-war-plugin
+
+  ${basedir}/src/main/webapp/WEB-INF/web.xml
+  ${basedir}/target/dist
+
+  
+
+
+  
+
+  
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/robots.txt
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/robots.txt 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/robots.txt
deleted file mode 100644
index f591645..000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/robots.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-# http://www.robotstxt.org
-User-agent: *
-Disallow:


[15/50] [abbrv] hadoop git commit: YARN-4464. Lower the default max applications stored in the RM and store. (Daniel Templeton via kasha)

2016-10-12 Thread sunilg
YARN-4464. Lower the default max applications stored in the RM and store. 
(Daniel Templeton via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6378845f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6378845f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6378845f

Branch: refs/heads/YARN-3368
Commit: 6378845f9ef789c3fda862c43bcd498aa3f35068
Parents: 7ba7092
Author: Karthik Kambatla 
Authored: Tue Oct 11 21:41:58 2016 -0700
Committer: Karthik Kambatla 
Committed: Tue Oct 11 21:42:08 2016 -0700

--
 .../hadoop/yarn/conf/YarnConfiguration.java | 20 
 .../src/main/resources/yarn-default.xml |  4 ++--
 .../server/resourcemanager/RMAppManager.java|  2 +-
 3 files changed, 19 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6378845f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 4d43357..3bd0dcc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -719,17 +719,29 @@ public class YarnConfiguration extends Configuration {
   + "leveldb-state-store.compaction-interval-secs";
   public static final long DEFAULT_RM_LEVELDB_COMPACTION_INTERVAL_SECS = 3600;
 
-  /** The maximum number of completed applications RM keeps. */ 
+  /**
+   * The maximum number of completed applications RM keeps. By default equals
+   * to {@link #DEFAULT_RM_MAX_COMPLETED_APPLICATIONS}.
+   */
   public static final String RM_MAX_COMPLETED_APPLICATIONS =
 RM_PREFIX + "max-completed-applications";
-  public static final int DEFAULT_RM_MAX_COMPLETED_APPLICATIONS = 1;
+  public static final int DEFAULT_RM_MAX_COMPLETED_APPLICATIONS = 1000;
 
   /**
-   * The maximum number of completed applications RM state store keeps, by
-   * default equals to DEFAULT_RM_MAX_COMPLETED_APPLICATIONS
+   * The maximum number of completed applications RM state store keeps. By
+   * default equals to value of {@link #RM_MAX_COMPLETED_APPLICATIONS}.
*/
   public static final String RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS =
   RM_PREFIX + "state-store.max-completed-applications";
+  /**
+   * The default value for
+   * {@code yarn.resourcemanager.state-store.max-completed-applications}.
+   * @deprecated This default value is ignored and will be removed in a future
+   * release. The default value of
+   * {@code yarn.resourcemanager.state-store.max-completed-applications} is the
+   * value of {@link #RM_MAX_COMPLETED_APPLICATIONS}.
+   */
+  @Deprecated
   public static final int DEFAULT_RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS =
   DEFAULT_RM_MAX_COMPLETED_APPLICATIONS;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6378845f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 524afec..f37c689 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -417,7 +417,7 @@
 the applications remembered in RM memory.
 Any values larger than ${yarn.resourcemanager.max-completed-applications} 
will
 be reset to ${yarn.resourcemanager.max-completed-applications}.
-Note that this value impacts the RM recovery performance.Typically,
+Note that this value impacts the RM recovery performance. Typically,
 a smaller value indicates better performance on RM recovery.
 
 yarn.resourcemanager.state-store.max-completed-applications
@@ -687,7 +687,7 @@
   
 The maximum number of completed applications RM keeps. 

 yarn.resourcemanager.max-completed-applications
-1
+1000
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6378845f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
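
The new Javadoc and the yarn-default.xml text above pin down the defaulting:
the state-store limit now defaults to, and is effectively capped by, the
in-memory limit. A small sketch of reading the two values under that rule;
the clamping line paraphrases the documented behavior rather than quoting
RMAppManager:

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class CompletedAppLimits {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    int maxInMemory = conf.getInt(
        YarnConfiguration.RM_MAX_COMPLETED_APPLICATIONS,
        YarnConfiguration.DEFAULT_RM_MAX_COMPLETED_APPLICATIONS);  // now 1000
    // The state-store limit defaults to the in-memory limit, and any larger
    // value is reset down to it.
    int maxInStateStore = Math.min(
        conf.getInt(YarnConfiguration.RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS,
            maxInMemory),
        maxInMemory);
    System.out.println(maxInMemory + " / " + maxInStateStore);
  }
}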

[49/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-12 Thread sunilg
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/simple-table.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/simple-table.js 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/simple-table.js
deleted file mode 100644
index 447533e..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/simple-table.js
+++ /dev/null
@@ -1,58 +0,0 @@
-import Ember from 'ember';
-
-export default Ember.Component.extend({
-  didInsertElement: function() {
-var paging = this.get("paging") ? true : this.get("paging");
-var ordering = this.get("ordering") ? true : this.get("ordering");
-var info = this.get("info") ? true : this.get("info");
-var bFilter = this.get("bFilter") ? true : this.get("bFilter");
-
-// Defines sorter for the columns if not default.
-// Can also specify a custom sorter.
-var i;
-var colDefs = [];
-if (this.get("colTypes")) {
-  var typesArr = this.get("colTypes").split(' ');
-  var targetsArr = this.get("colTargets").split(' ');
-  for (i = 0; i < typesArr.length; i++) {
-console.log(typesArr[i] + " " + targetsArr[i]);
-colDefs.push({
-  type: typesArr[i],
-  targets: parseInt(targetsArr[i])
-});
-  }
-}
-// Defines initial column and sort order.
-var orderArr = [];
-if (this.get("colsOrder")) {
-  var cols = this.get("colsOrder").split(' ');
-  for (i = 0; i < cols.length; i++) {
-var col = cols[i].split(',');
-if (col.length != 2) {
-  continue;
-}
-var order = col[1].trim();
-if (order != 'asc' && order != 'desc') {
-  continue;
-}
-var colOrder = [];
-colOrder.push(parseInt(col[0]));
-colOrder.push(order);
-orderArr.push(colOrder);
-  }
-}
-if (orderArr.length == 0) {
-  var defaultOrder = [0, 'asc'];
-  orderArr.push(defaultOrder);
-}
-console.log(orderArr[0]);
-Ember.$('#' + this.get('table-id')).DataTable({
-  "paging":   paging,
-  "ordering": ordering, 
-  "info": info,
-  "bFilter": bFilter,
-  "order": orderArr,
-  "columnDefs": colDefs
-});
-  }
-});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9b93975/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/timeline-view.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/timeline-view.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/timeline-view.js
deleted file mode 100644
index fe402bb..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/components/timeline-view.js
+++ /dev/null
@@ -1,250 +0,0 @@
-import Ember from 'ember';
-import Converter from 'yarn-ui/utils/converter';
-
-export default Ember.Component.extend({
-  canvas: {
-svg: undefined,
-h: 0,
-w: 0,
-tooltip: undefined
-  },
-
-  clusterMetrics: undefined,
-  modelArr: [],
-  colors: d3.scale.category10().range(),
-  _selected: undefined,
-
-  selected: function() {
-return this._selected;
-  }.property(),
-
-  tableComponentName: function() {
-return "app-attempt-table";
-  }.property(),
-
-  setSelected: function(d) {
-if (this._selected == d) {
-  return;
-}
-
-// restore color
-if (this._selected) {
-  var dom = d3.select("#timeline-bar-" + this._selected.get("id"));
-  dom.attr("fill", this.colors[0]);
-}
-
-this._selected = d;
-this.set("selected", d);
-dom = d3.select("#timeline-bar-" + d.get("id"));
-dom.attr("fill", this.colors[1]);
-  },
-
-  getPerItemHeight: function() {
-var arrSize = this.modelArr.length;
-
-if (arrSize < 20) {
-  return 30;
-} else if (arrSize < 100) {
-  return 10;
-} else {
-  return 2;
-}
-  },
-
-  getPerItemGap: function() {
-var arrSize = this.modelArr.length;
-
-if (arrSize < 20) {
-  return 5;
-} else if (arrSize < 100) {
-  return 1;
-} else {
-  return 1;
-}
-  },
-
-  getCanvasHeight: function() {
-return (this.getPerItemHeight() + this.getPerItemGap()) * 
this.modelArr.length + 200;
-  },
-
-  draw: function(start, end) {
-// get w/h of the svg
-var bbox = d3.select("#" + this.get("parent-id"))
-  .node()
-  .getBoundingClientRect();
-this.canvas.w = bbox.width;
-this.canvas.h = this.getCanvasHeight();
-
-this.canvas.svg = d3.select("#" + this.get("parent-id"))
-  .append("svg")
-  .attr("width", this.canvas.w)
-  .attr("height", this.canvas.h)
-  .attr("id", this.get("my-id"));
-this.renderTimeline(start, end);
-  },
-
-  renderTimeline: function(start, end) {
-var border = 30;
-var singleBarHeight = 
