[hadoop] Git Push Summary

2016-08-17 Thread vinodkv
Repository: hadoop
Updated Tags:  refs/tags/release-2.7.3-RC2 [created] e05a76e64




hadoop git commit: Set the release date for 2.7.3-RC2

2016-08-17 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7.3 51187688b -> baa91f7c6


Set the release date for 2.7.3-RC2


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/baa91f7c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/baa91f7c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/baa91f7c

Branch: refs/heads/branch-2.7.3
Commit: baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Parents: 5118768
Author: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Authored: Wed Aug 17 17:55:21 2016 -0700
Committer: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Committed: Wed Aug 17 17:55:21 2016 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 +-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 2 +-
 hadoop-mapreduce-project/CHANGES.txt | 2 +-
 hadoop-yarn-project/CHANGES.txt | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/baa91f7c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 3460ccae..6d97302 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1,6 +1,6 @@
 Hadoop Change Log
 
-Release 2.7.3 - 2016-08-19
+Release 2.7.3 - 2016-08-25
 
   INCOMPATIBLE CHANGES
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/baa91f7c/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index a243834..db64d3c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1,6 +1,6 @@
 Hadoop HDFS Change Log
 
-Release 2.7.3 - 2016-08-19
+Release 2.7.3 - 2016-08-25
 
   INCOMPATIBLE CHANGES
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/baa91f7c/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 26b71a5..389d36f 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -1,6 +1,6 @@
 Hadoop MapReduce Change Log
 
-Release 2.7.3 - 2016-08-19
+Release 2.7.3 - 2016-08-25
 
   INCOMPATIBLE CHANGES
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/baa91f7c/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 9f8b014..5f88bc1 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1,6 +1,6 @@
 Hadoop YARN Change Log
 
-Release 2.7.3 - 2016-08-19
+Release 2.7.3 - 2016-08-25
 
   INCOMPATIBLE CHANGES
 





hadoop git commit: YARN-4702. FairScheduler: Allow setting maxResources for ad hoc queues. (Daniel Templeton via kasha)

2016-08-17 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a20b943cf -> 07d5ab16d


YARN-4702. FairScheduler: Allow setting maxResources for ad hoc queues. (Daniel 
Templeton via kasha)

(cherry picked from commit 20f0eb871c57cc4c5a6d19aae0e3745b6175509b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07d5ab16
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07d5ab16
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07d5ab16

Branch: refs/heads/branch-2
Commit: 07d5ab16df490d65c2595e58fecfd7e51464ef88
Parents: a20b943
Author: Karthik Kambatla 
Authored: Wed Aug 17 17:40:20 2016 -0700
Committer: Karthik Kambatla 
Committed: Wed Aug 17 17:54:28 2016 -0700

--
 .../scheduler/fair/AllocationConfiguration.java |  85 ++---
 .../fair/AllocationFileLoaderService.java   |  92 +
 .../scheduler/fair/FairScheduler.java   |  31 +++
 .../scheduler/fair/QueueManager.java| 162 
 .../fair/TestAllocationFileLoaderService.java   |  43 -
 .../scheduler/fair/TestFairScheduler.java   | 187 ++-
 .../scheduler/fair/TestQueueManager.java| 166 +++-
 .../src/site/markdown/FairScheduler.md  |   3 +
 8 files changed, 657 insertions(+), 112 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07d5ab16/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
index f984fef..90d7d98 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
@@ -46,6 +46,8 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   // Maximum amount of resources per queue
   @VisibleForTesting
   final Map<String, Resource> maxQueueResources;
+  // Maximum amount of resources for each queue's ad hoc children
+  private final Map<String, Resource> maxChildQueueResources;
   // Sharing weights for each queue
   private final Map<String, ResourceWeights> queueWeights;
   
@@ -107,6 +109,7 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
 
   public AllocationConfiguration(Map<String, Resource> minQueueResources,
   Map<String, Resource> maxQueueResources,
+  Map<String, Resource> maxChildQueueResources,
   Map<String, Integer> queueMaxApps, Map<String, Integer> userMaxApps,
   Map<String, ResourceWeights> queueWeights,
   Map<String, Float> queueMaxAMShares, int userMaxAppsDefault,
@@ -126,6 +129,7 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   Set<String> nonPreemptableQueues) {
 this.minQueueResources = minQueueResources;
 this.maxQueueResources = maxQueueResources;
+this.maxChildQueueResources = maxChildQueueResources;
 this.queueMaxApps = queueMaxApps;
 this.userMaxApps = userMaxApps;
 this.queueMaxAMShares = queueMaxAMShares;
@@ -149,31 +153,32 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   }
   
   public AllocationConfiguration(Configuration conf) {
-minQueueResources = new HashMap<String, Resource>();
-maxQueueResources = new HashMap<String, Resource>();
-queueWeights = new HashMap<String, ResourceWeights>();
-queueMaxApps = new HashMap<String, Integer>();
-userMaxApps = new HashMap<String, Integer>();
-queueMaxAMShares = new HashMap<String, Float>();
+minQueueResources = new HashMap<>();
+maxChildQueueResources = new HashMap<>();
+maxQueueResources = new HashMap<>();
+queueWeights = new HashMap<>();
+queueMaxApps = new HashMap<>();
+userMaxApps = new HashMap<>();
+queueMaxAMShares = new HashMap<>();
 userMaxAppsDefault = Integer.MAX_VALUE;
 queueMaxAppsDefault = Integer.MAX_VALUE;
 queueMaxResourcesDefault = Resources.unbounded();
 queueMaxAMShareDefault = 0.5f;
-queueAcls = new 
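
The FairScheduler.md change in the diffstat above documents the new setting: a
parent queue can now cap the resources of its dynamically created (ad hoc)
children. A minimal allocation-file sketch, assuming the element is named
maxChildResources as in that documentation change (the name and the values are
illustrative, not copied from the patch):

<?xml version="1.0"?>
<allocations>
  <queue name="adhoc" type="parent">
    <!-- assumed element from this patch: caps every dynamically
         created child of "adhoc" without listing the children -->
    <maxChildResources>4096 mb, 4 vcores</maxChildResources>
  </queue>
</allocations>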

[2/2] hadoop git commit: Preparing for release 2.7.3: Updating release notes.

2016-08-17 Thread vinodkv
Preparing for release 2.7.3: Updating release notes.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51187688
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51187688
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51187688

Branch: refs/heads/branch-2.7.3
Commit: 51187688bf92731444d5e05e038c3706adec8c16
Parents: 2f1a438
Author: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Authored: Wed Aug 17 14:45:05 2016 -0700
Committer: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Committed: Wed Aug 17 17:50:29 2016 -0700

--
 .../src/main/docs/releasenotes.html | 20 ++--
 1 file changed, 14 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/51187688/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html 
b/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
index 052da7c..8251cd9 100644
--- a/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
+++ b/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
@@ -220,6 +220,10 @@ These release notes include new developer and user-facing 
incompatibilities, fea
  Minor test reported by zhihai xu and fixed by zhihai xu (test)
  TestResourceLocalizationService.testPublicResourceInitializesLocalDir 
fails Intermittently due to IOException from cleanup
  
+ <a href="https://issues.apache.org/jira/browse/YARN-3426">YARN-3426</a>.
+ Blocker sub-task reported by Li Lu and fixed by Li Lu 
+ Add jdiff support to YARN
+ 
  <a href="https://issues.apache.org/jira/browse/YARN-3404">YARN-3404</a>.
  Minor improvement reported by Ryu Kobayashi and fixed by Ryu Kobayashi 

  View the queue name to YARN Application page
@@ -556,7 +560,7 @@ In this improvement, we add a new API 
isCommitJobRepeatable() to OutputCommitter
  seen_txid in the shared edits directory is modified during 
bootstrapping
  
  <a href="https://issues.apache.org/jira/browse/HDFS-9530">HDFS-9530</a>.
- Critical bug reported by Fei Hui and fixed by Brahma Reddy Battula 
+ Critical bug reported by Fei Hui and fixed by Brahma Reddy Battula 
(datanode)
  ReservedSpace is not cleared for abandoned Blocks
  
  <a href="https://issues.apache.org/jira/browse/HDFS-9516">HDFS-9516</a>.
@@ -583,17 +587,13 @@ In this improvement, we add a new API 
isCommitJobRepeatable() to OutputCommitter
  Major bug reported by Stanislav Antic and fixed by Yongjun Zhang 
(namenode)
  FSImage may get corrupted after deleting snapshot
  
- <a href="https://issues.apache.org/jira/browse/HDFS-9395">HDFS-9395</a>.
- Major bug reported by Kihwal Lee and fixed by Kuhu Shukla 
- Make HDFS audit logging consistant
- 
  <a href="https://issues.apache.org/jira/browse/HDFS-9383">HDFS-9383</a>.
  Major bug reported by Kihwal Lee and fixed by Tsz Wo Nicholas Sze 
  TestByteArrayManager#testByteArrayManager fails
  
  <a href="https://issues.apache.org/jira/browse/HDFS-9365">HDFS-9365</a>.
  Major bug reported by Tsz Wo Nicholas Sze and fixed by Tsz Wo Nicholas 
Sze (balancer  mover)
- Balaner does not work with the HDFS-6376 HA setup
+ Balancer does not work with the HDFS-6376 HA setup
  
  <a href="https://issues.apache.org/jira/browse/HDFS-9347">HDFS-9347</a>.
  Major bug reported by Wei-Chiu Chuang and fixed by Wei-Chiu Chuang 
(test)
@@ -671,6 +671,10 @@ In this improvement, we add a new API 
isCommitJobRepeatable() to OutputCommitter
  Major bug reported by Aaron T. Myers and fixed by Yiqun Lin (test)
  TestHFlush failing intermittently
  
+ <a href="https://issues.apache.org/jira/browse/HADOOP-13434">HADOOP-13434</a>.
+ Major bug reported by Owen O'Malley and fixed by Owen O'Malley 
+ Add quoting to Shell class
+ 
  <a href="https://issues.apache.org/jira/browse/HADOOP-13350">HADOOP-13350</a>.
  Blocker bug reported by Xiao Chen and fixed by Xiao Chen (build)
  Additional fix to LICENSE and NOTICE
@@ -875,6 +879,10 @@ In this improvement, we add a new API 
isCommitJobRepeatable() to OutputCommitter
  Critical bug reported by Sangjin Lee and fixed by Sangjin Lee (fs)
  long running apps may have a huge number of StatisticsData instances 
under FileSystem
  
+ <a href="https://issues.apache.org/jira/browse/HADOOP-11814">HADOOP-11814</a>.
+ Minor task reported by Li Lu and fixed by Li Lu 
+ Reformat hadoop-annotations, o.a.h.classification.tools
+ 
  <a href="https://issues.apache.org/jira/browse/HADOOP-11252">HADOOP-11252</a>.
  Critical bug reported by Wilfred Spiegelenburg and fixed by Masatake 
Iwasaki (ipc)
  RPC client does not time out by default



[1/2] hadoop git commit: HADOOP-13434. Add bash quoting to Shell class. (Owen O'Malley) Added the missing CHANGES.txt entry.

2016-08-17 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7.3 22fd150e9 -> 51187688b


HADOOP-13434. Add bash quoting to Shell class. (Owen O'Malley)
Added the missing CHANGES.txt entry.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2f1a4387
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2f1a4387
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2f1a4387

Branch: refs/heads/branch-2.7.3
Commit: 2f1a4387fdc7c48dfd117d86555bccac118d9e24
Parents: 22fd150
Author: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Authored: Wed Aug 17 14:38:59 2016 -0700
Committer: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Committed: Wed Aug 17 14:38:59 2016 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f1a4387/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 3777e32..3460ccae 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -26,6 +26,8 @@ Release 2.7.3 - 2016-08-19
 HADOOP-13298. Fix the leftover L files in
 hadoop-build-tools/src/main/resources/META-INF/. (ozawa)
 
+HADOOP-13434. Added quoting to Shell class. (Owen O'Malley via Arpit 
Agarwal)
+
   OPTIMIZATIONS
 
 HADOOP-12810. FileSystem#listLocatedStatus causes unnecessary RPC calls
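
For context on what "quoting to Shell class" means: the standard way to make
an arbitrary string safe for bash is to wrap it in single quotes and rewrite
each embedded single quote as '\''. A self-contained sketch of that technique
(illustrative only; the actual method name and signature inside
org.apache.hadoop.util.Shell may differ):

public final class BashQuoteDemo {
  /** Wrap arg in single quotes; bash then treats the contents literally. */
  static String bashQuote(String arg) {
    // An embedded ' must close the quote, emit an escaped quote, and
    // reopen:  it's  ->  'it'\''s'
    return "'" + arg.replace("'", "'\\''") + "'";
  }

  public static void main(String[] args) {
    System.out.println(bashQuote("rm -rf /; echo $(whoami)"));
    // prints: 'rm -rf /; echo $(whoami)'  -- inert when handed to bash
  }
}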





hadoop git commit: YARN-4702. FairScheduler: Allow setting maxResources for ad hoc queues. (Daniel Templeton via kasha)

2016-08-17 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk ca13e7971 -> 20f0eb871


YARN-4702. FairScheduler: Allow setting maxResources for ad hoc queues. (Daniel 
Templeton via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/20f0eb87
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/20f0eb87
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/20f0eb87

Branch: refs/heads/trunk
Commit: 20f0eb871c57cc4c5a6d19aae0e3745b6175509b
Parents: ca13e79
Author: Karthik Kambatla 
Authored: Wed Aug 17 17:40:20 2016 -0700
Committer: Karthik Kambatla 
Committed: Wed Aug 17 17:40:20 2016 -0700

--
 .../scheduler/fair/AllocationConfiguration.java |  87 ++---
 .../fair/AllocationFileLoaderService.java   |  92 +
 .../scheduler/fair/FairScheduler.java   |  31 +++
 .../scheduler/fair/QueueManager.java| 162 
 .../fair/TestAllocationFileLoaderService.java   |  43 -
 .../scheduler/fair/TestFairScheduler.java   | 187 ++-
 .../scheduler/fair/TestQueueManager.java| 166 +++-
 .../src/site/markdown/FairScheduler.md  |   3 +
 8 files changed, 658 insertions(+), 113 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/20f0eb87/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
index f984fef..5cf110f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
@@ -46,6 +46,8 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   // Maximum amount of resources per queue
   @VisibleForTesting
   final Map<String, Resource> maxQueueResources;
+  // Maximum amount of resources for each queue's ad hoc children
+  private final Map<String, Resource> maxChildQueueResources;
   // Sharing weights for each queue
   private final Map<String, ResourceWeights> queueWeights;
   
@@ -107,6 +109,7 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
 
   public AllocationConfiguration(Map<String, Resource> minQueueResources,
   Map<String, Resource> maxQueueResources,
+  Map<String, Resource> maxChildQueueResources,
   Map<String, Integer> queueMaxApps, Map<String, Integer> userMaxApps,
   Map<String, ResourceWeights> queueWeights,
   Map<String, Float> queueMaxAMShares, int userMaxAppsDefault,
@@ -126,6 +129,7 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   Set<String> nonPreemptableQueues) {
 this.minQueueResources = minQueueResources;
 this.maxQueueResources = maxQueueResources;
+this.maxChildQueueResources = maxChildQueueResources;
 this.queueMaxApps = queueMaxApps;
 this.userMaxApps = userMaxApps;
 this.queueMaxAMShares = queueMaxAMShares;
@@ -149,31 +153,32 @@ public class AllocationConfiguration extends 
ReservationSchedulerConfiguration {
   }
   
   public AllocationConfiguration(Configuration conf) {
-minQueueResources = new HashMap<String, Resource>();
-maxQueueResources = new HashMap<String, Resource>();
-queueWeights = new HashMap<String, ResourceWeights>();
-queueMaxApps = new HashMap<String, Integer>();
-userMaxApps = new HashMap<String, Integer>();
-queueMaxAMShares = new HashMap<String, Float>();
+minQueueResources = new HashMap<>();
+maxChildQueueResources = new HashMap<>();
+maxQueueResources = new HashMap<>();
+queueWeights = new HashMap<>();
+queueMaxApps = new HashMap<>();
+userMaxApps = new HashMap<>();
+queueMaxAMShares = new HashMap<>();
 userMaxAppsDefault = Integer.MAX_VALUE;
 queueMaxAppsDefault = Integer.MAX_VALUE;
 queueMaxResourcesDefault = Resources.unbounded();
 queueMaxAMShareDefault = 0.5f;
-queueAcls = new HashMap<String, Map<QueueACL, AccessControlList>>();
-resAcls = new HashMap

svn commit: r1756696 - in /hadoop/common/site/main/publish: bylaws.pdf committer_criteria.pdf index.pdf issue_tracking.pdf linkmap.pdf mailing_lists.pdf privacy_policy.pdf releases.pdf version_control

2016-08-17 Thread liuml07
Author: liuml07
Date: Thu Aug 18 00:12:17 2016
New Revision: 1756696

URL: http://svn.apache.org/viewvc?rev=1756696&view=rev
Log:
Addendum commit to add Mingliang Liu (liuml07) to committer list

Modified:
hadoop/common/site/main/publish/bylaws.pdf
hadoop/common/site/main/publish/committer_criteria.pdf
hadoop/common/site/main/publish/index.pdf
hadoop/common/site/main/publish/issue_tracking.pdf
hadoop/common/site/main/publish/linkmap.pdf
hadoop/common/site/main/publish/mailing_lists.pdf
hadoop/common/site/main/publish/privacy_policy.pdf
hadoop/common/site/main/publish/releases.pdf
hadoop/common/site/main/publish/version_control.pdf
hadoop/common/site/main/publish/versioning.pdf
hadoop/common/site/main/publish/who.html
hadoop/common/site/main/publish/who.pdf

Modified: hadoop/common/site/main/publish/bylaws.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/bylaws.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/committer_criteria.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/committer_criteria.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/index.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/index.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/issue_tracking.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/issue_tracking.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/linkmap.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/linkmap.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/mailing_lists.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/mailing_lists.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/privacy_policy.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/privacy_policy.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/releases.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/releases.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/version_control.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/version_control.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/versioning.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/versioning.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/who.html
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/who.html?rev=1756696&r1=1756695&r2=1756696&view=diff
==
--- hadoop/common/site/main/publish/who.html (original)
+++ hadoop/common/site/main/publish/who.html Thu Aug 18 00:12:17 2016
@@ -649,6 +649,7 @@ document.write("Last Published: " + docu
  -8
 
 
+
 
 
   
@@ -1643,6 +1644,17 @@ document.write("Last Published: " + docu

 
 
+  
+
+
+liuml07
+<a href="http://people.apache.org/~liuml07">Mingliang Liu</a>
+Hortonworks
+HDFS
+-8
+  
+
+

 
  
@@ -2275,7 +2287,7 @@ document.write("Last Published: " + docu
 
 

-
+
 Emeritus Hadoop Committers
 
 Hadoop committers who are no longer active include:

Modified: hadoop/common/site/main/publish/who.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/who.pdf?rev=1756696&r1=1756695&r2=1756696&view=diff
==
Binary files - no diff available.




hadoop git commit: HDFS-10773. BlockSender should not synchronize on the dataset object. (Contributed by Chen Liang)

2016-08-17 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2aa5e2c40 -> ca13e7971


HDFS-10773. BlockSender should not synchronize on the dataset object. 
(Contributed by Chen Liang)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ca13e797
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ca13e797
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ca13e797

Branch: refs/heads/trunk
Commit: ca13e7971d0db0705d5e36bcf03ead3cab5ab0d7
Parents: 2aa5e2c
Author: Arpit Agarwal 
Authored: Wed Aug 17 16:29:08 2016 -0700
Committer: Arpit Agarwal 
Committed: Wed Aug 17 16:29:08 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca13e797/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
index 398935d..7c3d778 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
@@ -46,6 +46,7 @@ import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.ReadaheadPool.ReadaheadRequest;
 import org.apache.hadoop.io.nativeio.NativeIO;
 import org.apache.hadoop.net.SocketOutputStream;
+import org.apache.hadoop.util.AutoCloseableLock;
 import org.apache.hadoop.util.DataChecksum;
 import org.apache.htrace.core.TraceScope;
 
@@ -239,7 +240,7 @@ class BlockSender implements java.io.Closeable {
   
   final Replica replica;
   final long replicaVisibleLength;
-  synchronized(datanode.data) { 
+  try(AutoCloseableLock lock = datanode.data.acquireDatasetLock()) {
 replica = getReplica(block, datanode);
 replicaVisibleLength = replica.getVisibleLength();
   }
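
The hunk above swaps a synchronized block on the dataset object for a
try-with-resources lock. A minimal sketch of that idiom, with a hand-rolled
wrapper standing in for org.apache.hadoop.util.AutoCloseableLock (the
stand-in shows the shape of the pattern, not the real class's internals):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

final class CloseableLock implements AutoCloseable {
  private final Lock lock = new ReentrantLock();

  CloseableLock acquire() {
    lock.lock();
    return this;  // returning this is what enables try-with-resources
  }

  @Override
  public void close() {
    lock.unlock();  // runs on every exit path, including exceptions
  }
}

class DatasetReaderSketch {
  private final CloseableLock datasetLock = new CloseableLock();

  void readReplicaInfo() {
    try (CloseableLock ignored = datasetLock.acquire()) {
      // critical section: the lock is released automatically
    }
  }
}

A dedicated lock object also leaves room to add fairness or instrumentation
later without touching every caller, which a bare synchronized block does not.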





hadoop git commit: HDFS-10549. Correctly revoke file leases when closing files. Contributed by Yiqun Lin.

2016-08-17 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 d65024edd -> b89d79ca1


HDFS-10549. Correctly revoke file leases when closing files. Contributed by 
Yiqun Lin.

(cherry picked from commit 85aacaadb5a3f8c78b191867c0bde09b3c4b3c3c)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java

(cherry picked from commit a20b943cf951cf38eb4950177c826bbcf424aade)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b89d79ca
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b89d79ca
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b89d79ca

Branch: refs/heads/branch-2.8
Commit: b89d79ca1daf390ceb48ed9ce35d801539ae3e77
Parents: d65024e
Author: Xiao Chen 
Authored: Wed Aug 17 15:22:42 2016 -0700
Committer: Xiao Chen 
Committed: Wed Aug 17 16:00:33 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  2 +-
 .../org/apache/hadoop/hdfs/DFSOutputStream.java | 22 -
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 34 +++-
 3 files changed, 55 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b89d79ca/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 24fc364..4abf234 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -471,7 +471,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   }
 
   /** Stop renewal of lease for the file. */
-  void endFileLease(final long inodeId) throws IOException {
+  void endFileLease(final long inodeId) {
 getLeaseRenewer().closeFile(inodeId, this);
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b89d79ca/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 472c41f..8439dc8 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -55,6 +55,7 @@ import 
org.apache.hadoop.hdfs.server.namenode.RetryStartFileException;
 import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
 import org.apache.hadoop.hdfs.util.ByteArrayManager;
 import org.apache.hadoop.io.EnumSetWritable;
+import org.apache.hadoop.io.MultipleIOException;
 import org.apache.hadoop.ipc.RemoteException;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.token.Token;
@@ -709,6 +710,7 @@ public class DFSOutputStream extends FSOutputSummer
* resources associated with this stream.
*/
   void abort() throws IOException {
+final MultipleIOException.Builder b = new MultipleIOException.Builder();
 synchronized (this) {
   if (isClosed()) {
 return;
@@ -717,9 +719,19 @@ public class DFSOutputStream extends FSOutputSummer
   new IOException("Lease timeout of "
   + (dfsClient.getConf().getHdfsTimeout() / 1000)
   + " seconds expired."));
-  closeThreads(true);
+
+  try {
+closeThreads(true);
+  } catch (IOException e) {
+b.add(e);
+  }
 }
+
 dfsClient.endFileLease(fileId);
+final IOException ioe = b.build();
+if (ioe != null) {
+  throw ioe;
+}
   }
 
   boolean isClosed() {
@@ -752,13 +764,21 @@ public class DFSOutputStream extends FSOutputSummer
*/
   @Override
   public void close() throws IOException {
+final MultipleIOException.Builder b = new MultipleIOException.Builder();
 synchronized (this) {
   try (TraceScope ignored = dfsClient.newPathTraceScope(
   "DFSOutputStream#close", src)) {
 closeImpl();
+  } catch (IOException e) {
+b.add(e);
   }
 }
+
 dfsClient.endFileLease(fileId);
+final IOException ioe = b.build();
+if (ioe != null) {
+  throw ioe;
+}
   }
 
   protected synchronized void closeImpl() throws IOException {
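
The shape of this fix: closeImpl() and endFileLease() must both run even when
the first throws, so exceptions are collected and rethrown once at the end.
MultipleIOException.Builder is the real class used in the diff above; the
hand-rolled accumulator below only illustrates the pattern:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

final class IOExceptionAccumulator {
  private final List<IOException> failures = new ArrayList<>();

  void add(IOException e) {
    failures.add(e);
  }

  /** @return null when nothing failed, otherwise one combined exception. */
  IOException build() {
    if (failures.isEmpty()) {
      return null;
    }
    IOException combined =
        new IOException(failures.size() + " failure(s) while closing");
    for (IOException e : failures) {
      combined.addSuppressed(e);  // keep every original stack trace
    }
    return combined;
  }
}

// Usage mirroring the patched close():
//   IOExceptionAccumulator b = new IOExceptionAccumulator();
//   try { closeImpl(); } catch (IOException e) { b.add(e); }
//   dfsClient.endFileLease(fileId);   // now always runs
//   IOException ioe = b.build();
//   if (ioe != null) { throw ioe; }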


hadoop git commit: HDFS-10549. Correctly revoke file leases when closing files. Contributed by Yiqun Lin.

2016-08-17 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ac8c3ae78 -> a20b943cf


HDFS-10549. Correctly revoke file leases when closing files. Contributed by 
Yiqun Lin.

(cherry picked from commit 85aacaadb5a3f8c78b191867c0bde09b3c4b3c3c)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a20b943c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a20b943c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a20b943c

Branch: refs/heads/branch-2
Commit: a20b943cf951cf38eb4950177c826bbcf424aade
Parents: ac8c3ae
Author: Xiao Chen 
Authored: Wed Aug 17 15:22:42 2016 -0700
Committer: Xiao Chen 
Committed: Wed Aug 17 15:50:13 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  2 +-
 .../org/apache/hadoop/hdfs/DFSOutputStream.java | 22 -
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 34 +++-
 3 files changed, 55 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a20b943c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 6772c39..74276e4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -471,7 +471,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   }
 
   /** Stop renewal of lease for the file. */
-  void endFileLease(final long inodeId) throws IOException {
+  void endFileLease(final long inodeId) {
 getLeaseRenewer().closeFile(inodeId, this);
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a20b943c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 472c41f..8439dc8 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -55,6 +55,7 @@ import 
org.apache.hadoop.hdfs.server.namenode.RetryStartFileException;
 import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
 import org.apache.hadoop.hdfs.util.ByteArrayManager;
 import org.apache.hadoop.io.EnumSetWritable;
+import org.apache.hadoop.io.MultipleIOException;
 import org.apache.hadoop.ipc.RemoteException;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.token.Token;
@@ -709,6 +710,7 @@ public class DFSOutputStream extends FSOutputSummer
* resources associated with this stream.
*/
   void abort() throws IOException {
+final MultipleIOException.Builder b = new MultipleIOException.Builder();
 synchronized (this) {
   if (isClosed()) {
 return;
@@ -717,9 +719,19 @@ public class DFSOutputStream extends FSOutputSummer
   new IOException("Lease timeout of "
   + (dfsClient.getConf().getHdfsTimeout() / 1000)
   + " seconds expired."));
-  closeThreads(true);
+
+  try {
+closeThreads(true);
+  } catch (IOException e) {
+b.add(e);
+  }
 }
+
 dfsClient.endFileLease(fileId);
+final IOException ioe = b.build();
+if (ioe != null) {
+  throw ioe;
+}
   }
 
   boolean isClosed() {
@@ -752,13 +764,21 @@ public class DFSOutputStream extends FSOutputSummer
*/
   @Override
   public void close() throws IOException {
+final MultipleIOException.Builder b = new MultipleIOException.Builder();
 synchronized (this) {
   try (TraceScope ignored = dfsClient.newPathTraceScope(
   "DFSOutputStream#close", src)) {
 closeImpl();
+  } catch (IOException e) {
+b.add(e);
   }
 }
+
 dfsClient.endFileLease(fileId);
+final IOException ioe = b.build();
+if (ioe != null) {
+  throw ioe;
+}
   }
 
   protected synchronized void closeImpl() throws IOException {


hadoop git commit: HDFS-10549. Correctly revoke file leases when closing files. Contributed by Yiqun Lin.

2016-08-17 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/trunk c57523163 -> 2aa5e2c40


HDFS-10549. Correctly revoke file leases when closing files. Contributed by 
Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2aa5e2c4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2aa5e2c4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2aa5e2c4

Branch: refs/heads/trunk
Commit: 2aa5e2c40364cf1e90e6af7851801f5eda759002
Parents: c575231
Author: Xiao Chen 
Authored: Wed Aug 17 15:22:42 2016 -0700
Committer: Xiao Chen 
Committed: Wed Aug 17 15:52:38 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  2 +-
 .../org/apache/hadoop/hdfs/DFSOutputStream.java | 22 +++-
 .../hadoop/hdfs/DFSStripedOutputStream.java | 13 +++-
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 35 +++-
 4 files changed, 68 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2aa5e2c4/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 833b1ee..ad86895 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -463,7 +463,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   }
 
   /** Stop renewal of lease for the file. */
-  void endFileLease(final long inodeId) throws IOException {
+  void endFileLease(final long inodeId) {
 getLeaseRenewer().closeFile(inodeId, this);
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2aa5e2c4/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 93aee0e..a73ab95 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -57,6 +57,7 @@ import 
org.apache.hadoop.hdfs.server.namenode.RetryStartFileException;
 import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
 import org.apache.hadoop.hdfs.util.ByteArrayManager;
 import org.apache.hadoop.io.EnumSetWritable;
+import org.apache.hadoop.io.MultipleIOException;
 import org.apache.hadoop.ipc.RemoteException;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.token.Token;
@@ -732,6 +733,7 @@ public class DFSOutputStream extends FSOutputSummer
* resources associated with this stream.
*/
   void abort() throws IOException {
+final MultipleIOException.Builder b = new MultipleIOException.Builder();
 synchronized (this) {
   if (isClosed()) {
 return;
@@ -740,9 +742,19 @@ public class DFSOutputStream extends FSOutputSummer
   new IOException("Lease timeout of "
   + (dfsClient.getConf().getHdfsTimeout() / 1000)
   + " seconds expired."));
-  closeThreads(true);
+
+  try {
+closeThreads(true);
+  } catch (IOException e) {
+b.add(e);
+  }
 }
+
 dfsClient.endFileLease(fileId);
+final IOException ioe = b.build();
+if (ioe != null) {
+  throw ioe;
+}
   }
 
   boolean isClosed() {
@@ -775,13 +787,21 @@ public class DFSOutputStream extends FSOutputSummer
*/
   @Override
   public void close() throws IOException {
+final MultipleIOException.Builder b = new MultipleIOException.Builder();
 synchronized (this) {
   try (TraceScope ignored = dfsClient.newPathTraceScope(
   "DFSOutputStream#close", src)) {
 closeImpl();
+  } catch (IOException e) {
+b.add(e);
   }
 }
+
 dfsClient.endFileLease(fileId);
+final IOException ioe = b.build();
+if (ioe != null) {
+  throw ioe;
+}
   }
 
   protected synchronized void closeImpl() throws IOException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2aa5e2c4/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 

hadoop git commit: HADOOP-11786. Fix Javadoc typos in org.apache.hadoop.fs.FileSystem. Contributed by Andras Bokor.

2016-08-17 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 986162f97 -> ac8c3ae78


HADOOP-11786. Fix Javadoc typos in org.apache.hadoop.fs.FileSystem. Contributed 
by Andras Bokor.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ac8c3ae7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ac8c3ae7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ac8c3ae7

Branch: refs/heads/branch-2
Commit: ac8c3ae7848fc7172f3e5d0a614f3d858ead2782
Parents: 986162f
Author: Anu Engineer 
Authored: Wed Aug 17 14:58:50 2016 -0700
Committer: Anu Engineer 
Committed: Wed Aug 17 15:04:02 2016 -0700

--
 .../java/org/apache/hadoop/fs/FileSystem.java   | 50 ++--
 1 file changed, 25 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac8c3ae7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index d79419b..52b9484 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -934,13 +934,13 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* Create an FSDataOutputStream at the indicated Path with write-progress
* reporting.
* @param f the file name to open
-   * @param permission
+   * @param permission file permission
* @param overwrite if a file with this name already exists, then if true,
*   the file will be overwritten, and if false an error will be thrown.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
* @throws IOException
* @see #setPermission(Path, FsPermission)
*/
@@ -956,12 +956,12 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* Create an FSDataOutputStream at the indicated Path with write-progress
* reporting.
* @param f the file name to open
-   * @param permission
+   * @param permission file permission
* @param flags {@link CreateFlag}s to use for this stream.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
* @throws IOException
* @see #setPermission(Path, FsPermission)
*/
@@ -980,12 +980,12 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* Create an FSDataOutputStream at the indicated Path with a custom
* checksum option
* @param f the file name to open
-   * @param permission
+   * @param permission file permission
* @param flags {@link CreateFlag}s to use for this stream.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
* @param checksumOpt checksum parameter. If null, the values
*found in conf will be used.
* @throws IOException
@@ -1094,8 +1094,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* the file will be overwritten, and if false an error will be thrown.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
* @throws IOException
* @see #setPermission(Path, FsPermission)
*/
@@ -1112,13 +1112,13 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* reporting. Same as create(), except fails if parent directory doesn't
* already exist.
* @param f the file name to open
-   * @param permission
+   * @param permission file permission
* @param overwrite if a file with this name already exists, then if true,
* the file will be overwritten, and if false an error will be thrown.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress 
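
For reference, a short sketch calling the create() overload whose javadoc this
patch completes; the path and parameter values are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

public class CreateDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(
        new Path("/tmp/demo.txt"),       // f: the file name to open
        FsPermission.getFileDefault(),   // permission: file permission
        true,                            // overwrite an existing file
        4096,                            // bufferSize
        (short) 3,                       // replication
        128L * 1024 * 1024,              // blockSize: 128 MB
        new Progressable() {             // progress: the progress reporter
          @Override
          public void progress() { }
        });
    out.writeUTF("hello");
    out.close();
  }
}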

hadoop git commit: HDFS-9530. ReservedSpace is not cleared for abandoned Blocks (Brahma Reddy Battula)

2016-08-17 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 2586b7500 -> 3f87efc0c


HDFS-9530. ReservedSpace is not cleared for abandoned Blocks (Brahma Reddy 
Battula)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3f87efc0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3f87efc0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3f87efc0

Branch: refs/heads/branch-2.6
Commit: 3f87efc0c8f189253493c927b3c39995c61d6849
Parents: 2586b75
Author: Arpit Agarwal 
Authored: Wed Aug 17 14:59:04 2016 -0700
Committer: Arpit Agarwal 
Committed: Wed Aug 17 14:59:04 2016 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/datanode/DataNodeFaultInjector.java  |  2 +
 .../hdfs/server/datanode/DataXceiver.java   |  1 +
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  6 ++-
 .../fsdataset/impl/TestRbwSpaceReservation.java | 48 +++-
 5 files changed, 58 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f87efc0/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index cc27d77..da6abd9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -30,6 +30,9 @@ Release 2.6.5 - UNRELEASED
 HDFS-8581. ContentSummary on / skips further counts on yielding lock
 (J.Andreina via vinayakumarb)
 
+HDFS-9530. ReservedSpace is not cleared for abandoned Blocks.
+(Brahma Reddy Battula)
+
 Release 2.6.4 - 2016-02-11
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f87efc0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
index e66d66b..2627788 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
@@ -49,4 +49,6 @@ public class DataNodeFaultInjector {
   public void sendShortCircuitShmResponse() throws IOException {}
 
   public void noRegistration() throws IOException { }
+
+  public void failMirrorConnection() throws IOException { }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f87efc0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
index d7551cc..8e9628a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
@@ -665,6 +665,7 @@ class DataXceiver extends Receiver implements Runnable {
 mirrorTarget = NetUtils.createSocketAddr(mirrorNode);
 mirrorSock = datanode.newSocket();
 try {
+  DataNodeFaultInjector.get().failMirrorConnection();
   int timeoutValue = dnConf.socketTimeout
   + (HdfsServerConstants.READ_TIMEOUT_EXTENSION * targets.length);
   int writeTimeout = dnConf.socketWriteTimeout + 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f87efc0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index f00416b..2008b3a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -86,6 +86,7 @@ import 
org.apache.hadoop.hdfs.server.datanode.UnexpectedReplicaStateException;
 import 
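
Besides the space-accounting fix itself, the patch adds a test hook,
failMirrorConnection(), to DataNodeFaultInjector. A generic sketch of that
fault-injection pattern, with the class and setter simplified (names other
than failMirrorConnection are illustrative):

import java.io.IOException;

class FaultInjector {
  private static FaultInjector instance = new FaultInjector();

  static FaultInjector get() { return instance; }
  static void set(FaultInjector injector) { instance = injector; }

  /** No-op in production; a test overrides it to throw. */
  void failMirrorConnection() throws IOException { }
}

class PipelineSetupSketch {
  void connectToMirror() throws IOException {
    FaultInjector.get().failMirrorConnection();  // nearly free in production
    // ... open the socket to the downstream DataNode ...
  }
}

class ReservedSpaceTestSketch {
  void run() throws IOException {
    FaultInjector.set(new FaultInjector() {
      @Override
      void failMirrorConnection() throws IOException {
        throw new IOException("injected mirror connection failure");
      }
    });
    // exercise the write path, then assert the reserved space was released
  }
}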

hadoop git commit: HADOOP-11786. Fix Javadoc typos in org.apache.hadoop.fs.FileSystem. Contributed by Andras Bokor.

2016-08-17 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/trunk 822d661b8 -> c57523163


HADOOP-11786. Fix Javadoc typos in org.apache.hadoop.fs.FileSystem. Contributed 
by Andras Bokor.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5752316
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5752316
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5752316

Branch: refs/heads/trunk
Commit: c57523163f8936aec74ccf3b5a8db6f73b428bbf
Parents: 822d661
Author: Anu Engineer 
Authored: Wed Aug 17 14:58:50 2016 -0700
Committer: Anu Engineer 
Committed: Wed Aug 17 15:00:14 2016 -0700

--
 .../java/org/apache/hadoop/fs/FileSystem.java   | 50 ++--
 1 file changed, 25 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5752316/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 1fdfac0..c366729 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -934,13 +934,13 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* Create an FSDataOutputStream at the indicated Path with write-progress
* reporting.
* @param f the file name to open
-   * @param permission
+   * @param permission file permission
* @param overwrite if a file with this name already exists, then if true,
*   the file will be overwritten, and if false an error will be thrown.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
* @throws IOException
* @see #setPermission(Path, FsPermission)
*/
@@ -956,12 +956,12 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* Create an FSDataOutputStream at the indicated Path with write-progress
* reporting.
* @param f the file name to open
-   * @param permission
+   * @param permission file permission
* @param flags {@link CreateFlag}s to use for this stream.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
* @throws IOException
* @see #setPermission(Path, FsPermission)
*/
@@ -980,12 +980,12 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* Create an FSDataOutputStream at the indicated Path with a custom
* checksum option
* @param f the file name to open
-   * @param permission
+   * @param permission file permission
* @param flags {@link CreateFlag}s to use for this stream.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
* @param checksumOpt checksum parameter. If null, the values
*found in conf will be used.
* @throws IOException
@@ -1094,8 +1094,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* the file will be overwritten, and if false an error will be thrown.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
* @throws IOException
* @see #setPermission(Path, FsPermission)
*/
@@ -1112,13 +1112,13 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* reporting. Same as create(), except fails if parent directory doesn't
* already exist.
* @param f the file name to open
-   * @param permission
+   * @param permission file permission
* @param overwrite if a file with this name already exists, then if true,
* the file will be overwritten, and if false an error will be thrown.
* @param bufferSize the size of the buffer to be used.
* @param replication required block replication for the file.
-   * @param blockSize
-   * @param progress
+   * @param blockSize block size
+   * @param progress the progress reporter
  

[2/2] hadoop git commit: HADOOP-13208. S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories. Contributed by Steve Loughran.

2016-08-17 Thread cnauroth
HADOOP-13208. S3A listFiles(recursive=true) to do a bulk listObjects instead of 
walking the pseudo-tree of directories. Contributed by Steve Loughran.

(cherry picked from commit 822d661b8fcc42bec6eea958d9fd02ef1aaa4b6c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/986162f9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/986162f9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/986162f9

Branch: refs/heads/branch-2
Commit: 986162f9776837b1343073d721f89be8f3362085
Parents: 2486c4c
Author: Chris Nauroth 
Authored: Wed Aug 17 14:54:54 2016 -0700
Committer: Chris Nauroth 
Committed: Wed Aug 17 14:55:07 2016 -0700

--
 .../java/org/apache/hadoop/fs/s3a/Listing.java  | 594 +++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 372 +++-
 .../hadoop/fs/s3a/S3AInstrumentation.java   |   1 +
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java |  12 +
 .../org/apache/hadoop/fs/s3a/Statistic.java |   2 +
 .../s3a/TestS3AContractGetFileStatus.java   |  16 +
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java  |   8 +-
 .../s3a/scale/TestS3ADirectoryPerformance.java  |  58 +-
 8 files changed, 897 insertions(+), 166 deletions(-)
--


[1/2] hadoop git commit: HADOOP-13208. S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories. Contributed by Steve Loughran.

2016-08-17 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2486c4c63 -> 986162f97
  refs/heads/trunk 869393643 -> 822d661b8


HADOOP-13208. S3A listFiles(recursive=true) to do a bulk listObjects instead of 
walking the pseudo-tree of directories. Contributed by Steve Loughran.
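
A note on the technique: because S3 presents a flat keyspace, a recursive
listFiles() does not need to descend directories one LIST call at a time; a
single listObjects request with a prefix and no delimiter returns every key
under the path, one page at a time. Below is a minimal sketch of that idea
against the AWS SDK v1 (illustrative only -- names and structure are not
taken from the patch):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.ListObjectsRequest;
    import com.amazonaws.services.s3.model.ObjectListing;
    import com.amazonaws.services.s3.model.S3ObjectSummary;

    /** Sketch: list every key under a prefix in one paged, flat scan. */
    static void listRecursive(AmazonS3 s3, String bucket, String prefix) {
      ListObjectsRequest request = new ListObjectsRequest()
          .withBucketName(bucket)
          .withPrefix(prefix);   // no delimiter => flat, recursive listing
      ObjectListing page = s3.listObjects(request);
      while (true) {
        for (S3ObjectSummary summary : page.getObjectSummaries()) {
          System.out.println(summary.getKey() + " " + summary.getSize());
        }
        if (!page.isTruncated()) {
          break;                 // last page reached
        }
        page = s3.listNextBatchOfObjects(page);  // fetch the next page
      }
    }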


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/822d661b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/822d661b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/822d661b

Branch: refs/heads/trunk
Commit: 822d661b8fcc42bec6eea958d9fd02ef1aaa4b6c
Parents: 8693936
Author: Chris Nauroth 
Authored: Wed Aug 17 14:54:54 2016 -0700
Committer: Chris Nauroth 
Committed: Wed Aug 17 14:54:54 2016 -0700

--
 .../java/org/apache/hadoop/fs/s3a/Listing.java  | 594 +++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 372 +++-
 .../hadoop/fs/s3a/S3AInstrumentation.java   |   1 +
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java |  12 +
 .../org/apache/hadoop/fs/s3a/Statistic.java |   2 +
 .../s3a/TestS3AContractGetFileStatus.java   |  16 +
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java  |   8 +-
 .../s3a/scale/TestS3ADirectoryPerformance.java  |  58 +-
 8 files changed, 897 insertions(+), 166 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/822d661b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java
new file mode 100644
index 000..4120b20
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java
@@ -0,0 +1,594 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.model.ListObjectsRequest;
+import com.amazonaws.services.s3.model.ObjectListing;
+import com.amazonaws.services.s3.model.S3ObjectSummary;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.slf4j.Logger;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.ListIterator;
+import java.util.NoSuchElementException;
+
+import static org.apache.hadoop.fs.s3a.Constants.S3N_FOLDER_SUFFIX;
+import static org.apache.hadoop.fs.s3a.S3AUtils.createFileStatus;
+import static org.apache.hadoop.fs.s3a.S3AUtils.objectRepresentsDirectory;
+import static org.apache.hadoop.fs.s3a.S3AUtils.stringify;
+import static org.apache.hadoop.fs.s3a.S3AUtils.translateException;
+
+/**
+ * Place for the S3A listing classes; keeps all the small classes under 
control.
+ */
+public class Listing {
+
+  private final S3AFileSystem owner;
+  private static final Logger LOG = S3AFileSystem.LOG;
+
+  public Listing(S3AFileSystem owner) {
+this.owner = owner;
+  }
+
+  /**
+   * Create a FileStatus iterator against a path, with a given
+   * list object request.
+   * @param listPath path of the listing
+   * @param request initial request to make
+   * @param filter the filter on which paths to accept
+   * @param acceptor the class/predicate to decide which entries to accept
+   * in the listing based on the full file status.
+   * @return the iterator
+   * @throws IOException IO Problems
+   */
+  FileStatusListingIterator createFileStatusListingIterator(
+  Path listPath,
+  ListObjectsRequest request,
+  PathFilter filter,
+  Listing.FileStatusAcceptor acceptor) throws IOException {
+return new FileStatusListingIterator(
+new ObjectListingIterator(listPath, request),
+filter,
+acceptor);
+  }
+
+  /**
+   * Create a located status iterator over a file status 

hadoop git commit: HDFS-10745. Directly resolve paths into INodesInPath. Contributed by Daryn Sharp.

2016-08-17 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk e3037c564 -> 869393643


HDFS-10745. Directly resolve paths into INodesInPath. Contributed by Daryn 
Sharp.
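
The shape of the change, simplified from the hunks below: the old code
resolved the path string first and then looked up the inodes in a second
step, while the new code resolves straight to an INodesInPath in one call
made under the lock:

    // before: two steps, with a re-parse of the path in between
    // src = fsd.resolvePath(pc, src);
    // iip = fsd.getINodesInPath4Write(FSDirectory.normalizePath(src), true);

    // after: one resolution under the write lock
    // iip = fsd.resolvePathForWrite(pc, src);
    // src = iip.getPath();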


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/86939364
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/86939364
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/86939364

Branch: refs/heads/trunk
Commit: 869393643de23dcb010cc33091c8eb398de0fd6c
Parents: e3037c5
Author: Kihwal Lee 
Authored: Wed Aug 17 15:53:03 2016 -0500
Committer: Kihwal Lee 
Committed: Wed Aug 17 15:53:03 2016 -0500

--
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java |  29 +++--
 .../hdfs/server/namenode/FSDirAppendOp.java |   4 +-
 .../hdfs/server/namenode/FSDirAttrOp.java   |  23 ++--
 .../hdfs/server/namenode/FSDirDeleteOp.java |   4 +-
 .../server/namenode/FSDirEncryptionZoneOp.java  |   7 +-
 .../server/namenode/FSDirErasureCodingOp.java   |   8 +-
 .../hdfs/server/namenode/FSDirMkdirOp.java  |   4 +-
 .../hdfs/server/namenode/FSDirRenameOp.java |  41 +++
 .../server/namenode/FSDirStatAndListingOp.java  |  51 -
 .../hdfs/server/namenode/FSDirSymlinkOp.java|   4 +-
 .../hdfs/server/namenode/FSDirTruncateOp.java   |   4 +-
 .../hdfs/server/namenode/FSDirWriteFileOp.java  |  84 ---
 .../hdfs/server/namenode/FSDirXAttrOp.java  |  25 ++---
 .../hdfs/server/namenode/FSDirectory.java   | 107 +++
 .../hdfs/server/namenode/FSNamesystem.java  |  49 +++--
 .../hdfs/server/namenode/INodesInPath.java  |   8 ++
 .../hadoop/hdfs/server/namenode/TestFsck.java   |   4 +-
 17 files changed, 230 insertions(+), 226 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/86939364/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
index 296bed2..2153f02 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.AclException;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 
 import java.io.IOException;
@@ -39,11 +38,11 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
-  iip = fsd.getINodesInPath4Write(FSDirectory.normalizePath(src), true);
+  iip = fsd.resolvePathForWrite(pc, src);
+  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   INode inode = FSDirectory.resolveLastINode(iip);
   int snapshotId = iip.getLatestSnapshotId();
@@ -64,11 +63,11 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
-  iip = fsd.getINodesInPath4Write(FSDirectory.normalizePath(src), true);
+  iip = fsd.resolvePathForWrite(pc, src);
+  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   INode inode = FSDirectory.resolveLastINode(iip);
   int snapshotId = iip.getLatestSnapshotId();
@@ -88,11 +87,11 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
-  iip = fsd.getINodesInPath4Write(FSDirectory.normalizePath(src), true);
+  iip = fsd.resolvePathForWrite(pc, src);
+  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   INode inode = FSDirectory.resolveLastINode(iip);
   int snapshotId = iip.getLatestSnapshotId();
@@ -112,11 +111,11 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
-  iip = fsd.getINodesInPath4Write(src);
+  iip = fsd.resolvePathForWrite(pc, src);
+  src = iip.getPath();
   

hadoop git commit: Revert "HDFS-9395. Make HDFS audit logging consistant. Contributed by Kuhu Shukla."

2016-08-17 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7.3 7a4746cbb -> 22fd150e9


Revert "HDFS-9395. Make HDFS audit logging consistant. Contributed by Kuhu 
Shukla."

Reverting this on branch-2.* as it's an incompatible change.

This reverts commit 25b3531eb40a32a602574e5cfc1ffe028044bcc9.
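
The incompatibility is in where the audit record is emitted. The reverted
change logged after the try/finally (plus an explicit catch of
AccessControlException), while the restored branch-2 code logs from inside
the finally block, so every exit path produces the same audit output. A
simplified sketch of the restored pattern (doOperation, logAuditEvent and
src are stand-ins for the real code):

    boolean success = false;
    try {
      doOperation();            // may throw any exception
      success = true;
    } finally {
      // finally always runs, so failures of every kind are audited with
      // success == false -- the behaviour tools parsing the audit log expect
      logAuditEvent(success, "op", src);
    }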


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/22fd150e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/22fd150e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/22fd150e

Branch: refs/heads/branch-2.7.3
Commit: 22fd150e99c0148d7e2c1cc77428c6181cc931ba
Parents: 7a4746c
Author: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Authored: Wed Aug 17 13:41:15 2016 -0700
Committer: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Committed: Wed Aug 17 13:41:15 2016 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   2 -
 .../hdfs/server/namenode/FSNamesystem.java  | 181 ++
 .../namenode/TestAuditLoggerWithCommands.java   | 562 ---
 3 files changed, 59 insertions(+), 686 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/22fd150e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 748489d..a243834 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -36,8 +36,6 @@ Release 2.7.3 - 2016-08-19
 HDFS-8578. On upgrade, Datanode should process all storage/data dirs in
 parallel.  (vinayakumarb and szetszwo via szetszwo)
 
-HDFS-9395. Make HDFS audit logging consistant (Kuhu Shukla via kihwal)
-
 HDFS-9048. DistCp documentation is out-of-dated
 (Daisuke Kobayashi via iwasakims)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/22fd150e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 20824cf..81a4ea4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -1883,16 +1883,13 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   checkNameNodeSafeMode("Cannot concat " + target);
   stat = FSDirConcatOp.concat(dir, target, srcs, logRetryCache);
   success = true;
-} catch (AccessControlException ace) {
-  logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
-  throw ace;
 } finally {
   writeUnlock();
   if (success) {
 getEditLog().logSync();
   }
+  logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
 }
-logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
   }
 
   /**
@@ -3662,8 +3659,9 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 boolean success = ret != null && ret.success;
 if (success) {
   getEditLog().logSync();
-  logAuditEvent(success, "rename", src, dst, ret.auditStat);
 }
+logAuditEvent(success, "rename", src, dst,
+ret == null ? null : ret.auditStat);
 return success;
   }
 
@@ -3925,19 +3923,16 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 checkOperation(OperationCategory.READ);
 readLock();
 boolean success = true;
-ContentSummary cs;
 try {
   checkOperation(OperationCategory.READ);
-  cs = FSDirStatAndListingOp.getContentSummary(dir, src);
+  return FSDirStatAndListingOp.getContentSummary(dir, src);
 } catch (AccessControlException ace) {
   success = false;
-  logAuditEvent(success, "contentSummary", src);
   throw ace;
 } finally {
   readUnlock();
+  logAuditEvent(success, "contentSummary", src);
 }
-logAuditEvent(success, "contentSummary", src);
-return cs;
   }
 
   /**
@@ -3957,16 +3952,13 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   checkNameNodeSafeMode("Cannot set quota on " + src);
   FSDirAttrOp.setQuota(dir, src, nsQuota, ssQuota, type);
   success = true;
-} catch (AccessControlException ace) {
-  logAuditEvent(success, "setQuota", src);
-  throw ace;
 } finally {
   writeUnlock();
   if (success) {
 getEditLog().logSync();
   }

hadoop git commit: Revert "HDFS-9395. Make HDFS audit logging consistant. Contributed by Kuhu Shukla."

2016-08-17 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 00eb79062 -> d65024edd


Revert "HDFS-9395. Make HDFS audit logging consistant. Contributed by Kuhu 
Shukla."

Reverting this on branch-2.* as it's an incompatible change.

This reverts commit 83f7f62be379045ad6933689b21b76c7086f919d.

(cherry picked from commit 2486c4c63a35fcef7338ea63f0d8aafa778cd05d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d65024ed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d65024ed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d65024ed

Branch: refs/heads/branch-2.8
Commit: d65024eddc5e6c0590e99e02a41a3845594ef69f
Parents: 00eb790
Author: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Authored: Wed Aug 17 13:28:00 2016 -0700
Committer: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Committed: Wed Aug 17 13:34:35 2016 -0700

--
 .../hdfs/server/namenode/FSNamesystem.java  | 185 ++
 .../namenode/TestAuditLoggerWithCommands.java   | 585 ---
 2 files changed, 60 insertions(+), 710 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d65024ed/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 6cba82e..a8dc8fa 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -1855,16 +1855,13 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   checkNameNodeSafeMode("Cannot concat " + target);
   stat = FSDirConcatOp.concat(dir, target, srcs, logRetryCache);
   success = true;
-} catch (AccessControlException ace) {
-  logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
-  throw ace;
 } finally {
   writeUnlock();
   if (success) {
 getEditLog().logSync();
   }
+  logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
 }
-logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
   }
 
   /**
@@ -2740,8 +2737,9 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 boolean success = ret != null && ret.success;
 if (success) {
   getEditLog().logSync();
-  logAuditEvent(success, "rename", src, dst, ret.auditStat);
 }
+logAuditEvent(success, "rename", src, dst,
+ret == null ? null : ret.auditStat);
 return success;
   }
 
@@ -3005,19 +3003,16 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 checkOperation(OperationCategory.READ);
 readLock();
 boolean success = true;
-ContentSummary cs;
 try {
   checkOperation(OperationCategory.READ);
-  cs = FSDirStatAndListingOp.getContentSummary(dir, src);
+  return FSDirStatAndListingOp.getContentSummary(dir, src);
 } catch (AccessControlException ace) {
   success = false;
-  logAuditEvent(success, "contentSummary", src);
   throw ace;
 } finally {
   readUnlock();
+  logAuditEvent(success, "contentSummary", src);
 }
-logAuditEvent(success, "contentSummary", src);
-return cs;
   }
 
   /**
@@ -3036,21 +3031,18 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   QuotaUsage getQuotaUsage(final String src) throws IOException {
 checkOperation(OperationCategory.READ);
-QuotaUsage quotaUsage;
 readLock();
 boolean success = true;
 try {
   checkOperation(OperationCategory.READ);
-  quotaUsage = FSDirStatAndListingOp.getQuotaUsage(dir, src);
+  return FSDirStatAndListingOp.getQuotaUsage(dir, src);
 } catch (AccessControlException ace) {
   success = false;
-  logAuditEvent(success, "quotaUsage", src);
   throw ace;
 } finally {
   readUnlock();
+  logAuditEvent(success, "quotaUsage", src);
 }
-logAuditEvent(success, "quotaUsage", src);
-return quotaUsage;
   }
 
   /**
@@ -3073,16 +3065,13 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   checkNameNodeSafeMode("Cannot set quota on " + src);
   FSDirAttrOp.setQuota(dir, src, nsQuota, ssQuota, type);
   success = true;
-} catch (AccessControlException ace) {
-  logAuditEvent(success, "setQuota", src);
-  throw ace;
 } finally {
   writeUnlock();
   if (success) {

hadoop git commit: Revert "HDFS-9395. Make HDFS audit logging consistant. Contributed by Kuhu Shukla."

2016-08-17 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9dcb7197d -> 2486c4c63


Revert "HDFS-9395. Make HDFS audit logging consistant. Contributed by Kuhu 
Shukla."

Reverting this on branch-2.* as it's an incompatible change.

This reverts commit 83f7f62be379045ad6933689b21b76c7086f919d.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2486c4c6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2486c4c6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2486c4c6

Branch: refs/heads/branch-2
Commit: 2486c4c63a35fcef7338ea63f0d8aafa778cd05d
Parents: 9dcb719
Author: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Authored: Wed Aug 17 13:28:00 2016 -0700
Committer: Vinod Kumar Vavilapalli (I am also known as @tshooter.) 

Committed: Wed Aug 17 13:33:09 2016 -0700

--
 .../hdfs/server/namenode/FSNamesystem.java  | 185 ++
 .../namenode/TestAuditLoggerWithCommands.java   | 585 ---
 2 files changed, 60 insertions(+), 710 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2486c4c6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 2d9a069..73c8c8b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -1855,16 +1855,13 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   checkNameNodeSafeMode("Cannot concat " + target);
   stat = FSDirConcatOp.concat(dir, target, srcs, logRetryCache);
   success = true;
-} catch (AccessControlException ace) {
-  logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
-  throw ace;
 } finally {
   writeUnlock();
   if (success) {
 getEditLog().logSync();
   }
+  logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
 }
-logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
   }
 
   /**
@@ -2728,8 +2725,9 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 boolean success = ret != null && ret.success;
 if (success) {
   getEditLog().logSync();
-  logAuditEvent(success, "rename", src, dst, ret.auditStat);
 }
+logAuditEvent(success, "rename", src, dst,
+ret == null ? null : ret.auditStat);
 return success;
   }
 
@@ -2943,19 +2941,16 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 checkOperation(OperationCategory.READ);
 readLock();
 boolean success = true;
-ContentSummary cs;
 try {
   checkOperation(OperationCategory.READ);
-  cs = FSDirStatAndListingOp.getContentSummary(dir, src);
+  return FSDirStatAndListingOp.getContentSummary(dir, src);
 } catch (AccessControlException ace) {
   success = false;
-  logAuditEvent(success, "contentSummary", src);
   throw ace;
 } finally {
   readUnlock();
+  logAuditEvent(success, "contentSummary", src);
 }
-logAuditEvent(success, "contentSummary", src);
-return cs;
   }
 
   /**
@@ -2974,21 +2969,18 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
*/
   QuotaUsage getQuotaUsage(final String src) throws IOException {
 checkOperation(OperationCategory.READ);
-QuotaUsage quotaUsage;
 readLock();
 boolean success = true;
 try {
   checkOperation(OperationCategory.READ);
-  quotaUsage = FSDirStatAndListingOp.getQuotaUsage(dir, src);
+  return FSDirStatAndListingOp.getQuotaUsage(dir, src);
 } catch (AccessControlException ace) {
   success = false;
-  logAuditEvent(success, "quotaUsage", src);
   throw ace;
 } finally {
   readUnlock();
+  logAuditEvent(success, "quotaUsage", src);
 }
-logAuditEvent(success, "quotaUsage", src);
-return quotaUsage;
   }
 
   /**
@@ -3011,16 +3003,13 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   checkNameNodeSafeMode("Cannot set quota on " + src);
   FSDirAttrOp.setQuota(dir, src, nsQuota, ssQuota, type);
   success = true;
-} catch (AccessControlException ace) {
-  logAuditEvent(success, "setQuota", src);
-  throw ace;
 } finally {
   writeUnlock();
   if (success) {
 getEditLog().logSync();
   }
+  logAuditEvent(success, 

hadoop git commit: YARN-5523. Yarn running container log fetching causes OutOfMemoryError (Xuan Gong via Varun Saxena)

2016-08-17 Thread varunsaxena
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 848ae35a9 -> 9dcb7197d


YARN-5523. Yarn running container log fetching causes OutOfMemoryError (Xuan 
Gong via Varun Saxena)
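
The OutOfMemoryError came from materialising an entire running-container log
as a single String via response.getEntity(String.class). The fix streams the
response body through a fixed 64 KB buffer instead, so memory use no longer
grows with the size of the log. The core of the pattern, reduced to plain
java.io (the patch applies it to the Jersey ClientResponse entity stream):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    /** Sketch: copy a response body with a bounded buffer instead of
     *  buffering the whole entity in memory at once. */
    static void copyStream(InputStream in, OutputStream out) throws IOException {
      byte[] buffer = new byte[65536];  // fixed 64 KB window, as in the patch
      int len;
      while ((len = in.read(buffer)) != -1) {
        out.write(buffer, 0, len);
      }
    }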


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9dcb7197
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9dcb7197
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9dcb7197

Branch: refs/heads/branch-2
Commit: 9dcb7197d4657829cbcefe6bdb89c236875b7902
Parents: 848ae35
Author: Varun Saxena 
Authored: Thu Aug 18 01:53:55 2016 +0530
Committer: Varun Saxena 
Committed: Thu Aug 18 01:53:55 2016 +0530

--
 .../apache/hadoop/yarn/client/cli/LogsCLI.java  | 74 +++
 .../hadoop/yarn/client/cli/TestLogsCLI.java | 95 
 2 files changed, 150 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9dcb7197/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
index c0d8795..d2d9051 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.client.cli;
 
 import java.io.File;
 import java.io.IOException;
+import java.io.InputStream;
 import java.io.PrintStream;
 import java.util.ArrayList;
 import java.util.Arrays;
@@ -41,6 +42,7 @@ import org.apache.commons.cli.HelpFormatter;
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.Options;
 import org.apache.commons.cli.ParseException;
+import org.apache.commons.io.IOUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
@@ -463,15 +465,7 @@ public class LogsCLI extends Configured implements Tool {
 PrintStream out = logCliHelper.createPrintStream(localDir, nodeId,
 containerIdStr);
 try {
-  // fetch all the log files for the container
-  // filter the log files based on the given -log_files pattern
-  List<PerLogFileInfo> allLogFileInfos =
-  getContainerLogFiles(getConf(), containerIdStr, nodeHttpAddress);
-  List<String> fileNames = new ArrayList<String>();
-  for (PerLogFileInfo fileInfo : allLogFileInfos) {
-fileNames.add(fileInfo.getFileName());
-  }
-  Set<String> matchedFiles = getMatchedLogFiles(request, fileNames,
+  Set<String> matchedFiles = getMatchedContainerLogFiles(request,
   useRegex);
   if (matchedFiles.isEmpty()) {
 System.err.println("Can not find any log file matching the pattern: "
@@ -488,22 +482,33 @@ public class LogsCLI extends Configured implements Tool {
   out.println(containerString);
   out.println(StringUtils.repeat("=", containerString.length()));
   boolean foundAnyLogs = false;
+  byte[] buffer = new byte[65536];
   for (String logFile : newOptions.getLogTypes()) {
 out.println("LogType:" + logFile);
 out.println("Log Upload Time:"
 + Times.format(System.currentTimeMillis()));
 out.println("Log Contents:");
+InputStream is = null;
 try {
-  WebResource webResource =
-  webServiceClient.resource(WebAppUtils.getHttpSchemePrefix(conf)
-  + nodeHttpAddress);
-  ClientResponse response =
-  webResource.path("ws").path("v1").path("node")
-.path("containers").path(containerIdStr).path("logs")
-.path(logFile)
-.queryParam("size", Long.toString(request.getBytes()))
-.accept(MediaType.TEXT_PLAIN).get(ClientResponse.class);
-  out.println(response.getEntity(String.class));
+  ClientResponse response = getResponeFromNMWebService(conf,
+  webServiceClient, request, logFile);
+  if (response != null && response.getClientResponseStatus().equals(
+  ClientResponse.Status.OK)) {
+is = response.getEntityInputStream();
+int len = 0;
+while((len = is.read(buffer)) != -1) {
+  out.write(buffer, 0, len);
+}
+out.println();
+  } else {
+out.println("Can not get any logs for the log file: " + logFile);
+String msg = "Response from 

hadoop git commit: HADOOP-13494. ReconfigurableBase can log sensitive information. Contributed by Sean Mackrory.

2016-08-17 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 f1d91cea2 -> 2586b7500


HADOOP-13494. ReconfigurableBase can log sensitive information. Contributed by 
Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2586b750
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2586b750
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2586b750

Branch: refs/heads/branch-2.6
Commit: 2586b750043fc6f48c61f8a5cbc7a944966e0c85
Parents: f1d91ce
Author: Andrew Wang 
Authored: Wed Aug 17 13:18:31 2016 -0700
Committer: Andrew Wang 
Committed: Wed Aug 17 13:18:44 2016 -0700

--
 .../org/apache/hadoop/conf/ConfigRedactor.java  | 84 
 .../apache/hadoop/conf/ReconfigurableBase.java  |  9 ++-
 .../fs/CommonConfigurationKeysPublic.java   | 10 +++
 .../src/main/resources/core-default.xml | 10 +++
 .../apache/hadoop/conf/TestConfigRedactor.java  | 72 +
 5 files changed, 183 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2586b750/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
new file mode 100644
index 000..0ba756c
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
@@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.conf;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.regex.Pattern;
+
+import static org.apache.hadoop.fs.CommonConfigurationKeys.*;
+
+/**
+ * Tool for redacting sensitive information when displaying config parameters.
+ *
+ * Some config parameters contain sensitive information (for example, cloud
+ * storage keys). When these properties are displayed in plaintext, we should
+ * redact their values as appropriate.
+ */
+public class ConfigRedactor {
+
+  private static final String REDACTED_TEXT = "<redacted>";
+
+  private List<Pattern> compiledPatterns;
+
+  public ConfigRedactor(Configuration conf) {
+String sensitiveRegexList = conf.get(
+HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS,
+HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS_DEFAULT);
+List<String> sensitiveRegexes = Arrays.asList(sensitiveRegexList.split(","));
+compiledPatterns = new ArrayList<Pattern>();
+for (String regex : sensitiveRegexes) {
+  Pattern p = Pattern.compile(regex);
+  compiledPatterns.add(p);
+}
+  }
+
+  /**
+   * Given a key / value pair, decides whether or not to redact and returns
+   * either the original value or text indicating it has been redacted.
+   *
+   * @param key
+   * @param value
+   * @return Original value, or text indicating it has been redacted
+   */
+  public String redact(String key, String value) {
+if (configIsSensitive(key)) {
+  return REDACTED_TEXT;
+}
+return value;
+  }
+
+  /**
+   * Matches given config key against patterns and determines whether or not
+   * it should be considered sensitive enough to redact in logs and other
+   * plaintext displays.
+   *
+   * @param key
+   * @return True if parameter is considered sensitive
+   */
+  private boolean configIsSensitive(String key) {
+for (Pattern regex : compiledPatterns) {
+  if (regex.matcher(key).find()) {
+return true;
+  }
+}
+return false;
+  }
+}
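
A hypothetical usage sketch (the key and value below are illustrative;
whether a given key is redacted depends on the
hadoop.security.sensitive-config-keys patterns in effect):

    import org.apache.hadoop.conf.ConfigRedactor;
    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    ConfigRedactor redactor = new ConfigRedactor(conf);
    // With the default patterns, keys matching e.g. fs.s3.*[Ss]ecret.?[Kk]ey
    // are considered sensitive, so the value is replaced with "<redacted>".
    String shown = redactor.redact("fs.s3a.secret.key", "actual-secret");
    System.out.println("fs.s3a.secret.key = " + shown);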

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2586b750/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
 

hadoop git commit: HADOOP-13494. ReconfigurableBase can log sensitive information. Contributed by Sean Mackrory.

2016-08-17 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 040a1b7b9 -> d59f68899


HADOOP-13494. ReconfigurableBase can log sensitive information. Contributed by 
Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d59f6889
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d59f6889
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d59f6889

Branch: refs/heads/branch-2.7
Commit: d59f688992fcb723c48451ee2db220baae6c9d8e
Parents: 040a1b7
Author: Andrew Wang 
Authored: Wed Aug 17 13:17:52 2016 -0700
Committer: Andrew Wang 
Committed: Wed Aug 17 13:17:52 2016 -0700

--
 .../org/apache/hadoop/conf/ConfigRedactor.java  | 84 
 .../apache/hadoop/conf/ReconfigurableBase.java  |  9 ++-
 .../fs/CommonConfigurationKeysPublic.java   | 10 +++
 .../src/main/resources/core-default.xml | 10 +++
 .../apache/hadoop/conf/TestConfigRedactor.java  | 72 +
 5 files changed, 183 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d59f6889/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
 

hadoop git commit: YARN-5523. Yarn running container log fetching causes OutOfMemoryError (Xuan Gong via Varun Saxena)

2016-08-17 Thread varunsaxena
Repository: hadoop
Updated Branches:
  refs/heads/trunk f80a72983 -> e3037c564


YARN-5523. Yarn running container log fetching causes OutOfMemoryError (Xuan 
Gong via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e3037c56
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e3037c56
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e3037c56

Branch: refs/heads/trunk
Commit: e3037c564117fe53742c130665b047dd17eff6d0
Parents: f80a729
Author: Varun Saxena 
Authored: Thu Aug 18 01:45:33 2016 +0530
Committer: Varun Saxena 
Committed: Thu Aug 18 01:45:33 2016 +0530

--
 .../apache/hadoop/yarn/client/cli/LogsCLI.java  | 74 +++
 .../hadoop/yarn/client/cli/TestLogsCLI.java | 95 
 2 files changed, 150 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e3037c56/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
index 908d379..25e3a46 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.client.cli;
 
 import java.io.File;
 import java.io.IOException;
+import java.io.InputStream;
 import java.io.PrintStream;
 import java.util.ArrayList;
 import java.util.Arrays;
@@ -41,6 +42,7 @@ import org.apache.commons.cli.HelpFormatter;
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.Options;
 import org.apache.commons.cli.ParseException;
+import org.apache.commons.io.IOUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
@@ -462,15 +464,7 @@ public class LogsCLI extends Configured implements Tool {
 PrintStream out = logCliHelper.createPrintStream(localDir, nodeId,
 containerIdStr);
 try {
-  // fetch all the log files for the container
-  // filter the log files based on the given -log_files pattern
-  List<PerLogFileInfo> allLogFileInfos =
-  getContainerLogFiles(getConf(), containerIdStr, nodeHttpAddress);
-  List<String> fileNames = new ArrayList<String>();
-  for (PerLogFileInfo fileInfo : allLogFileInfos) {
-fileNames.add(fileInfo.getFileName());
-  }
-  Set<String> matchedFiles = getMatchedLogFiles(request, fileNames,
+  Set<String> matchedFiles = getMatchedContainerLogFiles(request,
   useRegex);
   if (matchedFiles.isEmpty()) {
 System.err.println("Can not find any log file matching the pattern: "
@@ -487,22 +481,33 @@ public class LogsCLI extends Configured implements Tool {
   out.println(containerString);
   out.println(StringUtils.repeat("=", containerString.length()));
   boolean foundAnyLogs = false;
+  byte[] buffer = new byte[65536];
   for (String logFile : newOptions.getLogTypes()) {
 out.println("LogType:" + logFile);
 out.println("Log Upload Time:"
 + Times.format(System.currentTimeMillis()));
 out.println("Log Contents:");
+InputStream is = null;
 try {
-  WebResource webResource =
-  webServiceClient.resource(WebAppUtils.getHttpSchemePrefix(conf)
-  + nodeHttpAddress);
-  ClientResponse response =
-  webResource.path("ws").path("v1").path("node")
-.path("containers").path(containerIdStr).path("logs")
-.path(logFile)
-.queryParam("size", Long.toString(request.getBytes()))
-.accept(MediaType.TEXT_PLAIN).get(ClientResponse.class);
-  out.println(response.getEntity(String.class));
+  ClientResponse response = getResponeFromNMWebService(conf,
+  webServiceClient, request, logFile);
+  if (response != null && response.getStatusInfo().getStatusCode() ==
+  ClientResponse.Status.OK.getStatusCode()) {
+is = response.getEntityInputStream();
+int len = 0;
+while((len = is.read(buffer)) != -1) {
+  out.write(buffer, 0, len);
+}
+out.println();
+  } else {
+out.println("Can not get any logs for the log file: " + logFile);
+String msg = 

[23/50] [abbrv] hadoop git commit: HDFS-10747. o.a.h.hdfs.tools.DebugAdmin usage message is misleading. (Contributed by Mingliang Liu)

2016-08-17 Thread subru
HDFS-10747. o.a.h.hdfs.tools.DebugAdmin usage message is misleading. 
(Contributed by Mingliang Liu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef55fe17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef55fe17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef55fe17

Branch: refs/heads/YARN-2915
Commit: ef55fe171691446d38e6a14e92c1fd4d3d0c64c5
Parents: 382d615
Author: Mingliang Liu 
Authored: Mon Aug 15 20:23:47 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Aug 15 20:23:47 2016 -0700

--
 .../src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java   | 4 ++--
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md| 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef55fe17/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
index d179a5c..a2b91ab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
@@ -86,7 +86,7 @@ public class DebugAdmin extends Configured implements Tool {
   private class VerifyBlockChecksumCommand extends DebugCommand {
 VerifyBlockChecksumCommand() {
   super("verify",
-"verify [-meta ] [-block ]",
+"verify -meta  [-block ]",
 "  Verify HDFS metadata and block files.  If a block file is specified, we\n" +
 "  will verify that the checksums in the metadata file match the block\n" +
 "  file.");
@@ -200,7 +200,7 @@ public class DebugAdmin extends Configured implements Tool {
   private class RecoverLeaseCommand extends DebugCommand {
 RecoverLeaseCommand() {
   super("recoverLease",
-"recoverLease [-path ] [-retries ]",
+"recoverLease -path  [-retries ]",
 "  Recover the lease on the specified path.  The path must reside on an\n" +
 "  HDFS filesystem.  The default number of retries is 1.");
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef55fe17/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index cbc293f..22886d3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -604,7 +604,7 @@ Useful commands to help administrators debug HDFS issues, 
like validating block
 
 ### `verify`
 
-Usage: `hdfs debug verify [-meta <metadata-file>] [-block <block-file>]`
+Usage: `hdfs debug verify -meta <metadata-file> [-block <block-file>]`
 
 | COMMAND\_OPTION | Description |
 |: |: |
@@ -615,7 +615,7 @@ Verify HDFS metadata and block files. If a block file is 
specified, we will veri
 
 ### `recoverLease`
 
-Usage: `hdfs debug recoverLease [-path <path>] [-retries <num-retries>]`
+Usage: `hdfs debug recoverLease -path <path> [-retries <num-retries>]`
 
 | COMMAND\_OPTION | Description |
 |: |: |
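
With the corrected synopsis, -meta (for verify) and -path (for recoverLease)
are required rather than optional. For example (the paths below are
hypothetical):

    hdfs debug verify -meta /data/dn/current/BP-1/current/finalized/subdir0/subdir0/blk_1073741825_1001.meta
    hdfs debug recoverLease -path /user/alice/open-file.txt -retries 3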





[19/50] [abbrv] hadoop git commit: HDFS-10559. DiskBalancer: Use SHA1 for Plan ID. Contributed by Xiaobing Zhou.

2016-08-17 Thread subru
HDFS-10559. DiskBalancer: Use SHA1 for Plan ID. Contributed by Xiaobing Zhou.
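
A plan ID of this form can be produced with nothing but the JDK -- a hex
SHA-1 digest of the plan's JSON string (an illustrative sketch, not the
exact code in the patch):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    /** Sketch: derive a 40-hex-character SHA-1 plan ID from the plan text. */
    static String planId(String planJson) throws NoSuchAlgorithmException {
      MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
      byte[] digest = sha1.digest(planJson.getBytes(StandardCharsets.UTF_8));
      StringBuilder hex = new StringBuilder();
      for (byte b : digest) {
        hex.append(String.format("%02x", b));  // two hex chars per byte
      }
      return hex.toString();  // SHA-1 => 160 bits => 40 hex characters
    }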


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5628b36c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5628b36c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5628b36c

Branch: refs/heads/YARN-2915
Commit: 5628b36c0872d58c9b25f23da3dab4eafad9bca3
Parents: 02abd13
Author: Anu Engineer 
Authored: Mon Aug 15 20:10:21 2016 -0700
Committer: Anu Engineer 
Committed: Mon Aug 15 20:10:21 2016 -0700

--
 .../hadoop/hdfs/protocol/ClientDatanodeProtocol.java  |  2 +-
 .../ClientDatanodeProtocolTranslatorPB.java   |  2 +-
 .../src/main/proto/ClientDatanodeProtocol.proto   |  2 +-
 .../hadoop/hdfs/server/datanode/DiskBalancer.java | 14 +++---
 .../server/diskbalancer/command/CancelCommand.java|  2 +-
 .../server/diskbalancer/command/ExecuteCommand.java   |  2 +-
 .../hdfs/server/diskbalancer/TestDiskBalancer.java|  4 ++--
 .../hdfs/server/diskbalancer/TestDiskBalancerRPC.java |  2 +-
 .../diskbalancer/TestDiskBalancerWithMockMover.java   |  8 
 9 files changed, 19 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5628b36c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
index 477d308..10041f5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
@@ -175,7 +175,7 @@ public interface ClientDatanodeProtocol {
   /**
* Cancel an executing plan.
*
-   * @param planID - A SHA512 hash of the plan string.
+   * @param planID - A SHA-1 hash of the plan string.
*/
   void cancelDiskBalancePlan(String planID) throws IOException;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5628b36c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
index 045ccd5..0cf006c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
@@ -369,7 +369,7 @@ public class ClientDatanodeProtocolTranslatorPB implements
   /**
* Cancels an executing disk balancer plan.
*
-   * @param planID - A SHA512 hash of the plan string.
+   * @param planID - A SHA-1 hash of the plan string.
* @throws IOException on error
*/
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5628b36c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
index 11d04af..e4333cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
@@ -154,7 +154,7 @@ message GetBalancerBandwidthResponseProto {
  * balancer plan to a data node.
  */
 message SubmitDiskBalancerPlanRequestProto {
-  required string planID = 1; // A hash of the plan like SHA512
+  required string planID = 1; // A hash of the plan like SHA-1
   required string plan = 2;   // Plan file data in Json format
   optional uint64 planVersion = 3;// Plan version number
   optional bool ignoreDateCheck = 4;  // Ignore date checks on this plan.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5628b36c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
--
diff --git 

[45/50] [abbrv] hadoop git commit: YARN-3672. Create Facade for Federation State and Policy Store. Contributed by Subru Krishnan

2016-08-17 Thread subru
YARN-3672. Create Facade for Federation State and Policy Store. Contributed by 
Subru Krishnan
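
The two new knobs added below are meant to be read together: the facade
instantiates the configured state-store implementation and caches its reads
for the configured TTL (the JCache API and Ehcache entries added to
hadoop-project/pom.xml back that cache). A sketch of consuming the keys,
using only the constants this patch adds (the facade's own API is not shown
in this truncated diff):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    Configuration conf = new YarnConfiguration();
    String storeClass = conf.get(
        YarnConfiguration.FEDERATION_STATESTORE_CLIENT_CLASS,
        YarnConfiguration.DEFAULT_FEDERATION_STATESTORE_CLIENT_CLASS);
    int cacheTtlSecs = conf.getInt(
        YarnConfiguration.FEDERATION_CACHE_TIME_TO_LIVE_SECS,
        YarnConfiguration.DEFAULT_FEDERATION_CACHE_TIME_TO_LIVE_SECS);
    // storeClass names the state-store implementation to load reflectively;
    // cacheTtlSecs bounds how long cached store reads live before refresh.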


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/267f3f22
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/267f3f22
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/267f3f22

Branch: refs/heads/YARN-2915
Commit: 267f3f22c3fd14a785233931c9de9d7d06bdd161
Parents: bb4369d
Author: Jian He 
Authored: Wed Aug 17 11:13:19 2016 +0800
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 hadoop-project/pom.xml  |  14 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  13 +
 .../yarn/conf/TestYarnConfigurationFields.java  |   4 +
 .../src/main/resources/yarn-default.xml |  20 +-
 .../hadoop-yarn-server-common/pom.xml   |  10 +
 .../utils/FederationStateStoreFacade.java   | 532 +++
 .../server/federation/utils/package-info.java   |  17 +
 .../utils/FederationStateStoreTestUtil.java | 149 ++
 .../utils/TestFederationStateStoreFacade.java   | 148 ++
 9 files changed, 906 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/267f3f22/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index dee79f7..257fd28 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -89,6 +89,10 @@
<tomcat.version>6.0.44</tomcat.version>
<guice.version>4.0</guice.version>

+<jcache.version>1.0.0</jcache.version>
+<ehcache.version>3.0.3</ehcache.version>
+
+

<javac.version>1.8</javac.version>

@@ -1151,6 +1155,16 @@
  <artifactId>kerb-simplekdc</artifactId>
  <version>1.0.0-RC2</version>
  </dependency>
+<dependency>
+  <groupId>javax.cache</groupId>
+  <artifactId>cache-api</artifactId>
+  <version>${jcache.version}</version>
+</dependency>
+<dependency>
+  <groupId>org.ehcache</groupId>
+  <artifactId>ehcache</artifactId>
+  <version>${ehcache.version}</version>
+</dependency>
  </dependencies>
  </dependencyManagement>
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/267f3f22/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 4b9793c..272d14c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2460,6 +2460,19 @@ public class YarnConfiguration extends Configuration {
   
 
   public static final String FEDERATION_PREFIX = YARN_PREFIX + "federation.";
+
+  public static final String FEDERATION_STATESTORE_CLIENT_CLASS =
+  FEDERATION_PREFIX + "state-store.class";
+
+  public static final String DEFAULT_FEDERATION_STATESTORE_CLIENT_CLASS =
+  
"org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore";
+
+  public static final String FEDERATION_CACHE_TIME_TO_LIVE_SECS =
+  FEDERATION_PREFIX + "cache-ttl.secs";
+
+  // 5 minutes
+  public static final int DEFAULT_FEDERATION_CACHE_TIME_TO_LIVE_SECS = 5 * 60;
+
   public static final String FEDERATION_MACHINE_LIST =
   FEDERATION_PREFIX + "machine-list";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/267f3f22/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
index 668821d..000f5de 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -92,6 +92,10 @@ public class TestYarnConfigurationFields extends 
TestConfigurationFieldsBase {
 .add(YarnConfiguration.DEFAULT_AMRM_PROXY_INTERCEPTOR_CLASS_PIPELINE);
 
configurationPropsToSkipCompare.add(YarnConfiguration.CURATOR_LEADER_ELECTOR);
 
+// Federation default configs to be ignored
+configurationPropsToSkipCompare
+.add(YarnConfiguration.DEFAULT_FEDERATION_STATESTORE_CLIENT_CLASS);
+
 // Ignore blacklisting nodes for AM failures feature since it is still a
 // "work in progress"
 configurationPropsToSkipCompare.add(YarnConfiguration.


[39/50] [abbrv] hadoop git commit: YARN-5467. InputValidator for the FederationStateStore internal APIs. (Giovanni Matteo Fumarola via Subru)

2016-08-17 Thread subru
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8abb0a8c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/utils/TestFederationStateStoreInputValidator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/utils/TestFederationStateStoreInputValidator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/utils/TestFederationStateStoreInputValidator.java
new file mode 100644
index 000..13175ae
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/utils/TestFederationStateStoreInputValidator.java
@@ -0,0 +1,1265 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store.utils;
+
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster;
+import org.apache.hadoop.yarn.server.federation.store.records.DeleteApplicationHomeSubClusterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SetSubClusterPolicyConfigurationRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterHeartbeatRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState;
+import org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterRequest;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Unit tests for FederationApplicationInputValidator,
+ * FederationMembershipInputValidator, and FederationPolicyInputValidator.
+ */
+public class TestFederationStateStoreInputValidator {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestFederationStateStoreInputValidator.class);
+
+  private static SubClusterId subClusterId;
+  private static String amRMServiceAddress;
+  private static String clientRMServiceAddress;
+  private static String rmAdminServiceAddress;
+  private static String rmWebServiceAddress;
+  private static int lastHeartBeat;
+  private static SubClusterState stateNew;
+  private static SubClusterState stateLost;
+  private static ApplicationId appId;
+  private static int lastStartTime;
+  private static String capability;
+  private static String queue;
+  private static String type;
+  private static ByteBuffer params;
+
+  private static SubClusterId subClusterIdInvalid;
+  private static SubClusterId subClusterIdNull;
+
+  private static int lastHeartBeatNegative;
+  private static int lastStartTimeNegative;
+
+  private static SubClusterState stateNull;
+  private static ApplicationId appIdNull;
+
+  private static String capabilityNull;
+  private static String capabilityEmpty;
+
+  private static String addressNull;
+  private static String addressEmpty;
+  private static String addressWrong;
+  private static String addressWrongPort;
+
+  private static String queueEmpty;
+  private static String 

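The diff is truncated here, but the validation pattern this test exercises is easy to picture: null and malformed fields are rejected before a request reaches the state store. A minimal sketch under stated assumptions; only the record types come from this patch series, while the validator class, method name, and use of IllegalArgumentException are illustrative, not the committed API:

import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterRequest;

public final class RegisterRequestValidatorSketch {

  private RegisterRequestValidatorSketch() {
  }

  // Reject null or structurally invalid registration requests before they
  // reach the state store.
  public static void validate(SubClusterRegisterRequest request) {
    if (request == null) {
      throw new IllegalArgumentException("Missing SubClusterRegisterRequest");
    }
    SubClusterInfo info = request.getSubClusterInfo();
    if (info == null) {
      throw new IllegalArgumentException("Missing SubCluster information");
    }
    if (info.getSubClusterId() == null) {
      throw new IllegalArgumentException("Missing SubClusterId");
    }
    // Endpoint addresses must be non-empty host:port pairs.
    String amRM = info.getAMRMServiceAddress();
    if (amRM == null || amRM.isEmpty() || amRM.split(":").length != 2) {
      throw new IllegalArgumentException("Invalid AMRM address: " + amRM);
    }
  }
}
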
[34/50] [abbrv] hadoop git commit: HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by Mingliang Liu)

2016-08-17 Thread subru
HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by Mingliang Liu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/23532716
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/23532716
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/23532716

Branch: refs/heads/YARN-2915
Commit: 23532716fcd3f7e5e20b8f9fc66188041638510a
Parents: 3808876
Author: Mingliang Liu 
Authored: Tue Aug 16 16:30:43 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Aug 16 17:33:04 2016 -0700

--
 .../apache/hadoop/test/GenericTestUtils.java| 31 --
 .../hadoop/test/TestGenericTestUtils.java   | 44 
 2 files changed, 63 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/23532716/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index 116a111..0b73cf5 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -42,10 +42,12 @@ import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Time;
+import org.apache.log4j.Appender;
 import org.apache.log4j.Layout;
 import org.apache.log4j.Level;
 import org.apache.log4j.LogManager;
 import org.apache.log4j.Logger;
+import org.apache.log4j.PatternLayout;
 import org.apache.log4j.WriterAppender;
 import org.junit.Assert;
 import org.junit.Assume;
@@ -275,36 +277,41 @@ public abstract class GenericTestUtils {
 private StringWriter sw = new StringWriter();
 private WriterAppender appender;
 private Logger logger;
-
+
 public static LogCapturer captureLogs(Log l) {
   Logger logger = ((Log4JLogger)l).getLogger();
-  LogCapturer c = new LogCapturer(logger);
-  return c;
+  return new LogCapturer(logger);
+}
+
+public static LogCapturer captureLogs(org.slf4j.Logger logger) {
+  return new LogCapturer(toLog4j(logger));
 }
-
 
 private LogCapturer(Logger logger) {
   this.logger = logger;
-  Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout();
-  WriterAppender wa = new WriterAppender(layout, sw);
-  logger.addAppender(wa);
+  Appender defaultAppender = Logger.getRootLogger().getAppender("stdout");
+  if (defaultAppender == null) {
+defaultAppender = Logger.getRootLogger().getAppender("console");
+  }
+  final Layout layout = (defaultAppender == null) ? new PatternLayout() :
+  defaultAppender.getLayout();
+  this.appender = new WriterAppender(layout, sw);
+  logger.addAppender(this.appender);
 }
-
+
 public String getOutput() {
   return sw.toString();
 }
-
+
 public void stopCapturing() {
   logger.removeAppender(appender);
-
 }
 
 public void clearOutput() {
   sw.getBuffer().setLength(0);
 }
   }
-  
-  
+
   /**
* Mockito answer helper that triggers one latch as soon as the
* method is called, then waits on another before continuing.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23532716/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
index 8a7b5f6..86df5d5 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
@@ -18,8 +18,16 @@
 
 package org.apache.hadoop.test;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
 import org.junit.Test;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.junit.Assert.assertTrue;
+
 public class TestGenericTestUtils extends GenericTestUtils {
 
   @Test
@@ -75,4 +83,40 @@ public class TestGenericTestUtils extends GenericTestUtils {
 }
   }
 
+  @Test(timeout = 10000)
+  public void testLogCapturer() {
+final Log log = LogFactory.getLog(TestGenericTestUtils.class);
+

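The flakiness being fixed is visible in the hunk above: the old LogCapturer constructor dereferenced Logger.getRootLogger().getAppender("stdout") unconditionally, which throws a NullPointerException whenever the root logger has no appender by that name; the new code falls back to a "console" appender and finally to a plain PatternLayout. Usage is unchanged. A minimal sketch built only from the methods visible in this diff:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.test.GenericTestUtils.LogCapturer;

public class LogCapturerUsage {
  private static final Log LOG = LogFactory.getLog(LogCapturerUsage.class);

  public static void main(String[] args) {
    // Attaches a WriterAppender to the log4j logger behind LOG.
    LogCapturer capturer = LogCapturer.captureLogs(LOG);
    try {
      LOG.info("hello from the capturer");
      // Everything logged since captureLogs() is buffered in memory.
      System.out.println(
          capturer.getOutput().contains("hello from the capturer"));
      capturer.clearOutput();
      System.out.println(capturer.getOutput().isEmpty());
    } finally {
      // Detach the appender so later logging is unaffected.
      capturer.stopCapturing();
    }
  }
}
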
[48/50] [abbrv] hadoop git commit: YARN-5408. Compose Federation membership/application/policy APIs into an uber FederationStateStore API. (Ellen Hui via Subru).

2016-08-17 Thread subru
YARN-5408. Compose Federation membership/application/policy APIs into an uber FederationStateStore API. (Ellen Hui via Subru).


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e46c7ea7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e46c7ea7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e46c7ea7

Branch: refs/heads/YARN-2915
Commit: e46c7ea718916eb6a35567a2c6f4ab7bad93824c
Parents: bfdb58a
Author: Subru Krishnan 
Authored: Mon Aug 8 14:53:38 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 ...ederationApplicationHomeSubClusterStore.java | 18 ++
 .../store/FederationMembershipStateStore.java   | 14 +
 .../federation/store/FederationStateStore.java  | 64 
 .../store/impl/MemoryFederationStateStore.java  | 19 --
 .../impl/FederationStateStoreBaseTest.java  | 57 +
 .../impl/TestMemoryFederationStateStore.java| 21 +--
 6 files changed, 99 insertions(+), 94 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e46c7ea7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
index 217ee2e..22bb88a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
@@ -30,7 +30,6 @@ import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHom
 import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHomeSubClusterResponse;
 import org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterRequest;
 import org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterResponse;
-import org.apache.hadoop.yarn.server.records.Version;
 
 /**
  * FederationApplicationHomeSubClusterStore maintains the state of all
@@ -50,15 +49,6 @@ import org.apache.hadoop.yarn.server.records.Version;
 public interface FederationApplicationHomeSubClusterStore {
 
   /**
-   * Get the {@link Version} of the underlying federation application state
-   * store.
-   *
-   * @return the {@link Version} of the underlying federation application state
-   * store
-   */
-  Version getApplicationStateStoreVersion();
-
-  /**
* Register the home {@code SubClusterId} of the newly submitted
* {@code ApplicationId}. Currently response is empty if the operation was
* successful, if not an exception reporting reason for a failure.
@@ -91,16 +81,16 @@ public interface FederationApplicationHomeSubClusterStore {
* {@code ApplicationId}.
*
* @param request contains the application queried
-   * @return {@code ApplicationHomeSubCluster} containing the application's
-   * home subcluster
+   * @return {@code ApplicationHomeSubCluster} containing the application's home
+   * subcluster
* @throws YarnException if the request is invalid/fails
*/
   GetApplicationHomeSubClusterResponse getApplicationHomeSubClusterMap(
   GetApplicationHomeSubClusterRequest request) throws YarnException;
 
   /**
-   * Get the {@code ApplicationHomeSubCluster} list representing the mapping
-   * of all submitted applications to it's home sub-cluster.
+   * Get the {@code ApplicationHomeSubCluster} list representing the mapping of
+   * all submitted applications to it's home sub-cluster.
*
* @param request empty representing all applications
* @return the mapping of all submitted application to it's home sub-cluster

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e46c7ea7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationMembershipStateStore.java
--
diff --git 

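The diff is cut off above, but the commit summary and diffstat describe the shape of the new uber interface: FederationStateStore.java simply composes the three per-concern stores so callers deal with one API. A plausible sketch under stated assumptions; the extends list follows the commit summary, while the policy-store interface name and the lifecycle hooks below are guesses for illustration:

package org.apache.hadoop.yarn.server.federation.store;

import org.apache.hadoop.conf.Configuration;

// Sketch only: not the committed file, just the composition the summary
// describes.
public interface FederationStateStore
    extends FederationApplicationHomeSubClusterStore,
    FederationMembershipStateStore, FederationPolicyStore {

  // Assumed lifecycle hook: initialize the backing store from configuration.
  void init(Configuration conf) throws Exception;

  // Assumed lifecycle hook: release resources held by the store.
  void close() throws Exception;
}
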
[38/50] [abbrv] hadoop git commit: YARN-5307. Federation Application State Store internal APIs

2016-08-17 Thread subru
YARN-5307. Federation Application State Store internal APIs


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/12b495ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/12b495ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/12b495ea

Branch: refs/heads/YARN-2915
Commit: 12b495ea014edb9dc55a6e45843a53c2767ac8d9
Parents: aa098e2
Author: Subru Krishnan 
Authored: Fri Aug 5 11:52:44 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 ...ederationApplicationHomeSubClusterStore.java | 126 
 .../AddApplicationHomeSubClusterRequest.java|  72 +++
 .../AddApplicationHomeSubClusterResponse.java   |  44 +
 .../records/ApplicationHomeSubCluster.java  | 124 
 .../DeleteApplicationHomeSubClusterRequest.java |  65 +++
 ...DeleteApplicationHomeSubClusterResponse.java |  43 +
 .../GetApplicationHomeSubClusterRequest.java|  64 +++
 .../GetApplicationHomeSubClusterResponse.java   |  73 +++
 .../GetApplicationsHomeSubClusterRequest.java   |  40 
 .../GetApplicationsHomeSubClusterResponse.java  |  75 
 .../UpdateApplicationHomeSubClusterRequest.java |  74 
 ...UpdateApplicationHomeSubClusterResponse.java |  43 +
 ...dApplicationHomeSubClusterRequestPBImpl.java | 132 +
 ...ApplicationHomeSubClusterResponsePBImpl.java |  78 
 .../pb/ApplicationHomeSubClusterPBImpl.java | 167 
 ...eApplicationHomeSubClusterRequestPBImpl.java | 130 +
 ...ApplicationHomeSubClusterResponsePBImpl.java |  78 
 ...tApplicationHomeSubClusterRequestPBImpl.java | 135 +
 ...ApplicationHomeSubClusterResponsePBImpl.java | 132 +
 ...ApplicationsHomeSubClusterRequestPBImpl.java |  78 
 ...pplicationsHomeSubClusterResponsePBImpl.java | 190 +++
 .../pb/GetSubClustersInfoResponsePBImpl.java|   6 +-
 ...eApplicationHomeSubClusterRequestPBImpl.java | 132 +
 ...ApplicationHomeSubClusterResponsePBImpl.java |  78 
 .../proto/yarn_server_federation_protos.proto   |  45 -
 .../records/TestFederationProtocolRecords.java  |  81 
 26 files changed, 2301 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/12b495ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
new file mode 100644
index 0000000..217ee2e
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
@@ -0,0 +1,126 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterResponse;
+import org.apache.hadoop.yarn.server.federation.store.records.DeleteApplicationHomeSubClusterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.DeleteApplicationHomeSubClusterResponse;
+import 

[46/50] [abbrv] hadoop git commit: YARN-5519. Add SubClusterId in AddApplicationHomeSubClusterResponse for Router Failover. (Ellen Hui via Subru)

2016-08-17 Thread subru
YARN-5519. Add SubClusterId in AddApplicationHomeSubClusterResponse for Router Failover. (Ellen Hui via Subru)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bb4369d7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bb4369d7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bb4369d7

Branch: refs/heads/YARN-2915
Commit: bb4369d7504caa58c646d6b693c1217eca31fa7d
Parents: 15379b8
Author: Subru Krishnan 
Authored: Mon Aug 15 14:47:02 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 ...ederationApplicationHomeSubClusterStore.java | 21 +++---
 .../store/impl/MemoryFederationStateStore.java  | 22 +++---
 .../AddApplicationHomeSubClusterResponse.java   | 29 ++--
 ...ApplicationHomeSubClusterResponsePBImpl.java | 39 +++
 .../proto/yarn_server_federation_protos.proto   |  1 +
 .../impl/FederationStateStoreBaseTest.java  | 71 +---
 6 files changed, 120 insertions(+), 63 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb4369d7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
index 22bb88a..ace2457 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationApplicationHomeSubClusterStore.java
@@ -51,15 +51,20 @@ public interface FederationApplicationHomeSubClusterStore {
   /**
* Register the home {@code SubClusterId} of the newly submitted
* {@code ApplicationId}. Currently response is empty if the operation was
-   * successful, if not an exception reporting reason for a failure.
+   * successful, if not an exception reporting reason for a failure. If a
+   * mapping for the application already existed, the {@code SubClusterId} in
+   * this response will return the existing mapping which might be different
+   * from that in the {@code AddApplicationHomeSubClusterRequest}.
*
* @param request the request to register a new application with its home
*  sub-cluster
-   * @return empty on successful registration of the application in the
-   * StateStore, if not an exception reporting reason for a failure
+   * @return upon successful registration of the application in the StateStore,
+   * {@code AddApplicationHomeSubClusterRequest} containing the home
+   * sub-cluster of the application. Otherwise, an exception reporting
+   * reason for a failure
* @throws YarnException if the request is invalid/fails
*/
-  AddApplicationHomeSubClusterResponse addApplicationHomeSubClusterMap(
+  AddApplicationHomeSubClusterResponse addApplicationHomeSubCluster(
   AddApplicationHomeSubClusterRequest request) throws YarnException;
 
   /**
@@ -73,7 +78,7 @@ public interface FederationApplicationHomeSubClusterStore {
* not an exception reporting reason for a failure
* @throws YarnException if the request is invalid/fails
*/
-  UpdateApplicationHomeSubClusterResponse updateApplicationHomeSubClusterMap(
+  UpdateApplicationHomeSubClusterResponse updateApplicationHomeSubCluster(
   UpdateApplicationHomeSubClusterRequest request) throws YarnException;
 
   /**
@@ -85,7 +90,7 @@ public interface FederationApplicationHomeSubClusterStore {
* subcluster
* @throws YarnException if the request is invalid/fails
*/
-  GetApplicationHomeSubClusterResponse getApplicationHomeSubClusterMap(
+  GetApplicationHomeSubClusterResponse getApplicationHomeSubCluster(
   GetApplicationHomeSubClusterRequest request) throws YarnException;
 
   /**
@@ -96,7 +101,7 @@ public interface FederationApplicationHomeSubClusterStore {
* @return the mapping of all submitted application to it's home sub-cluster
* @throws YarnException if the request is invalid/fails
*/
-  GetApplicationsHomeSubClusterResponse getApplicationsHomeSubClusterMap(
+  GetApplicationsHomeSubClusterResponse getApplicationsHomeSubCluster(
   

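The behavioral point of this change is in the Javadoc above: on a duplicate registration, the store keeps the original mapping and returns it in the response, which is what lets a failed-over Router discover where an application already lives. A sketch of that pattern, assuming newInstance() factories and a getHomeSubCluster() accessor in the style of these records; the surrounding router logic is illustrative, not from the patch:

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.server.federation.store.FederationApplicationHomeSubClusterStore;
import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest;
import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterResponse;
import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;

public class RouterPlacementSketch {

  // Register appId's home sub-cluster, deferring to any mapping that an
  // earlier Router instance already persisted.
  public static SubClusterId registerHome(
      FederationApplicationHomeSubClusterStore store, ApplicationId appId,
      SubClusterId preferred) throws YarnException {
    AddApplicationHomeSubClusterResponse response =
        store.addApplicationHomeSubCluster(
            AddApplicationHomeSubClusterRequest.newInstance(
                ApplicationHomeSubCluster.newInstance(appId, preferred)));
    // On a duplicate registration the store returns the existing mapping,
    // which may differ from the one we asked for.
    return response.getHomeSubCluster();
  }
}
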
[43/50] [abbrv] hadoop git commit: YARN-3662. Federation Membership State Store internal APIs.

2016-08-17 Thread subru
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0130c857/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetSubClusterInfoRequestPBImpl.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetSubClusterInfoRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetSubClusterInfoRequestPBImpl.java
new file mode 100644
index 0000000..c61c419
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetSubClusterInfoRequestPBImpl.java
@@ -0,0 +1,125 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store.records.impl.pb;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.GetSubClusterInfoRequestProto;
+import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.GetSubClusterInfoRequestProtoOrBuilder;
+import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.SubClusterIdProto;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+
+import com.google.protobuf.TextFormat;
+
+/**
+ * Protocol buffer based implementation of {@link GetSubClusterInfoRequest}.
+ */
+@Private
+@Unstable
+public class GetSubClusterInfoRequestPBImpl extends GetSubClusterInfoRequest {
+
+  private GetSubClusterInfoRequestProto proto =
+  GetSubClusterInfoRequestProto.getDefaultInstance();
+  private GetSubClusterInfoRequestProto.Builder builder = null;
+  private boolean viaProto = false;
+
+  public GetSubClusterInfoRequestPBImpl() {
+builder = GetSubClusterInfoRequestProto.newBuilder();
+  }
+
+  public GetSubClusterInfoRequestPBImpl(GetSubClusterInfoRequestProto proto) {
+this.proto = proto;
+viaProto = true;
+  }
+
+  public GetSubClusterInfoRequestProto getProto() {
+mergeLocalToProto();
+proto = viaProto ? proto : builder.build();
+viaProto = true;
+return proto;
+  }
+
+  private void mergeLocalToProto() {
+if (viaProto) {
+  maybeInitBuilder();
+}
+mergeLocalToBuilder();
+proto = builder.build();
+viaProto = true;
+  }
+
+  private void maybeInitBuilder() {
+if (viaProto || builder == null) {
+  builder = GetSubClusterInfoRequestProto.newBuilder(proto);
+}
+viaProto = false;
+  }
+
+  private void mergeLocalToBuilder() {
+  }
+
+  @Override
+  public int hashCode() {
+return getProto().hashCode();
+  }
+
+  @Override
+  public boolean equals(Object other) {
+if (other == null) {
+  return false;
+}
+if (other.getClass().isAssignableFrom(this.getClass())) {
+  return this.getProto().equals(this.getClass().cast(other).getProto());
+}
+return false;
+  }
+
+  @Override
+  public String toString() {
+return TextFormat.shortDebugString(getProto());
+  }
+
+  @Override
+  public SubClusterId getSubClusterId() {
+GetSubClusterInfoRequestProtoOrBuilder p = viaProto ? proto : builder;
+if (!p.hasSubClusterId()) {
+  return null;
+}
+return convertFromProtoFormat(p.getSubClusterId());
+  }
+
+  @Override
+  public void setSubClusterId(SubClusterId subClusterId) {
+maybeInitBuilder();
+if (subClusterId == null) {
+  builder.clearSubClusterId();
+  return;
+}
+builder.setSubClusterId(convertToProtoFormat(subClusterId));
+  }
+
+  private SubClusterId convertFromProtoFormat(SubClusterIdProto sc) {
+return new SubClusterIdPBImpl(sc);
+  }
+
+  private SubClusterIdProto convertToProtoFormat(SubClusterId sc) {
+return ((SubClusterIdPBImpl) 

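This PBImpl follows the usual YARN proto-backed record pattern: an immutable proto, a lazily created builder, and a viaProto flag that records which of the two currently holds the truth. From a caller's point of view it behaves as sketched below; SubClusterId.newInstance(String) is an assumed factory in the style of these records:

import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.GetSubClusterInfoRequestProto;
import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoRequest;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.GetSubClusterInfoRequestPBImpl;

public class ViaProtoSketch {
  public static void main(String[] args) {
    GetSubClusterInfoRequestPBImpl request =
        new GetSubClusterInfoRequestPBImpl();
    // setSubClusterId() calls maybeInitBuilder(), which copies the current
    // proto into a mutable builder and flips viaProto to false.
    request.setSubClusterId(SubClusterId.newInstance("SC-1"));
    // getProto() merges the builder back into an immutable proto and flips
    // viaProto to true; this is what goes on the wire.
    GetSubClusterInfoRequestProto proto = request.getProto();
    // Wrapping a received proto is cheap: no builder is allocated until the
    // first setter call forces maybeInitBuilder().
    GetSubClusterInfoRequest received =
        new GetSubClusterInfoRequestPBImpl(proto);
    System.out.println(received.getSubClusterId());
  }
}
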
[44/50] [abbrv] hadoop git commit: YARN-3662. Federation Membership State Store internal APIs.

2016-08-17 Thread subru
YARN-3662. Federation Membership State Store internal APIs.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0130c857
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0130c857
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0130c857

Branch: refs/heads/YARN-2915
Commit: 0130c857b12c9106a710d097b8338ca3376e2f8a
Parents: 084c757
Author: Subru Krishnan 
Authored: Fri Jul 29 16:53:40 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 .../hadoop/yarn/api/BasePBImplRecordsTest.java  | 263 ++
 .../hadoop/yarn/api/TestPBImplRecords.java  | 259 +-
 .../hadoop-yarn-server-common/pom.xml   |   8 +
 .../store/FederationMembershipStateStore.java   | 126 +
 .../server/federation/store/package-info.java   |  17 ++
 .../store/records/GetSubClusterInfoRequest.java |  62 +
 .../records/GetSubClusterInfoResponse.java  |  62 +
 .../records/GetSubClustersInfoRequest.java  |  66 +
 .../records/GetSubClustersInfoResponse.java |  66 +
 .../records/SubClusterDeregisterRequest.java|  89 +++
 .../records/SubClusterDeregisterResponse.java   |  42 +++
 .../records/SubClusterHeartbeatRequest.java | 149 +++
 .../records/SubClusterHeartbeatResponse.java|  45 
 .../federation/store/records/SubClusterId.java  | 100 +++
 .../store/records/SubClusterInfo.java   | 263 ++
 .../records/SubClusterRegisterRequest.java  |  74 +
 .../records/SubClusterRegisterResponse.java |  44 +++
 .../store/records/SubClusterState.java  |  60 +
 .../impl/pb/GetSubClusterInfoRequestPBImpl.java | 125 +
 .../pb/GetSubClusterInfoResponsePBImpl.java | 134 ++
 .../pb/GetSubClustersInfoRequestPBImpl.java | 108 
 .../pb/GetSubClustersInfoResponsePBImpl.java| 184 +
 .../pb/SubClusterDeregisterRequestPBImpl.java   | 156 +++
 .../pb/SubClusterDeregisterResponsePBImpl.java  |  77 ++
 .../pb/SubClusterHeartbeatRequestPBImpl.java| 192 +
 .../pb/SubClusterHeartbeatResponsePBImpl.java   |  77 ++
 .../records/impl/pb/SubClusterIdPBImpl.java |  75 ++
 .../records/impl/pb/SubClusterInfoPBImpl.java   | 267 +++
 .../pb/SubClusterRegisterRequestPBImpl.java | 134 ++
 .../pb/SubClusterRegisterResponsePBImpl.java|  77 ++
 .../store/records/impl/pb/package-info.java |  17 ++
 .../federation/store/records/package-info.java  |  17 ++
 .../proto/yarn_server_federation_protos.proto   |  93 +++
 .../records/TestFederationProtocolRecords.java  | 133 +
 34 files changed, 3409 insertions(+), 252 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0130c857/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java
new file mode 100644
index 0000000..98d8222
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java
@@ -0,0 +1,263 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.api;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.junit.Assert;
+
+import java.lang.reflect.*;
+import java.nio.ByteBuffer;
+import java.util.*;
+
+/**
+ * Generic helper class to validate protocol records.
+ */
+public class BasePBImplRecordsTest {
+  static final Log LOG = 

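BasePBImplRecordsTest generates record instances and compares them reflectively; the invariant it enforces is the proto round trip. Spelled out by hand for a single record type from this patch (the factory and the cast below are assumptions in the style of these records, not code from the commit):

import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.SubClusterIdProto;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.SubClusterIdPBImpl;
import org.junit.Assert;
import org.junit.Test;

public class SubClusterIdRoundTripSketch {
  @Test
  public void testProtoRoundTrip() {
    // Build a record, flatten it to its protobuf form, and rebuild it.
    SubClusterId original = SubClusterId.newInstance("SC-1");
    SubClusterIdProto proto = ((SubClusterIdPBImpl) original).getProto();
    SubClusterId restored = new SubClusterIdPBImpl(proto);
    // The record must survive the round trip unchanged.
    Assert.assertEquals(original, restored);
  }
}
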
[32/50] [abbrv] hadoop git commit: Revert "HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by Mingliang Liu)"

2016-08-17 Thread subru
Revert "HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by 
Mingliang Liu)"

This reverts commit 9336a0495f99cd3fbc7ecef452eb37cfbaf57440.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/27a6e09c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/27a6e09c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/27a6e09c

Branch: refs/heads/YARN-2915
Commit: 27a6e09c4e22b9b5fee4e8ced7321eed92d566a4
Parents: 4b689e7
Author: Mingliang Liu 
Authored: Tue Aug 16 16:25:37 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Aug 16 16:25:37 2016 -0700

--
 .../apache/hadoop/test/GenericTestUtils.java| 25 +--
 .../hadoop/test/TestGenericTestUtils.java   | 44 
 2 files changed, 13 insertions(+), 56 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/27a6e09c/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index 6b5135c..116a111 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -42,10 +42,10 @@ import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Time;
+import org.apache.log4j.Layout;
 import org.apache.log4j.Level;
 import org.apache.log4j.LogManager;
 import org.apache.log4j.Logger;
-import org.apache.log4j.PatternLayout;
 import org.apache.log4j.WriterAppender;
 import org.junit.Assert;
 import org.junit.Assume;
@@ -275,35 +275,36 @@ public abstract class GenericTestUtils {
 private StringWriter sw = new StringWriter();
 private WriterAppender appender;
 private Logger logger;
-
+
 public static LogCapturer captureLogs(Log l) {
   Logger logger = ((Log4JLogger)l).getLogger();
-  return new LogCapturer(logger);
-}
-
-public static LogCapturer captureLogs(org.slf4j.Logger logger) {
-  return new LogCapturer(toLog4j(logger));
+  LogCapturer c = new LogCapturer(logger);
+  return c;
 }
+
 
 private LogCapturer(Logger logger) {
   this.logger = logger;
-  this.appender = new WriterAppender(new PatternLayout(), sw);
-  logger.addAppender(appender);
+  Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout();
+  WriterAppender wa = new WriterAppender(layout, sw);
+  logger.addAppender(wa);
 }
-
+
 public String getOutput() {
   return sw.toString();
 }
-
+
 public void stopCapturing() {
   logger.removeAppender(appender);
+
 }
 
 public void clearOutput() {
   sw.getBuffer().setLength(0);
 }
   }
-
+  
+  
   /**
* Mockito answer helper that triggers one latch as soon as the
* method is called, then waits on another before continuing.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/27a6e09c/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
index 86df5d5..8a7b5f6 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
@@ -18,16 +18,8 @@
 
 package org.apache.hadoop.test;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
 import org.junit.Test;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static org.junit.Assert.assertTrue;
-
 public class TestGenericTestUtils extends GenericTestUtils {
 
   @Test
@@ -83,40 +75,4 @@ public class TestGenericTestUtils extends GenericTestUtils {
 }
   }
 
-  @Test(timeout = 10000)
-  public void testLogCapturer() {
-final Log log = LogFactory.getLog(TestGenericTestUtils.class);
-LogCapturer logCapturer = LogCapturer.captureLogs(log);
-final String infoMessage = "info message";
-// test get output message
-log.info(infoMessage);
-assertTrue(logCapturer.getOutput().endsWith(
-String.format(infoMessage + "%n")));
-// test clear 

[28/50] [abbrv] hadoop git commit: YARN-5475. Fix test failure of TestAggregatedLogFormat#testReadAcontainerLogs1 (Jun Gong via Varun Saxena)

2016-08-17 Thread subru
YARN-5475. Fix test failure of TestAggregatedLogFormat#testReadAcontainerLogs1 (Jun Gong via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b427ce12
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b427ce12
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b427ce12

Branch: refs/heads/YARN-2915
Commit: b427ce12bcdd35e12b212c18430f0114fbbc1fea
Parents: ffe1fff
Author: Varun Saxena 
Authored: Tue Aug 16 20:24:53 2016 +0530
Committer: Varun Saxena 
Committed: Tue Aug 16 20:24:53 2016 +0530

--
 .../hadoop/yarn/logaggregation/TestAggregatedLogFormat.java   | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b427ce12/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
index 45dd8ab..8cbec10 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
@@ -140,7 +140,8 @@ public class TestAggregatedLogFormat {
 final int ch = filler;
 
 UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
-LogWriter logWriter = new LogWriter(conf, remoteAppLogFile, ugi);
+LogWriter logWriter = new LogWriter(new Configuration(), remoteAppLogFile,
+ugi);
 
 LogKey logKey = new LogKey(testContainerId);
 LogValue logValue =


[37/50] [abbrv] hadoop git commit: YARN-5307. Federation Application State Store internal APIs

2016-08-17 Thread subru
http://git-wip-us.apache.org/repos/asf/hadoop/blob/12b495ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetApplicationsHomeSubClusterResponsePBImpl.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetApplicationsHomeSubClusterResponsePBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetApplicationsHomeSubClusterResponsePBImpl.java
new file mode 100644
index 0000000..8b72a1e
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/GetApplicationsHomeSubClusterResponsePBImpl.java
@@ -0,0 +1,190 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store.records.impl.pb;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.ApplicationHomeSubClusterProto;
+import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.GetApplicationsHomeSubClusterResponseProto;
+import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.GetApplicationsHomeSubClusterResponseProtoOrBuilder;
+import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster;
+import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHomeSubClusterResponse;
+
+import com.google.protobuf.TextFormat;
+
+/**
+ * Protocol buffer based implementation of
+ * {@link GetApplicationsHomeSubClusterResponse}.
+ */
+@Private
+@Unstable
+public class GetApplicationsHomeSubClusterResponsePBImpl
+extends GetApplicationsHomeSubClusterResponse {
+
+  private GetApplicationsHomeSubClusterResponseProto proto =
+  GetApplicationsHomeSubClusterResponseProto.getDefaultInstance();
+  private GetApplicationsHomeSubClusterResponseProto.Builder builder = null;
+  private boolean viaProto = false;
+
+  private List<ApplicationHomeSubCluster> appsHomeSubCluster;
+
+  public GetApplicationsHomeSubClusterResponsePBImpl() {
+builder = GetApplicationsHomeSubClusterResponseProto.newBuilder();
+  }
+
+  public GetApplicationsHomeSubClusterResponsePBImpl(
+  GetApplicationsHomeSubClusterResponseProto proto) {
+this.proto = proto;
+viaProto = true;
+  }
+
+  public GetApplicationsHomeSubClusterResponseProto getProto() {
+mergeLocalToProto();
+proto = viaProto ? proto : builder.build();
+viaProto = true;
+return proto;
+  }
+
+  private void mergeLocalToProto() {
+if (viaProto) {
+  maybeInitBuilder();
+}
+mergeLocalToBuilder();
+proto = builder.build();
+viaProto = true;
+  }
+
+  private void maybeInitBuilder() {
+if (viaProto || builder == null) {
+  builder = GetApplicationsHomeSubClusterResponseProto.newBuilder(proto);
+}
+viaProto = false;
+  }
+
+  private void mergeLocalToBuilder() {
+if (this.appsHomeSubCluster != null) {
+  addSubClustersInfoToProto();
+}
+  }
+
+  @Override
+  public int hashCode() {
+return getProto().hashCode();
+  }
+
+  @Override
+  public boolean equals(Object other) {
+if (other == null) {
+  return false;
+}
+if (other.getClass().isAssignableFrom(this.getClass())) {
+  return this.getProto().equals(this.getClass().cast(other).getProto());
+}
+return false;
+  }
+
+  @Override
+  public String toString() {
+return TextFormat.shortDebugString(getProto());
+  }
+
+  @Override
+  public List<ApplicationHomeSubCluster> getAppsHomeSubClusters() {
+initSubClustersInfoList();
+return appsHomeSubCluster;
+  }
+
+  @Override
+  public void setAppsHomeSubClusters(
+      List<ApplicationHomeSubCluster> appsHomeSubClusters) {
+maybeInitBuilder();
+if 

[36/50] [abbrv] hadoop git commit: MAPREDUCE-6690. Limit the number of resources a single map reduce job can submit for localization. Contributed by Chris Trezzo

2016-08-17 Thread subru
MAPREDUCE-6690. Limit the number of resources a single map reduce job can submit for localization. Contributed by Chris Trezzo


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f80a7298
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f80a7298
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f80a7298

Branch: refs/heads/YARN-2915
Commit: f80a7298325a4626638ee24467e2012442e480d4
Parents: 7f05ff7
Author: Jason Lowe 
Authored: Wed Aug 17 16:21:20 2016 +0000
Committer: Jason Lowe 
Committed: Wed Aug 17 16:22:31 2016 +0000

--
 .../hadoop/mapreduce/JobResourceUploader.java   | 214 +--
 .../apache/hadoop/mapreduce/MRJobConfig.java|  28 ++
 .../ClientDistributedCacheManager.java  |  15 +-
 .../src/main/resources/mapred-default.xml   |  30 ++
 .../mapreduce/TestJobResourceUploader.java  | 355 +++
 .../apache/hadoop/mapreduce/v2/TestMRJobs.java  | 166 -
 6 files changed, 776 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f80a7298/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
--
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
index fa4dd86..15dbc13 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
@@ -21,12 +21,16 @@ import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
@@ -34,6 +38,8 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager;
 import org.apache.hadoop.mapreduce.filecache.DistributedCache;
 
+import com.google.common.annotations.VisibleForTesting;
+
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
 class JobResourceUploader {
@@ -86,31 +92,37 @@ class JobResourceUploader {
 FsPermission mapredSysPerms =
 new FsPermission(JobSubmissionFiles.JOB_DIR_PERMISSION);
 FileSystem.mkdirs(jtFs, submitJobDir, mapredSysPerms);
-// add all the command line files/ jars and archive
-// first copy them to jobtrackers filesystem
 
-uploadFiles(conf, submitJobDir, mapredSysPerms, replication);
-uploadLibJars(conf, submitJobDir, mapredSysPerms, replication);
-uploadArchives(conf, submitJobDir, mapredSysPerms, replication);
-uploadJobJar(job, submitJobDir, replication);
+Collection<String> files = conf.getStringCollection("tmpfiles");
+Collection<String> libjars = conf.getStringCollection("tmpjars");
+Collection<String> archives = conf.getStringCollection("tmparchives");
+String jobJar = job.getJar();
+
+Map<URI, FileStatus> statCache = new HashMap<URI, FileStatus>();
+checkLocalizationLimits(conf, files, libjars, archives, jobJar, statCache);
+
+uploadFiles(conf, files, submitJobDir, mapredSysPerms, replication);
+uploadLibJars(conf, libjars, submitJobDir, mapredSysPerms, replication);
+uploadArchives(conf, archives, submitJobDir, mapredSysPerms, replication);
+uploadJobJar(job, jobJar, submitJobDir, replication);
 addLog4jToDistributedCache(job, submitJobDir);
 
 // set the timestamps of the archives and files
 // set the public/private visibility of the archives and files
-ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(conf);
+ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(conf,
+statCache);
 // get DelegationToken for cached file
 ClientDistributedCacheManager.getDelegationTokens(conf,
 job.getCredentials());
   }
 
-  private void uploadFiles(Configuration conf, Path submitJobDir,
-  FsPermission 

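The message is truncated, but the idea is visible in the hunk above: job submission now collects everything queued for localization (tmpfiles, tmpjars, tmparchives, the job jar) and checks it against configurable limits before uploading anything. A sketch of how a client might set such limits; the three property names below are assumptions about the new keys, so check the mapred-default.xml hunk in the full diff for the exact names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class BoundedLocalizationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Fail submission if the job localizes more than 50 distinct resources.
    conf.setInt("mapreduce.job.cache.limit.max-resources", 50);
    // Cap the combined size of all localized resources at 1 GB.
    conf.setLong("mapreduce.job.cache.limit.max-resources-mb", 1024);
    // Cap any single localized resource at 256 MB.
    conf.setLong("mapreduce.job.cache.limit.max-single-resource-mb", 256);
    Job job = Job.getInstance(conf, "bounded-localization");
    System.out.println(job.getJobName());
  }
}
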
[35/50] [abbrv] hadoop git commit: YARN-5455. Update Javadocs for LinuxContainerExecutor. Contributed by Daniel Templeton.

2016-08-17 Thread subru
YARN-5455. Update Javadocs for LinuxContainerExecutor. Contributed by Daniel Templeton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7f05ff7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7f05ff7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7f05ff7a

Branch: refs/heads/YARN-2915
Commit: 7f05ff7a4e654693eaaa216ee5fc6e24112e0e23
Parents: 2353271
Author: Varun Vasudev 
Authored: Wed Aug 17 15:34:58 2016 +0530
Committer: Varun Vasudev 
Committed: Wed Aug 17 15:34:58 2016 +0530

--
 .../nodemanager/LinuxContainerExecutor.java |  80 ++-
 .../runtime/DefaultLinuxContainerRuntime.java   |  20 ++-
 .../DelegatingLinuxContainerRuntime.java|  10 ++
 .../runtime/DockerLinuxContainerRuntime.java| 143 ---
 .../linux/runtime/LinuxContainerRuntime.java|  10 +-
 .../runtime/ContainerRuntime.java   |  40 +-
 6 files changed, 267 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f05ff7a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
index 8f5ee6b..2c27f6c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
@@ -38,7 +38,9 @@ import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileg
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandler;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerModule;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.LinuxContainerRuntime;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException;
@@ -64,11 +66,36 @@ import java.util.regex.Pattern;
 
 import static org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.LinuxContainerRuntimeConstants.*;
 
-/** Container execution for Linux. Provides linux-specific localization
- * mechanisms, resource management via cgroups and can switch between multiple
- * container runtimes - e.g Standard "Process Tree", Docker etc
+/**
+ * This class provides {@link Container} execution using a native
+ * {@code container-executor} binary. By using a helper written it native code,
+ * this class is able to do several things that the
+ * {@link DefaultContainerExecutor} cannot, such as execution of applications
+ * as the applications' owners, provide localization that takes advantage of
+ * mapping the application owner to a UID on the execution host, resource
+ * management through Linux CGROUPS, and Docker support.
+ *
+ * If {@code hadoop.security.authetication} is set to {@code simple},
+ * then the
+ * {@code yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}
+ * property will determine whether the {@code LinuxContainerExecutor} runs
+ * processes as the application owner or as the default user, as set in the
+ * {@code yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user}
+ * property.
+ *
+ * The {@code LinuxContainerExecutor} will manage applications through an
+ * appropriate {@link LinuxContainerRuntime} instance. This class uses a
+ * {@link DelegatingLinuxContainerRuntime} instance, which will delegate calls
+ * to either a {@link DefaultLinuxContainerRuntime} instance or a
+ * {@link DockerLinuxContainerRuntime} instance, depending on the job's
+ * configuration.
+ *
+ * 

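The nonsecure-mode behavior described in the new Javadoc, restated as a configuration sketch. The two nodemanager keys are quoted from the Javadoc above; note that the Javadoc itself misspells hadoop.security.authentication:

import org.apache.hadoop.conf.Configuration;

public class NonsecureModeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Simple (non-Kerberos) authentication.
    conf.set("hadoop.security.authentication", "simple");
    // Run all containers as one fixed local user rather than as each
    // application's owner.
    conf.setBoolean(
        "yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users",
        true);
    conf.set(
        "yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user",
        "nobody");
    System.out.println(conf.get("hadoop.security.authentication"));
  }
}
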
[27/50] [abbrv] hadoop git commit: YARN-5514. Clarify DecommissionType.FORCEFUL comment (Vrushali C via Varun Saxena)

2016-08-17 Thread subru
YARN-5514. Clarify DecommissionType.FORCEFUL comment (Vrushali C via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ffe1fff5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ffe1fff5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ffe1fff5

Branch: refs/heads/YARN-2915
Commit: ffe1fff5262bb1d77f68059f12ae43f9849b530f
Parents: b8a446b
Author: Varun Saxena 
Authored: Tue Aug 16 14:05:41 2016 +0530
Committer: Varun Saxena 
Committed: Tue Aug 16 14:05:41 2016 +0530

--
 .../hadoop/yarn/api/records/DecommissionType.java   | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffe1fff5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
index 988fd51..ba5609d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
@@ -17,13 +17,19 @@
  */
 package org.apache.hadoop.yarn.api.records;
 
+/**
+ * Specifies the different types of decommissioning of nodes.
+ */
 public enum DecommissionType {
-  /** Decomissioning nodes in normal way **/
+  /** Decomissioning nodes in normal way. **/
   NORMAL,
 
-  /** Graceful decommissioning of nodes **/
+  /** Graceful decommissioning of nodes. **/
   GRACEFUL,
 
-  /** Forceful decommissioning of nodes which are already in progress **/
+  /**
+   * Forceful decommissioning of nodes whose decommissioning is already in
+   * progress.
+   **/
   FORCEFUL
 }
\ No newline at end of file

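For context, DecommissionType is what an admin refresh-nodes request carries. A sketch, assuming the RefreshNodesRequest.newInstance(DecommissionType) factory and getDecommissionType() accessor, which are not shown in this diff:

import org.apache.hadoop.yarn.api.records.DecommissionType;
import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesRequest;

public class GracefulDecommissionSketch {
  public static void main(String[] args) {
    // Ask the RM to drain nodes gracefully; FORCEFUL would cut short a
    // graceful decommission that is already in progress.
    RefreshNodesRequest request =
        RefreshNodesRequest.newInstance(DecommissionType.GRACEFUL);
    System.out.println(request.getDecommissionType());
  }
}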

[49/50] [abbrv] hadoop git commit: YARN-5300. Exclude generated federation protobuf sources from YARN Javadoc/findbugs build

2016-08-17 Thread subru
YARN-5300. Exclude generated federation protobuf sources from YARN Javadoc/findbugs build


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/084c7571
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/084c7571
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/084c7571

Branch: refs/heads/YARN-2915
Commit: 084c7571c1d3e051d90a7f84019926987e2af8a5
Parents: f80a729
Author: Subru Krishnan 
Authored: Tue Jul 19 15:08:25 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml | 3 +++
 hadoop-yarn-project/hadoop-yarn/pom.xml  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/084c7571/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index a5c0f71..7dabc8e 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -21,6 +21,9 @@
 
   
   
+  <Match>
+    <Package name="org.apache.hadoop.yarn.federation.proto" />
+  </Match>
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/084c7571/hadoop-yarn-project/hadoop-yarn/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/pom.xml b/hadoop-yarn-project/hadoop-yarn/pom.xml
index 3353e33..b276801 100644
--- a/hadoop-yarn-project/hadoop-yarn/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/pom.xml
@@ -76,7 +76,7 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-javadoc-plugin</artifactId>
         <configuration>
-          <excludePackageNames>org.apache.hadoop.yarn.proto</excludePackageNames>
+          <excludePackageNames>org.apache.hadoop.yarn.proto:org.apache.hadoop.yarn.federation.proto</excludePackageNames>
         </configuration>
       </plugin>


[42/50] [abbrv] hadoop git commit: YARN-5390. Federation Subcluster Resolver. Contributed by Ellen Hui.

2016-08-17 Thread subru
YARN-5390. Federation Subcluster Resolver. Contributed by Ellen Hui.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa098e26
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa098e26
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa098e26

Branch: refs/heads/YARN-2915
Commit: aa098e2603ff87a644b734ae36d9a2cf989f8da8
Parents: ba3e426
Author: Subru Krishnan 
Authored: Thu Aug 4 15:58:31 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 .../hadoop/yarn/conf/YarnConfiguration.java |   8 +
 .../src/main/resources/yarn-default.xml |   7 +
 .../hadoop-yarn-server-common/pom.xml   |  10 +
 .../resolver/AbstractSubClusterResolver.java|  67 +++
 .../resolver/DefaultSubClusterResolverImpl.java | 164 +
 .../federation/resolver/SubClusterResolver.java |  58 ++
 .../federation/resolver/package-info.java   |  17 ++
 .../resolver/TestDefaultSubClusterResolver.java | 184 +++
 .../src/test/resources/nodes|   4 +
 .../src/test/resources/nodes-malformed  |   3 +
 10 files changed, 522 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa098e26/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 8899ccd..4b9793c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2456,6 +2456,14 @@ public class YarnConfiguration extends Configuration {
   public static final int DEFAULT_SHARED_CACHE_NM_UPLOADER_THREAD_COUNT = 20;
 
   
+  // Federation Configs
+  
+
+  public static final String FEDERATION_PREFIX = YARN_PREFIX + "federation.";
+  public static final String FEDERATION_MACHINE_LIST =
+  FEDERATION_PREFIX + "machine-list";
+
+  
   // Other Configs
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa098e26/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index e77e990..f46378d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -2600,6 +2600,13 @@
   
 
   
+  <property>
+    <description>
+      Machine list file to be loaded by the FederationSubCluster Resolver
+    </description>
+    <name>yarn.federation.machine-list</name>
+  </property>
+
 The interval that the yarn client library uses to poll the
 completion status of the asynchronous API of application client protocol.
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa098e26/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
index 4216f76..c16747a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
@@ -180,6 +180,16 @@
   
 
   
+      <plugin>
+        <groupId>org.apache.rat</groupId>
+        <artifactId>apache-rat-plugin</artifactId>
+        <configuration>
+          <excludes>
+            <exclude>src/test/resources/nodes</exclude>
+            <exclude>src/test/resources/nodes-malformed</exclude>
+          </excludes>
+        </configuration>
+      </plugin>
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa098e26/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/resolver/AbstractSubClusterResolver.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/resolver/AbstractSubClusterResolver.java
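
For orientation, a minimal sketch of how the new resolver might be wired up.
The load() and getSubClusterForNode() calls, and the Configurable setConf(),
are inferred from the class listing above and are assumptions, not code
quoted from the commit:

    // Hypothetical usage sketch of the federation subcluster resolver.
    Configuration conf = new YarnConfiguration();
    conf.set(YarnConfiguration.FEDERATION_MACHINE_LIST,
        "/etc/hadoop/conf/machine-list");  // illustrative path

    SubClusterResolver resolver = new DefaultSubClusterResolverImpl();
    resolver.setConf(conf);   // assumes the resolver is Configurable
    resolver.load();          // parse the machine-list file
    SubClusterId home = resolver.getSubClusterForNode("node1.example.com");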

[40/50] [abbrv] hadoop git commit: YARN-5467. InputValidator for the FederationStateStore internal APIs. (Giovanni Matteo Fumarola via Subru)

2016-08-17 Thread subru
YARN-5467. InputValidator for the FederationStateStore internal APIs. (Giovanni 
Matteo Fumarola via Subru)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8abb0a8c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8abb0a8c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8abb0a8c

Branch: refs/heads/YARN-2915
Commit: 8abb0a8c03ec2f29a3274416e101485af3b66ae9
Parents: 267f3f2
Author: Subru Krishnan 
Authored: Wed Aug 17 12:07:06 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 .../store/impl/MemoryFederationStateStore.java  |   30 +
 ...cationHomeSubClusterStoreInputValidator.java |  183 +++
 ...ationMembershipStateStoreInputValidator.java |  317 +
 .../FederationPolicyStoreInputValidator.java|  144 ++
 ...derationStateStoreInvalidInputException.java |   48 +
 .../federation/store/utils/package-info.java|   17 +
 .../impl/FederationStateStoreBaseTest.java  |6 +-
 .../TestFederationStateStoreInputValidator.java | 1265 ++
 8 files changed, 2007 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8abb0a8c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
index 8144435..6e564dc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
@@ -57,6 +57,9 @@ import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegister
 import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterResponse;
 import 
org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterRequest;
 import 
org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator;
+import 
org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator;
+import 
org.apache.hadoop.yarn.server.federation.store.utils.FederationPolicyStoreInputValidator;
 import org.apache.hadoop.yarn.server.records.Version;
 import org.apache.hadoop.yarn.util.MonotonicClock;
 
@@ -88,6 +91,8 @@ public class MemoryFederationStateStore implements 
FederationStateStore {
   @Override
   public SubClusterRegisterResponse registerSubCluster(
   SubClusterRegisterRequest request) throws YarnException {
+FederationMembershipStateStoreInputValidator
+.validateSubClusterRegisterRequest(request);
 SubClusterInfo subClusterInfo = request.getSubClusterInfo();
 membership.put(subClusterInfo.getSubClusterId(), subClusterInfo);
 return SubClusterRegisterResponse.newInstance();
@@ -96,6 +101,8 @@ public class MemoryFederationStateStore implements 
FederationStateStore {
   @Override
   public SubClusterDeregisterResponse deregisterSubCluster(
   SubClusterDeregisterRequest request) throws YarnException {
+FederationMembershipStateStoreInputValidator
+.validateSubClusterDeregisterRequest(request);
 SubClusterInfo subClusterInfo = membership.get(request.getSubClusterId());
 if (subClusterInfo == null) {
   throw new YarnException(
@@ -111,6 +118,8 @@ public class MemoryFederationStateStore implements 
FederationStateStore {
   public SubClusterHeartbeatResponse subClusterHeartbeat(
   SubClusterHeartbeatRequest request) throws YarnException {
 
+FederationMembershipStateStoreInputValidator
+.validateSubClusterHeartbeatRequest(request);
 SubClusterId subClusterId = request.getSubClusterId();
 SubClusterInfo subClusterInfo = membership.get(subClusterId);
 
@@ -129,6 +138,9 @@ public class MemoryFederationStateStore implements 
FederationStateStore {
   @Override
   public GetSubClusterInfoResponse getSubCluster(
   GetSubClusterInfoRequest request) throws YarnException {
+
+FederationMembershipStateStoreInputValidator
+
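
The pattern above is uniform: every store method first runs its request
through a static validator, so malformed input is rejected before any state
is touched. A simplified sketch of one such check; the field checks and the
assumption that FederationStateStoreInvalidInputException is a YarnException
subclass are taken from the class names in the diffstat, not quoted code:

    // Validate-before-mutate sketch, simplified from the class names above.
    public static void validateSubClusterRegisterRequest(
        SubClusterRegisterRequest request) throws YarnException {
      if (request == null) {
        throw new FederationStateStoreInvalidInputException(
            "Missing SubClusterRegisterRequest");
      }
      if (request.getSubClusterInfo() == null) {
        throw new FederationStateStoreInvalidInputException(
            "Missing SubClusterInfo in the request");
      }
      // ...further field checks (id, addresses, state) would follow here.
    }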

[47/50] [abbrv] hadoop git commit: YARN-5407. In-memory based implementation of the FederationApplicationStateStore/FederationPolicyStateStore. (Ellen Hui via Subru)

2016-08-17 Thread subru
YARN-5407. In-memory based implementation of the 
FederationApplicationStateStore/FederationPolicyStateStore. (Ellen Hui via 
Subru)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15379b89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15379b89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15379b89

Branch: refs/heads/YARN-2915
Commit: 15379b899825830a018e35e585a0cad64c9edcff
Parents: e46c7ea
Author: Subru Krishnan 
Authored: Tue Aug 9 16:07:55 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 .../store/impl/MemoryFederationStateStore.java  | 158 +++-
 ...SubClusterPoliciesConfigurationsRequest.java |   2 +-
 ...ubClusterPoliciesConfigurationsResponse.java |   2 +-
 ...GetSubClusterPolicyConfigurationRequest.java |   3 +-
 ...etSubClusterPolicyConfigurationResponse.java |   2 +-
 ...SetSubClusterPolicyConfigurationRequest.java |  20 +-
 ...etSubClusterPolicyConfigurationResponse.java |   2 +-
 .../records/SubClusterPolicyConfiguration.java  |  27 +-
 ...tApplicationHomeSubClusterRequestPBImpl.java |   4 +
 ...ClusterPolicyConfigurationRequestPBImpl.java |  17 -
 .../pb/SubClusterPolicyConfigurationPBImpl.java |  17 +
 .../proto/yarn_server_federation_protos.proto   |   8 +-
 .../impl/FederationStateStoreBaseTest.java  | 367 ++-
 .../impl/TestMemoryFederationStateStore.java|   4 +-
 14 files changed, 558 insertions(+), 75 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15379b89/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
index cea4ac2..a540dff 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
@@ -20,35 +20,72 @@ package org.apache.hadoop.yarn.server.federation.store.impl;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.concurrent.ConcurrentHashMap;
 
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
-import 
org.apache.hadoop.yarn.server.federation.store.FederationMembershipStateStore;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration;
+import org.apache.hadoop.yarn.server.federation.store.FederationStateStore;
 import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest;
 import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster;
+import 
org.apache.hadoop.yarn.server.federation.store.records.DeleteApplicationHomeSubClusterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.DeleteApplicationHomeSubClusterResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHomeSubClusterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHomeSubClusterResponse;
 import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoRequest;
 import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPoliciesConfigurationsRequest;
+import 

[21/50] [abbrv] hadoop git commit: HDFS-10724. Document caller context config keys. (Contributed by Mingliang Liu)

2016-08-17 Thread subru
HDFS-10724. Document caller context config keys. (Contributed by Mingliang Liu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4bcbef39
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4bcbef39
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4bcbef39

Branch: refs/heads/YARN-2915
Commit: 4bcbef39f7ca07601092919a7f2bea531a2dfa07
Parents: 12ad63d
Author: Mingliang Liu 
Authored: Mon Aug 15 20:20:33 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Aug 15 20:20:33 2016 -0700

--
 .../src/main/resources/core-default.xml | 28 
 1 file changed, 28 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4bcbef39/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 992b464..f9c3f72 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -2325,4 +2325,32 @@
     <value>org.apache.hadoop.fs.adl.Adl</value>
   </property>
 
+  <property>
+    <name>hadoop.caller.context.enabled</name>
+    <value>false</value>
+    <description>When the feature is enabled, additional fields are written
+      into name-node audit log records for auditing coarse granularity
+      operations.</description>
+  </property>
+  <property>
+    <name>hadoop.caller.context.max.size</name>
+    <value>128</value>
+    <description>The maximum bytes a caller context string can have. If the
+      passed caller context is longer than this maximum bytes, client will
+      truncate it before sending to server. Note that the server may have a
+      different maximum size, and will truncate the caller context to the
+      maximum size it allows.</description>
+  </property>
+  <property>
+    <name>hadoop.caller.context.signature.max.size</name>
+    <value>40</value>
+    <description>
+      The caller's signature (optional) is for offline validation. If the
+      signature exceeds the maximum allowed bytes in server, the caller context
+      will be abandoned, in which case the caller context will not be recorded
+      in audit logs.
+    </description>
+  </property>
+
 </configuration>

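Taken together, these keys gate a per-RPC context string that the NameNode
appends to its audit log records. A hedged sketch of how a client might tag
its operations using the existing org.apache.hadoop.ipc.CallerContext API;
the context value and path are made-up examples:

    // Tag subsequent RPCs from this thread with an application-level context.
    // Takes effect when hadoop.caller.context.enabled=true on the server; the
    // string is truncated to hadoop.caller.context.max.size bytes if too long.
    CallerContext.setCurrent(
        new CallerContext.Builder("hive_query:q_20160817_0001").build());
    fs.mkdirs(new Path("/tmp/audited"));  // given a FileSystem fs; the audit
                                          // record now carries the context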




[05/50] [abbrv] hadoop git commit: HADOOP-11588. Benchmark framework and test for erasure coders. Contributed by Rui Li

2016-08-17 Thread subru
HADOOP-11588. Benchmark framework and test for erasure coders. Contributed by 
Rui Li


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8fbb57fb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8fbb57fb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8fbb57fb

Branch: refs/heads/YARN-2915
Commit: 8fbb57fbd903a838684fa87cf15767d13695e4ed
Parents: 5199db3
Author: Kai Zheng 
Authored: Fri Aug 12 15:05:52 2016 +0800
Committer: Kai Zheng 
Committed: Fri Aug 12 15:05:52 2016 +0800

--
 .../rawcoder/RSRawDecoderLegacy.java|  56 +--
 .../rawcoder/RawErasureCoderBenchmark.java  | 408 +++
 .../rawcoder/TestRawErasureCoderBenchmark.java  |  65 +++
 3 files changed, 490 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8fbb57fb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java
index 0183760..c8deec9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java
@@ -39,28 +39,6 @@ public class RSRawDecoderLegacy extends RawErasureDecoder {
   private int[] errSignature;
   private int[] primitivePower;
 
-  /**
-   * We need a set of reusable buffers either for the bytes array
-   * decoding version or direct buffer decoding version. Normally not both.
-   *
-   * For output, in addition to the valid buffers from the caller
-   * passed from above, we need to provide extra buffers for the internal
-   * decoding implementation. For output, the caller should provide no more
-   * than numParityUnits but at least one buffers. And the left buffers will be
-   * borrowed from either bytesArrayBuffers, for the bytes array version.
-   *
-   */
-  // Reused buffers for decoding with bytes arrays
-  private byte[][] bytesArrayBuffers = new byte[getNumParityUnits()][];
-  private byte[][] adjustedByteArrayOutputsParameter =
-  new byte[getNumParityUnits()][];
-  private int[] adjustedOutputOffsets = new int[getNumParityUnits()];
-
-  // Reused buffers for decoding with direct ByteBuffers
-  private ByteBuffer[] directBuffers = new ByteBuffer[getNumParityUnits()];
-  private ByteBuffer[] adjustedDirectBufferOutputsParameter =
-  new ByteBuffer[getNumParityUnits()];
-
   public RSRawDecoderLegacy(ErasureCoderOptions coderOptions) {
 super(coderOptions);
 if (getNumAllUnits() >= RSUtil.GF.getFieldSize()) {
@@ -139,16 +117,14 @@ public class RSRawDecoderLegacy extends RawErasureDecoder 
{
  * implementations, so we have to adjust them before calling doDecodeImpl.
  */
 
+byte[][] bytesArrayBuffers = new byte[getNumParityUnits()][];
+byte[][] adjustedByteArrayOutputsParameter =
+new byte[getNumParityUnits()][];
+int[] adjustedOutputOffsets = new int[getNumParityUnits()];
+
 int[] erasedOrNotToReadIndexes =
 CoderUtil.getNullIndexes(decodingState.inputs);
 
-// Prepare for adjustedOutputsParameter
-
-// First reset the positions needed this time
-for (int i = 0; i < erasedOrNotToReadIndexes.length; i++) {
-  adjustedByteArrayOutputsParameter[i] = null;
-  adjustedOutputOffsets[i] = 0;
-}
 // Use the caller passed buffers in erasedIndexes positions
 for (int outputIdx = 0, i = 0;
  i < decodingState.erasedIndexes.length; i++) {
@@ -174,7 +150,8 @@ public class RSRawDecoderLegacy extends RawErasureDecoder {
 for (int bufferIdx = 0, i = 0; i < erasedOrNotToReadIndexes.length; i++) {
   if (adjustedByteArrayOutputsParameter[i] == null) {
 adjustedByteArrayOutputsParameter[i] = CoderUtil.resetBuffer(
-checkGetBytesArrayBuffer(bufferIdx, dataLen), 0, dataLen);
+checkGetBytesArrayBuffer(bytesArrayBuffers, bufferIdx, dataLen),
+0, dataLen);
 adjustedOutputOffsets[i] = 0; // Always 0 for such temp output
 bufferIdx++;
   }
@@ -198,12 +175,10 @@ public class RSRawDecoderLegacy extends RawErasureDecoder 
{
 int[] erasedOrNotToReadIndexes =
 CoderUtil.getNullIndexes(decodingState.inputs);
 
-// Prepare for adjustedDirectBufferOutputsParameter
+ByteBuffer[] directBuffers = new ByteBuffer[getNumParityUnits()];
+ByteBuffer[] adjustedDirectBufferOutputsParameter 
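
The companion benchmark measures raw coder throughput. As a hedged
illustration of the kind of loop involved (the harness details and variable
names here are assumptions, not the actual RawErasureCoderBenchmark code):

    // Time repeated encode calls over fixed-size chunks and report MB/s.
    long start = System.nanoTime();
    for (int i = 0; i < rounds; i++) {
      encoder.encode(dataUnits, parityUnits);  // RawErasureEncoder API
    }
    double secs = (System.nanoTime() - start) / 1e9;
    double mb = (double) rounds * dataUnits.length * chunkBytes / (1024 * 1024);
    System.out.printf("encode throughput: %.2f MB/s%n", mb / secs);

Note also the refactor in the decoder above: the scratch buffers become
per-call locals, which keeps decode state off the instance and avoids
initializing fields through an overridable method at declaration time.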

[30/50] [abbrv] hadoop git commit: MAPREDUCE-6751. Add debug log message when splitting is not possible due to unsplittable compression. (Peter Vary via rchiang)

2016-08-17 Thread subru
MAPREDUCE-6751. Add debug log message when splitting is not possible due to 
unsplittable compression. (Peter Vary via rchiang)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6c154abd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6c154abd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6c154abd

Branch: refs/heads/YARN-2915
Commit: 6c154abd33279475315b5f7f78dc47f1b0aa7028
Parents: b047bc7
Author: Ray Chiang 
Authored: Tue Aug 16 12:13:22 2016 -0700
Committer: Ray Chiang 
Committed: Tue Aug 16 12:13:22 2016 -0700

--
 .../main/java/org/apache/hadoop/mapred/FileInputFormat.java   | 7 +++
 .../hadoop/mapreduce/lib/input/CombineFileInputFormat.java| 4 
 .../apache/hadoop/mapreduce/lib/input/FileInputFormat.java| 7 +++
 3 files changed, 18 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c154abd/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
index 2c58ebe..5803d60 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
@@ -369,6 +369,13 @@ public abstract class FileInputFormat<K, V> implements
 InputFormat<K, V> {
 splitHosts[0], splitHosts[1]));
   }
 } else {
+  if (LOG.isDebugEnabled()) {
+// Log only if the file is big enough to be split
+if (length > Math.min(file.getBlockSize(), minSize)) {
+  LOG.debug("File is not splittable so no parallelization "
+  + "is possible: " + file.getPath());
+}
+  }
   String[][] splitHosts = 
getSplitHostsAndCachedHosts(blkLocations,0,length,clusterMap);
   splits.add(makeSplit(path, 0, length, splitHosts[0], splitHosts[1]));
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c154abd/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
index b2b7656..8f9699e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
@@ -600,6 +600,10 @@ public abstract class CombineFileInputFormat<K, V>
 if (!isSplitable) {
   // if the file is not splitable, just create the one block with
   // full file length
+  if (LOG.isDebugEnabled()) {
+LOG.debug("File is not splittable so no parallelization "
++ "is possible: " + stat.getPath());
+  }
   blocks = new OneBlockInfo[1];
   fileSize = stat.getLen();
   blocks[0] = new OneBlockInfo(stat.getPath(), 0, fileSize,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c154abd/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
index 0c5ede9..7ec882f 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
+++ 
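
The idiom is worth spelling out: the isDebugEnabled() guard keeps the string
concatenation, and the block-size comparison in the mapred variant, off the
hot split-computation path when debug logging is off. A minimal sketch of the
same pattern; LOG, length, blockSize, minSplitSize, and path are hypothetical
stand-ins for the surrounding code:

    // Guarded debug logging: the message is only built when it will be used.
    if (LOG.isDebugEnabled()) {
      // Log only for files large enough that splitting would have mattered.
      if (length > Math.min(blockSize, minSplitSize)) {
        LOG.debug("File is not splittable so no parallelization is possible: "
            + path);
      }
    }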

[09/50] [abbrv] hadoop git commit: YARN-5491. Fix random failure of TestCapacityScheduler#testCSQueueBlocked (Bibin A Chundatt via Varun Saxena)

2016-08-17 Thread subru
YARN-5491. Fix random failure of TestCapacityScheduler#testCSQueueBlocked 
(Bibin A Chundatt via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d677b68c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d677b68c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d677b68c

Branch: refs/heads/YARN-2915
Commit: d677b68c2599445fff56db4df26448a8bad0f5dd
Parents: 23c6e3c
Author: Varun Saxena 
Authored: Mon Aug 15 03:31:21 2016 +0530
Committer: Varun Saxena 
Committed: Mon Aug 15 03:31:21 2016 +0530

--
 .../scheduler/capacity/TestCapacityScheduler.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d677b68c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
index 2da7adb..0b52b86 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
@@ -3635,7 +3635,7 @@ public class TestCapacityScheduler {
 }
 assertEquals("A Used Resource should be 2 GB", 2 * GB,
 cs.getQueue("a").getUsedResources().getMemorySize());
-assertEquals("B Used Resource should be 2 GB", 13 * GB,
+assertEquals("B Used Resource should be 13 GB", 13 * GB,
 cs.getQueue("b").getUsedResources().getMemorySize());
 r1 = TestUtils.createResourceRequest(
 ResourceRequest.ANY, 2 * GB, 1, true, priority, recordFactory);
@@ -3659,10 +3659,14 @@ public class TestCapacityScheduler {
 ContainerId containerId2 =ContainerId.newContainerId(appAttemptId2, 11);
 
 cs.handle(new ContainerExpiredSchedulerEvent(containerId1));
+rm.drainEvents();
+CapacityScheduler.schedule(cs);
+
 cs.handle(new ContainerExpiredSchedulerEvent(containerId2));
 CapacityScheduler.schedule(cs);
 rm.drainEvents();
-assertEquals("A Used Resource should be 2 GB", 4 * GB,
+
+assertEquals("A Used Resource should be 4 GB", 4 * GB,
 cs.getQueue("a").getUsedResources().getMemorySize());
 assertEquals("B Used Resource should be 12 GB", 12 * GB,
 cs.getQueue("b").getUsedResources().getMemorySize());





[17/50] [abbrv] hadoop git commit: HADOOP-13437. KMS should reload whitelist and default key ACLs when hot-reloading. Contributed by Xiao Chen.

2016-08-17 Thread subru
HADOOP-13437. KMS should reload whitelist and default key ACLs when 
hot-reloading. Contributed by Xiao Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9daa9979
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9daa9979
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9daa9979

Branch: refs/heads/YARN-2915
Commit: 9daa9979a1f92fb3230361c10ddfcc1633795c0e
Parents: 864f878
Author: Xiao Chen 
Authored: Mon Aug 15 18:13:58 2016 -0700
Committer: Xiao Chen 
Committed: Mon Aug 15 18:14:45 2016 -0700

--
 .../hadoop/crypto/key/kms/server/KMSACLs.java   |  75 
 .../crypto/key/kms/server/TestKMSACLs.java  | 174 ++-
 2 files changed, 207 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9daa9979/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
index 5b67950..c36fcf8 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
@@ -34,7 +34,6 @@ import java.util.Map;
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
-import java.util.regex.Pattern;
 
 import com.google.common.annotations.VisibleForTesting;
 
@@ -74,10 +73,10 @@ public class KMSACLs implements Runnable, KeyACLs {
   private volatile Map<Type, AccessControlList> blacklistedAcls;
   @VisibleForTesting
   volatile Map<String, HashMap<KeyOpType, AccessControlList>> keyAcls;
-  private final Map<KeyOpType, AccessControlList> defaultKeyAcls =
-      new HashMap<KeyOpType, AccessControlList>();
-  private final Map<KeyOpType, AccessControlList> whitelistKeyAcls =
-      new HashMap<KeyOpType, AccessControlList>();
+  @VisibleForTesting
+  volatile Map<KeyOpType, AccessControlList> defaultKeyAcls = new HashMap<>();
+  @VisibleForTesting
+  volatile Map<KeyOpType, AccessControlList> whitelistKeyAcls = new
+      HashMap<>();
   private ScheduledExecutorService executorService;
   private long lastReload;
 
@@ -111,7 +110,8 @@ public class KMSACLs implements Runnable, KeyACLs {
 blacklistedAcls = tempBlacklist;
   }
 
-  private void setKeyACLs(Configuration conf) {
+  @VisibleForTesting
+  void setKeyACLs(Configuration conf) {
     Map<String, HashMap<KeyOpType, AccessControlList>> tempKeyAcls =
         new HashMap<String, HashMap<KeyOpType, AccessControlList>>();
     Map<String, String> allKeyACLS =
@@ -148,38 +148,43 @@ public class KMSACLs implements Runnable, KeyACLs {
 }
   }
 }
-
 keyAcls = tempKeyAcls;
+
+    final Map<KeyOpType, AccessControlList> tempDefaults = new HashMap<>();
+    final Map<KeyOpType, AccessControlList> tempWhitelists = new HashMap<>();
 for (KeyOpType keyOp : KeyOpType.values()) {
-  if (!defaultKeyAcls.containsKey(keyOp)) {
-String confKey = KMSConfiguration.DEFAULT_KEY_ACL_PREFIX + keyOp;
-String aclStr = conf.get(confKey);
-if (aclStr != null) {
-  if (keyOp == KeyOpType.ALL) {
-// Ignore All operation for default key acl
-LOG.warn("Should not configure default key ACL for KEY_OP '{}'", 
keyOp);
-  } else {
-if (aclStr.equals("*")) {
-  LOG.info("Default Key ACL for KEY_OP '{}' is set to '*'", keyOp);
-}
-defaultKeyAcls.put(keyOp, new AccessControlList(aclStr));
-  }
-}
-  }
-  if (!whitelistKeyAcls.containsKey(keyOp)) {
-String confKey = KMSConfiguration.WHITELIST_KEY_ACL_PREFIX + keyOp;
-String aclStr = conf.get(confKey);
-if (aclStr != null) {
-  if (keyOp == KeyOpType.ALL) {
-// Ignore All operation for whitelist key acl
-LOG.warn("Should not configure whitelist key ACL for KEY_OP '{}'", 
keyOp);
-  } else {
-if (aclStr.equals("*")) {
-  LOG.info("Whitelist Key ACL for KEY_OP '{}' is set to '*'", 
keyOp);
-}
-whitelistKeyAcls.put(keyOp, new AccessControlList(aclStr));
-  }
+  parseAclsWithPrefix(conf, KMSConfiguration.DEFAULT_KEY_ACL_PREFIX,
+  keyOp, tempDefaults);
+  parseAclsWithPrefix(conf, KMSConfiguration.WHITELIST_KEY_ACL_PREFIX,
+  keyOp, tempWhitelists);
+}
+defaultKeyAcls = tempDefaults;
+whitelistKeyAcls 
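
The reload strategy above is copy-on-write: each ACL map is rebuilt into a
temporary map and then published with a single assignment to a volatile
field, so concurrent readers always see either the old or the new map, never
a half-built one. A simplified sketch of the idiom; the names and the
"key.acl." regex are shortened illustrations, not the actual KMS classes:

    // Copy-on-write reload of a config-derived map.
    private volatile Map<String, AccessControlList> acls = new HashMap<>();

    void reload(Configuration conf) {
      Map<String, AccessControlList> temp = new HashMap<>();
      for (Map.Entry<String, String> e
          : conf.getValByRegex("^key\\.acl\\..*").entrySet()) {
        temp.put(e.getKey(), new AccessControlList(e.getValue()));
      }
      acls = temp;  // one volatile write publishes the whole map atomically
    }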

[29/50] [abbrv] hadoop git commit: HDFS-10560. DiskBalancer: Reuse ObjectMapper instance to improve the performance. Contributed by Yiqun Lin.

2016-08-17 Thread subru
HDFS-10560. DiskBalancer: Reuse ObjectMapper instance to improve the 
performance. Contributed by Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b047bc72
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b047bc72
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b047bc72

Branch: refs/heads/YARN-2915
Commit: b047bc7270f3461156e4d08423c728ee9c67dba5
Parents: b427ce1
Author: Anu Engineer 
Authored: Tue Aug 16 10:20:08 2016 -0700
Committer: Anu Engineer 
Committed: Tue Aug 16 10:20:08 2016 -0700

--
 .../server/datanode/DiskBalancerWorkItem.java   | 11 ++---
 .../server/datanode/DiskBalancerWorkStatus.java | 26 +++-
 .../hdfs/server/datanode/DiskBalancer.java  |  5 ++--
 .../server/diskbalancer/command/Command.java|  6 +++--
 .../connectors/JsonNodeConnector.java   |  8 +++---
 .../datamodel/DiskBalancerCluster.java  | 11 ++---
 .../datamodel/DiskBalancerVolume.java   | 11 ++---
 7 files changed, 46 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b047bc72/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
index f46a987..592a89f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.htrace.fasterxml.jackson.annotation.JsonInclude;
 import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.map.ObjectReader;
 
 import java.io.IOException;
 
@@ -34,6 +35,10 @@ import java.io.IOException;
 @InterfaceStability.Unstable
 @JsonInclude(JsonInclude.Include.NON_DEFAULT)
 public class DiskBalancerWorkItem {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
+  private static final ObjectReader READER =
+  new ObjectMapper().reader(DiskBalancerWorkItem.class);
+
   private  long startTime;
   private long secondsElapsed;
   private long bytesToCopy;
@@ -74,8 +79,7 @@ public class DiskBalancerWorkItem {
*/
   public static DiskBalancerWorkItem parseJson(String json) throws IOException 
{
 Preconditions.checkNotNull(json);
-ObjectMapper mapper = new ObjectMapper();
-return mapper.readValue(json, DiskBalancerWorkItem.class);
+return READER.readValue(json);
   }
 
   /**
@@ -169,8 +173,7 @@ public class DiskBalancerWorkItem {
* @throws IOException
*/
   public String toJson() throws IOException {
-ObjectMapper mapper = new ObjectMapper();
-return mapper.writeValueAsString(this);
+return MAPPER.writeValueAsString(this);
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b047bc72/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkStatus.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkStatus.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkStatus.java
index 14789b6..94bf6a6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkStatus.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkStatus.java
@@ -24,6 +24,7 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.map.ObjectReader;
 import org.codehaus.jackson.map.SerializationConfig;
 
 import static org.codehaus.jackson.map.type.TypeFactory.defaultInstance;
@@ -38,6 +39,15 @@ import java.util.LinkedList;
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
 public class DiskBalancerWorkStatus {
+  private static final ObjectMapper MAPPER = new ObjectMapper();
+  private static final ObjectMapper MAPPER_WITH_INDENT_OUTPUT =
+  new ObjectMapper().enable(
+  
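
The speedup comes from the fact that ObjectMapper is expensive to construct
but thread-safe once configured, so one shared instance (or a pre-bound
ObjectReader) can serve all calls. A minimal sketch of the pattern with a
hypothetical record class, mirroring the Jackson 1.x calls used in the diff:

    // Share one mapper/reader per class instead of allocating per call.
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private static final ObjectReader READER =
        new ObjectMapper().reader(MyRecord.class);  // MyRecord is hypothetical

    static MyRecord parseJson(String json) throws IOException {
      return READER.readValue(json);
    }

    String toJson() throws IOException {
      return MAPPER.writeValueAsString(this);
    }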

[50/50] [abbrv] hadoop git commit: YARN-5406. In-memory based implementation of the FederationMembershipStateStore. Contributed by Ellen Hui.

2016-08-17 Thread subru
YARN-5406. In-memory based implementation of the 
FederationMembershipStateStore. Contributed by Ellen Hui.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ba3e426a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ba3e426a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ba3e426a

Branch: refs/heads/YARN-2915
Commit: ba3e426a9740995adce75ddac693ec67ad6fce16
Parents: 0130c85
Author: Subru Krishnan 
Authored: Thu Aug 4 15:54:38 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 .../store/impl/MemoryFederationStateStore.java  | 138 
 .../federation/store/impl/package-info.java |  17 ++
 .../records/GetSubClustersInfoRequest.java  |   4 +
 .../store/records/SubClusterState.java  |   4 +
 .../impl/FederationStateStoreBaseTest.java  | 221 +++
 .../impl/TestMemoryFederationStateStore.java|  49 
 6 files changed, 433 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba3e426a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
new file mode 100644
index 000..7fdc4a9
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
@@ -0,0 +1,138 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store.impl;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import 
org.apache.hadoop.yarn.server.federation.store.FederationMembershipStateStore;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterHeartbeatRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterHeartbeatResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterResponse;
+import org.apache.hadoop.yarn.server.records.Version;
+import org.apache.hadoop.yarn.util.MonotonicClock;
+
+import com.google.common.annotations.VisibleForTesting;
+
+/**
+ * In-memory implementation of FederationMembershipStateStore.
+ */
+public class MemoryFederationStateStore
+implements FederationMembershipStateStore {
+
+  private final Map<SubClusterId, SubClusterInfo> membership =
+      new ConcurrentHashMap<SubClusterId, SubClusterInfo>();
+  private final MonotonicClock clock = new MonotonicClock();
+
+  @Override
+  public Version getMembershipStateStoreVersion() {
+return null;
+  }
+
+  @Override
+  public 
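
For orientation, a hedged sketch of how this in-memory membership store is
exercised, given a SubClusterInfo named info; the newInstance factories
follow the usual YARN record convention and their exact signatures are
assumptions, not quoted from the patch:

    MemoryFederationStateStore store = new MemoryFederationStateStore();

    // Register a subcluster, then keep it alive with heartbeats.
    store.registerSubCluster(SubClusterRegisterRequest.newInstance(info));
    store.subClusterHeartbeat(SubClusterHeartbeatRequest.newInstance(
        info.getSubClusterId(), SubClusterState.SC_RUNNING, "capability"));

    // Look a subcluster back up by id.
    SubClusterInfo stored = store.getSubCluster(
        GetSubClusterInfoRequest.newInstance(info.getSubClusterId()))
        .getSubClusterInfo();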

[07/50] [abbrv] hadoop git commit: YARN-5476. Non existent application reported as ACCEPTED by YarnClientImpl (Junping Du via Varun Saxena)

2016-08-17 Thread subru
YARN-5476. Non existent application reported as ACCEPTED by YarnClientImpl 
(Junping Du via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/23c6e3c4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/23c6e3c4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/23c6e3c4

Branch: refs/heads/YARN-2915
Commit: 23c6e3c4e41fecc61d062542cb61e68898235006
Parents: 9019606
Author: Varun Saxena 
Authored: Fri Aug 12 20:37:58 2016 +0530
Committer: Varun Saxena 
Committed: Fri Aug 12 20:37:58 2016 +0530

--
 .../server/resourcemanager/rmapp/RMAppImpl.java | 16 +++
 .../rmapp/TestRMAppTransitions.java | 21 +---
 2 files changed, 22 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c6e3c4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
index 82e669a..e5bde32 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
@@ -902,8 +902,9 @@ public class RMAppImpl implements RMApp, Recoverable {
 //TODO recover collector address.
 //this.collectorAddr = appState.getCollectorAddr();
 
-// send the ATS create Event
-sendATSCreateEvent(this, this.startTime);
+// send the ATS create Event during RM recovery.
+// NOTE: it could be duplicated with events sent before RM gets restarted.
+sendATSCreateEvent();
 RMAppAttemptImpl preAttempt = null;
 for (ApplicationAttemptId attemptId :
 new TreeSet<>(appState.attempts.keySet())) {
@@ -1134,6 +1135,8 @@ public class RMAppImpl implements RMApp, Recoverable {
 public void transition(RMAppImpl app, RMAppEvent event) {
   app.handler.handle(new AppAddedSchedulerEvent(app.user,
   app.submissionContext, false));
+  // send the ATS create Event
+  app.sendATSCreateEvent();
 }
   }
 
@@ -1212,9 +1215,6 @@ public class RMAppImpl implements RMApp, Recoverable {
   // communication
   LOG.info("Storing application with id " + app.applicationId);
   app.rmContext.getStateStore().storeNewApplication(app);
-
-  // send the ATS create Event
-  app.sendATSCreateEvent(app, app.startTime);
 }
   }
 
@@ -1912,9 +1912,9 @@ public class RMAppImpl implements RMApp, Recoverable {
 return callerContext;
   }
 
-  private void sendATSCreateEvent(RMApp app, long startTime) {
-rmContext.getRMApplicationHistoryWriter().applicationStarted(app);
-rmContext.getSystemMetricsPublisher().appCreated(app, startTime);
+  private void sendATSCreateEvent() {
+rmContext.getRMApplicationHistoryWriter().applicationStarted(this);
+rmContext.getSystemMetricsPublisher().appCreated(this, this.startTime);
   }
 
   @Private

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c6e3c4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
index 9d2f89d..6bf5ecf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.rmapp;
 
 import static org.mockito.Matchers.any;
 import static 

[41/50] [abbrv] hadoop git commit: YARN-3664. Federation PolicyStore internal APIs

2016-08-17 Thread subru
YARN-3664. Federation PolicyStore internal APIs


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bfdb58a2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bfdb58a2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bfdb58a2

Branch: refs/heads/YARN-2915
Commit: bfdb58a264d6a27e7e68dd85c64fc7658595feac
Parents: 12b495e
Author: Subru Krishnan 
Authored: Fri Aug 5 12:34:58 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:47 2016 -0700

--
 .../federation/store/FederationPolicyStore.java |  76 
 ...SubClusterPoliciesConfigurationsRequest.java |  35 
 ...ubClusterPoliciesConfigurationsResponse.java |  66 +++
 ...GetSubClusterPolicyConfigurationRequest.java |  62 ++
 ...etSubClusterPolicyConfigurationResponse.java |  65 +++
 ...SetSubClusterPolicyConfigurationRequest.java |  79 
 ...etSubClusterPolicyConfigurationResponse.java |  36 
 .../records/SubClusterPolicyConfiguration.java  | 130 +
 ...sterPoliciesConfigurationsRequestPBImpl.java |  95 +
 ...terPoliciesConfigurationsResponsePBImpl.java | 191 +++
 ...ClusterPolicyConfigurationRequestPBImpl.java | 103 ++
 ...lusterPolicyConfigurationResponsePBImpl.java | 143 ++
 .../pb/GetSubClustersInfoResponsePBImpl.java|   4 +-
 ...ClusterPolicyConfigurationRequestPBImpl.java | 159 +++
 ...lusterPolicyConfigurationResponsePBImpl.java |  93 +
 .../pb/SubClusterPolicyConfigurationPBImpl.java | 121 
 .../proto/yarn_server_federation_protos.proto   |  28 +++
 .../records/TestFederationProtocolRecords.java  |  53 -
 18 files changed, 1536 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bfdb58a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationPolicyStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationPolicyStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationPolicyStore.java
new file mode 100644
index 000..9d9bd9b
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationPolicyStore.java
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPoliciesConfigurationsRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPoliciesConfigurationsResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SetSubClusterPolicyConfigurationRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SetSubClusterPolicyConfigurationResponse;
+
+/**
+ * The FederationPolicyStore provides a key-value interface to access the
+ * policies configured for the system. The key is a "queue" name, i.e., the
+ * system allows configuring a different policy for each queue in the system
+ * (though each policy can make dynamic run-time decisions on a per-job/per-task
+ * basis). The value is a {@code SubClusterPolicyConfiguration}, a serialized
+ * representation of the policy type and its 
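
In other words, the store is a queue-to-policy map with get, set, and get-all
operations. A hedged round-trip sketch against this API; the newInstance
factory arguments follow the usual YARN record convention and are assumed,
not quoted from the commit:

    // Store a policy configuration for one queue, then read it back.
    store.setPolicyConfiguration(
        SetSubClusterPolicyConfigurationRequest.newInstance(policyConf));

    SubClusterPolicyConfiguration fetched =
        store.getPolicyConfiguration(
            GetSubClusterPolicyConfigurationRequest.newInstance("root.queue1"))
        .getPolicyConfiguration();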

[11/50] [abbrv] hadoop git commit: HDFS-9696. Garbage snapshot records linger forever. Contributed by Kihwal Lee

2016-08-17 Thread subru
HDFS-9696. Garbage snapshot records linger forever. Contributed by Kihwal Lee


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/83e57e08
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/83e57e08
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/83e57e08

Branch: refs/heads/YARN-2915
Commit: 83e57e083f2cf6c0de8a46966c5492faeabd8f2a
Parents: 9f29f42
Author: Kihwal Lee 
Authored: Mon Aug 15 13:01:23 2016 -0500
Committer: Kihwal Lee 
Committed: Mon Aug 15 13:01:23 2016 -0500

--
 .../server/namenode/FSImageFormatProtobuf.java  |  6 ++-
 .../hdfs/server/namenode/TestSaveNamespace.java | 42 
 2 files changed, 47 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/83e57e08/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
index 05087d1..7a81f9e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
@@ -459,7 +459,11 @@ public final class FSImageFormatProtobuf {
   this, summary, context, context.getSourceNamesystem());
 
   snapshotSaver.serializeSnapshotSection(sectionOutputStream);
-  snapshotSaver.serializeSnapshotDiffSection(sectionOutputStream);
+  // Skip snapshot-related sections when there is no snapshot.
+  if (context.getSourceNamesystem().getSnapshotManager()
+  .getNumSnapshots() > 0) {
+snapshotSaver.serializeSnapshotDiffSection(sectionOutputStream);
+  }
   snapshotSaver.serializeINodeReferenceSection(sectionOutputStream);
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/83e57e08/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
index a374585..0c86ef4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
@@ -647,6 +647,48 @@ public class TestSaveNamespace {
 }
   }
 
+
+  @Test
+  public void testSkipSnapshotSection() throws Exception {
+MiniDFSCluster cluster = new MiniDFSCluster.Builder(new Configuration())
+.numDataNodes(1).build();
+cluster.waitActive();
+DistributedFileSystem fs = cluster.getFileSystem();
+OutputStream out = null;
+try {
+  String path = "/skipSnapshot";
+  out = fs.create(new Path(path));
+  out.close();
+
+  // add a bogus filediff
+  FSDirectory dir = cluster.getNamesystem().getFSDirectory();
+  INodeFile file = dir.getINode(path).asFile();
+  file.addSnapshotFeature(null).getDiffs()
+  .saveSelf2Snapshot(-1, file, null, false);
+
+  // make sure it has a diff
+  assertTrue("Snapshot fileDiff is missing.",
+  file.getFileWithSnapshotFeature().getDiffs() != null);
+
+  // saveNamespace
+  fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
+  cluster.getNameNodeRpc().saveNamespace(0, 0);
+  fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
+
+  // restart namenode
+  cluster.restartNameNode(true);
+  dir = cluster.getNamesystem().getFSDirectory();
+  file = dir.getINode(path).asFile();
+
+  // there should be no snapshot feature for the inode, when there is
+  // no snapshot.
+  assertTrue("There should be no snapshot feature for this INode.",
+  file.getFileWithSnapshotFeature() == null);
+} finally {
+  cluster.shutdown();
+}
+  }
+
   @Test
   public void testSaveNamespaceBeforeShutdown() throws Exception {
 Configuration conf = new HdfsConfiguration();





[01/50] [abbrv] hadoop git commit: HADOOP-13441. Document LdapGroupsMapping keystore password properties. Contributed by Yuanbo Liu. [Forced Update!]

2016-08-17 Thread subru
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2915 08ff90603 -> 8abb0a8c0 (forced update)


HADOOP-13441. Document LdapGroupsMapping keystore password properties. 
Contributed by Yuanbo Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d892ae95
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d892ae95
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d892ae95

Branch: refs/heads/YARN-2915
Commit: d892ae9576d07d01927443b6dc6c934a6c2f317f
Parents: 8fbb57f
Author: Wei-Chiu Chuang 
Authored: Thu Aug 11 11:57:20 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Aug 11 11:57:43 2016 -0700

--
 .../org/apache/hadoop/conf/Configuration.java   |  5 +-
 .../fs/CommonConfigurationKeysPublic.java   | 26 
 .../alias/AbstractJavaKeyStoreProvider.java |  4 +-
 .../security/alias/CredentialProvider.java  |  6 +-
 .../alias/CredentialProviderFactory.java|  3 +-
 .../src/main/resources/core-default.xml | 64 ++--
 .../src/site/markdown/CredentialProviderAPI.md  |  2 +-
 .../src/site/markdown/GroupsMapping.md  |  6 +-
 8 files changed, 104 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d892ae95/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 0420c6b..29733eb 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -78,6 +78,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
@@ -2053,7 +2054,9 @@ public class Configuration implements 
Iterable>,
*/
   protected char[] getPasswordFromConfig(String name) {
 char[] pass = null;
-if (getBoolean(CredentialProvider.CLEAR_TEXT_FALLBACK, true)) {
+if (getBoolean(CredentialProvider.CLEAR_TEXT_FALLBACK,
+CommonConfigurationKeysPublic.
+HADOOP_SECURITY_CREDENTIAL_CLEAR_TEXT_FALLBACK_DEFAULT)) {
   String passStr = get(name);
   if (passStr != null) {
 pass = passStr.toCharArray();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d892ae95/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index 8781f24..e746f2b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -727,5 +727,31 @@ public class CommonConfigurationKeysPublic {
   "hadoop.http.logs.enabled";
  /** Default value for HADOOP_HTTP_LOGS_ENABLED */
   public static final boolean HADOOP_HTTP_LOGS_ENABLED_DEFAULT = true;
+
+  /**
+   * @see
+   * <a href="{@docRoot}/../core-default.html">
+   * core-default.xml</a>
+   */
+  public static final String HADOOP_SECURITY_CREDENTIAL_PROVIDER_PATH =
+  "hadoop.security.credential.provider.path";
+
+  /**
+   * @see
+   * <a href="{@docRoot}/../core-default.html">
+   * core-default.xml</a>
+   */
+  public static final String HADOOP_SECURITY_CREDENTIAL_CLEAR_TEXT_FALLBACK =
+  "hadoop.security.credential.clear-text-fallback";
+  public static final boolean
+  HADOOP_SECURITY_CREDENTIAL_CLEAR_TEXT_FALLBACK_DEFAULT = true;
+
+  /**
+   * @see
+   * <a href="{@docRoot}/../core-default.html">
+   * core-default.xml</a>
+   */
+  public static final String  HADOOP_SECURITY_CREDENTIAL_PASSWORD_FILE_KEY =
+  "hadoop.security.credstore.java-keystore-provider.password-file";
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d892ae95/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java
--
diff --git 

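The clear-text fallback wired up above is exercised through Configuration#getPassword. A minimal sketch of a lookup; the provider path and password alias are hypothetical examples, not values from this commit:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;

    public class CredentialLookup {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Point the resolver at a (hypothetical) Java keystore provider.
        conf.set("hadoop.security.credential.provider.path",
            "jceks://file/tmp/example.jceks");
        // Disallow clear-text fallback; the default introduced above is true.
        conf.setBoolean("hadoop.security.credential.clear-text-fallback", false);
        // getPassword consults the providers first and reads the plain config
        // value only when the clear-text fallback is enabled.
        char[] password = conf.getPassword("ssl.server.keystore.password");
        System.out.println(password == null ? "not found" : "resolved");
      }
    }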
[13/50] [abbrv] hadoop git commit: HDFS-10580. DiskBalancer: Make use of unused methods in GreedyPlanner to print debug info. Contributed by Yiqun Lin

2016-08-17 Thread subru
HDFS-10580. DiskBalancer: Make use of unused methods in GreedyPlanner to print 
debug info. Contributed by Yiqun Lin


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bed69d18
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bed69d18
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bed69d18

Branch: refs/heads/YARN-2915
Commit: bed69d18e6c84583cbe5fd765f068d9faa807617
Parents: 2424911
Author: Anu Engineer 
Authored: Mon Aug 15 12:40:29 2016 -0700
Committer: Anu Engineer 
Committed: Mon Aug 15 12:40:29 2016 -0700

--
 .../diskbalancer/planner/GreedyPlanner.java | 45 +++-
 1 file changed, 25 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bed69d18/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
index 0df9843..fb83eeb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
@@ -158,6 +158,7 @@ public class GreedyPlanner implements Planner {
 
 // since the volume data changed , we need to recompute the DataDensity.
 currentSet.computeVolumeDataDensity();
+printQueue(currentSet.getSortedQueue());
   }
 
   /**
@@ -184,7 +185,7 @@ public class GreedyPlanner implements Planner {
 if (maxLowVolumeCanReceive <= 0) {
   LOG.debug("{} Skipping disk from computation. Maximum data size " +
   "achieved.", lowVolume.getPath());
-  lowVolume.setSkip(true);
+  skipVolume(currentSet, lowVolume);
 }
 
 long maxHighVolumeCanGive = highVolume.getUsed() -
@@ -195,7 +196,7 @@ public class GreedyPlanner implements Planner {
 if (maxHighVolumeCanGive <= 0) {
   LOG.debug(" {} Skipping disk from computation. Minimum data size " +
   "achieved.", highVolume.getPath());
-  highVolume.setSkip(true);
+  skipVolume(currentSet, highVolume);
 }
 
 
@@ -219,16 +220,19 @@ public class GreedyPlanner implements Planner {
*/
   private void skipVolume(DiskBalancerVolumeSet currentSet,
   DiskBalancerVolume volume) {
-
-String message = String.format(
-"Skipping volume. Volume : %s " +
-"Type : %s Target " +
-"Number of bytes : %f lowVolume dfsUsed : %d. Skipping this " +
-"volume from all future balancing calls.", volume.getPath(),
-volume.getStorageType(),
-currentSet.getIdealUsed() * volume.getCapacity(), volume.getUsed());
+if (LOG.isDebugEnabled()) {
+  String message =
+  String.format(
+  "Skipping volume. Volume : %s " +
+  "Type : %s Target " +
+  "Number of bytes : %f lowVolume dfsUsed : %d. Skipping this " +
+  "volume from all future balancing calls.", volume.getPath(),
+  volume.getStorageType(),
+  currentSet.getIdealUsed() * volume.getCapacity(),
+  volume.getUsed());
+  LOG.debug(message);
+}
 volume.setSkip(true);
-LOG.debug(message);
   }
 
   // Removes all volumes which are part of the volumeSet but skip flag is set.
@@ -242,6 +246,7 @@ public class GreedyPlanner implements Planner {
   }
 }
 currentSet.computeVolumeDataDensity();
+printQueue(currentSet.getSortedQueue());
   }
 
   /**
@@ -251,14 +256,14 @@ public class GreedyPlanner implements Planner {
* @param queue - Queue
*/
   private void printQueue(TreeSet queue) {
-String format = String.format("First Volume : %s, DataDensity : %f",
-queue.first().getPath(),
-queue.first().getVolumeDataDensity());
-LOG.info(format);
-
-format = String
-.format("Last Volume : %s, DataDensity : %f%n", queue.last().getPath(),
-queue.last().getVolumeDataDensity());
-LOG.info(format);
+if (LOG.isDebugEnabled()) {
+  String format =
+  String.format(
+  "First Volume : %s, DataDensity : %f, " +
+  "Last Volume : %s, DataDensity : %f",
+  queue.first().getPath(), queue.first().getVolumeDataDensity(),
+  queue.last().getPath(), queue.last().getVolumeDataDensity());
+  LOG.debug(format);
+}
   }
 }

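The change above is an instance of the standard guard idiom: build the String.format message only when debug logging is enabled, so hot paths do not pay for formatting. A standalone sketch of the idiom; the class name and values are illustrative:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class GuardedDebug {
      private static final Logger LOG =
          LoggerFactory.getLogger(GuardedDebug.class);

      static void report(String path, double density) {
        // Format the message only when it will actually be emitted.
        if (LOG.isDebugEnabled()) {
          LOG.debug(String.format(
              "Volume : %s, DataDensity : %f", path, density));
        }
      }
    }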


[24/50] [abbrv] hadoop git commit: HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by Mingliang Liu)

2016-08-17 Thread subru
HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by Mingliang 
Liu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9336a049
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9336a049
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9336a049

Branch: refs/heads/YARN-2915
Commit: 9336a0495f99cd3fbc7ecef452eb37cfbaf57440
Parents: ef55fe1
Author: Mingliang Liu 
Authored: Mon Aug 15 20:24:54 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Aug 15 20:24:54 2016 -0700

--
 .../apache/hadoop/test/GenericTestUtils.java| 25 ++-
 .../hadoop/test/TestGenericTestUtils.java   | 44 
 2 files changed, 56 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9336a049/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index 116a111..6b5135c 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -42,10 +42,10 @@ import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Time;
-import org.apache.log4j.Layout;
 import org.apache.log4j.Level;
 import org.apache.log4j.LogManager;
 import org.apache.log4j.Logger;
+import org.apache.log4j.PatternLayout;
 import org.apache.log4j.WriterAppender;
 import org.junit.Assert;
 import org.junit.Assume;
@@ -275,36 +275,35 @@ public abstract class GenericTestUtils {
 private StringWriter sw = new StringWriter();
 private WriterAppender appender;
 private Logger logger;
-
+
 public static LogCapturer captureLogs(Log l) {
   Logger logger = ((Log4JLogger)l).getLogger();
-  LogCapturer c = new LogCapturer(logger);
-  return c;
+  return new LogCapturer(logger);
+}
+
+public static LogCapturer captureLogs(org.slf4j.Logger logger) {
+  return new LogCapturer(toLog4j(logger));
 }
-
 
 private LogCapturer(Logger logger) {
   this.logger = logger;
-  Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout();
-  WriterAppender wa = new WriterAppender(layout, sw);
-  logger.addAppender(wa);
+  this.appender = new WriterAppender(new PatternLayout(), sw);
+  logger.addAppender(appender);
 }
-
+
 public String getOutput() {
   return sw.toString();
 }
-
+
 public void stopCapturing() {
   logger.removeAppender(appender);
-
 }
 
 public void clearOutput() {
   sw.getBuffer().setLength(0);
 }
   }
-  
-  
+
   /**
* Mockito answer helper that triggers one latch as soon as the
* method is called, then waits on another before continuing.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9336a049/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
index 8a7b5f6..86df5d5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
@@ -18,8 +18,16 @@
 
 package org.apache.hadoop.test;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
 import org.junit.Test;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.junit.Assert.assertTrue;
+
 public class TestGenericTestUtils extends GenericTestUtils {
 
   @Test
@@ -75,4 +83,40 @@ public class TestGenericTestUtils extends GenericTestUtils {
 }
   }
 
+  @Test(timeout = 1)
+  public void testLogCapturer() {
+final Log log = LogFactory.getLog(TestGenericTestUtils.class);
+LogCapturer logCapturer = LogCapturer.captureLogs(log);
+final String infoMessage = "info message";
+// test get output message
+log.info(infoMessage);
+assertTrue(logCapturer.getOutput().endsWith(
+String.format(infoMessage + "%n")));
+// test clear output
+logCapturer.clearOutput();
+

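The fix above stops assuming that a root appender named "stdout" exists and instead attaches a private WriterAppender with a default PatternLayout. A minimal sketch of the same capture mechanics; the logger name and message are illustrative:

    import java.io.StringWriter;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;
    import org.apache.log4j.WriterAppender;

    public class CaptureDemo {
      public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        StringWriter sw = new StringWriter();
        // A private appender: nothing is assumed about the root logger's
        // configuration, which is what made the old implementation flaky.
        WriterAppender appender = new WriterAppender(new PatternLayout(), sw);
        logger.addAppender(appender);
        logger.info("info message");
        logger.removeAppender(appender);
        System.out.println(sw.toString().contains("info message"));  // true
      }
    }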
[15/50] [abbrv] hadoop git commit: HDFS-10744. Internally optimize path component resolution. Contributed by Daryn Sharp.

2016-08-17 Thread subru
HDFS-10744. Internally optimize path component resolution. Contributed by Daryn 
Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03dea65e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03dea65e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03dea65e

Branch: refs/heads/YARN-2915
Commit: 03dea65e0b17ca2f9460bb6110f6ab3a321b8bf2
Parents: d714030
Author: Kihwal Lee 
Authored: Mon Aug 15 16:44:18 2016 -0500
Committer: Kihwal Lee 
Committed: Mon Aug 15 16:45:44 2016 -0500

--
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java |  18 ++--
 .../hdfs/server/namenode/FSDirAppendOp.java |   4 +-
 .../hdfs/server/namenode/FSDirAttrOp.java   |  22 ++--
 .../hdfs/server/namenode/FSDirDeleteOp.java |   5 +-
 .../server/namenode/FSDirEncryptionZoneOp.java  |   8 +-
 .../server/namenode/FSDirErasureCodingOp.java   |   8 +-
 .../hdfs/server/namenode/FSDirMkdirOp.java  |   3 +-
 .../hdfs/server/namenode/FSDirRenameOp.java |  12 +--
 .../server/namenode/FSDirStatAndListingOp.java  |  27 ++---
 .../hdfs/server/namenode/FSDirSymlinkOp.java|   3 +-
 .../hdfs/server/namenode/FSDirTruncateOp.java   |   4 +-
 .../hdfs/server/namenode/FSDirWriteFileOp.java  |  15 +--
 .../hdfs/server/namenode/FSDirXAttrOp.java  |  13 +--
 .../hdfs/server/namenode/FSDirectory.java   | 100 +--
 .../hdfs/server/namenode/FSNamesystem.java  |  16 +--
 .../hdfs/server/namenode/TestINodeFile.java |  53 +-
 16 files changed, 118 insertions(+), 193 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03dea65e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
index 0c572b5..296bed2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
@@ -39,8 +39,7 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
-src = fsd.resolvePath(pc, src, pathComponents);
+src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
@@ -65,8 +64,7 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
-src = fsd.resolvePath(pc, src, pathComponents);
+src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
@@ -90,8 +88,7 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
-src = fsd.resolvePath(pc, src, pathComponents);
+src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
@@ -115,8 +112,7 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
-src = fsd.resolvePath(pc, src, pathComponents);
+src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
@@ -135,9 +131,8 @@ class FSDirAclOp {
   throws IOException {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
-byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-src = fsd.resolvePath(pc, src, pathComponents);
+src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
@@ -155,8 +150,7 @@ class FSDirAclOp {
   FSDirectory fsd, String src) throws IOException {
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
-src = fsd.resolvePath(pc, src, pathComponents);
+src = fsd.resolvePath(pc, src);
 String srcs = FSDirectory.normalizePath(src);
 fsd.readLock();
 try {


[18/50] [abbrv] hadoop git commit: HDFS-10567. Improve plan command help message. Contributed by Xiaobing Zhou.

2016-08-17 Thread subru
HDFS-10567. Improve plan command help message. Contributed by Xiaobing Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02abd131
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02abd131
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02abd131

Branch: refs/heads/YARN-2915
Commit: 02abd131b857a89d9fc21507296603120bb50810
Parents: 9daa997
Author: Anu Engineer 
Authored: Mon Aug 15 19:54:06 2016 -0700
Committer: Anu Engineer 
Committed: Mon Aug 15 19:58:57 2016 -0700

--
 .../apache/hadoop/hdfs/tools/DiskBalancer.java  | 29 
 1 file changed, 18 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02abd131/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
index 70912d0..1ed2fdc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
@@ -266,33 +266,40 @@ public class DiskBalancer extends Configured implements 
Tool {
   private void addPlanCommands(Options opt) {
 
 Option plan = OptionBuilder.withLongOpt(PLAN)
-.withDescription("creates a plan for datanode.")
+.withDescription("Hostname, IP address or UUID of datanode " +
+"for which a plan is created.")
 .hasArg()
 .create();
 getPlanOptions().addOption(plan);
 opt.addOption(plan);
 
 
-Option outFile = OptionBuilder.withLongOpt(OUTFILE)
-.hasArg()
-.withDescription("File to write output to, if not specified " +
-"defaults will be used.")
+Option outFile = OptionBuilder.withLongOpt(OUTFILE).hasArg()
+.withDescription(
+"Local path of file to write output to, if not specified "
++ "defaults will be used.")
 .create();
 getPlanOptions().addOption(outFile);
 opt.addOption(outFile);
 
-Option bandwidth = OptionBuilder.withLongOpt(BANDWIDTH)
-.hasArg()
-.withDescription("Maximum disk bandwidth to be consumed by " +
-"diskBalancer. e.g. 10")
+Option bandwidth = OptionBuilder.withLongOpt(BANDWIDTH).hasArg()
+.withDescription(
+"Maximum disk bandwidth (MB/s) in integer to be consumed by "
++ "diskBalancer. e.g. 10 MB/s.")
 .create();
 getPlanOptions().addOption(bandwidth);
 opt.addOption(bandwidth);
 
 Option threshold = OptionBuilder.withLongOpt(THRESHOLD)
 .hasArg()
-.withDescription("Percentage skew that we" +
-"tolerate before diskbalancer starts working e.g. 10")
+.withDescription("Percentage of data skew that is tolerated before"
++ " disk balancer starts working. For example, if"
++ " total data on a 2 disk node is 100 GB then disk"
++ " balancer calculates the expected value on each disk,"
++ " which is 50 GB. If the tolerance is 10% then data"
++ " on a single disk needs to be more than 60 GB"
++ " (50 GB + 10% tolerance value) for Disk balancer to"
++ " balance the disks.")
 .create();
 getPlanOptions().addOption(threshold);
 opt.addOption(threshold);





[12/50] [abbrv] hadoop git commit: YARN-5521. Fix random failure of TestCapacityScheduler#testKillAllAppsInQueue (sandflee via Varun Saxena)

2016-08-17 Thread subru
YARN-5521. Fix random failure of TestCapacityScheduler#testKillAllAppsInQueue 
(sandflee via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/24249115
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/24249115
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/24249115

Branch: refs/heads/YARN-2915
Commit: 24249115bff3162c4202387da5bdd8eba13e6961
Parents: 83e57e0
Author: Varun Saxena 
Authored: Tue Aug 16 00:03:16 2016 +0530
Committer: Varun Saxena 
Committed: Tue Aug 16 00:03:29 2016 +0530

--
 .../resourcemanager/scheduler/capacity/TestCapacityScheduler.java   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/24249115/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
index 0b52b86..09c16d0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
@@ -2184,6 +2184,7 @@ public class TestCapacityScheduler {
 
 // check postconditions
 rm.waitForState(app.getApplicationId(), RMAppState.KILLED);
+rm.waitForAppRemovedFromScheduler(app.getApplicationId());
 appsInRoot = scheduler.getAppsInQueue("root");
 assertTrue(appsInRoot.isEmpty());
 





[02/50] [abbrv] hadoop git commit: HADOOP-13190. Mention LoadBalancingKMSClientProvider in KMS HA documentation. Contributed by Wei-Chiu Chuang.

2016-08-17 Thread subru
HADOOP-13190. Mention LoadBalancingKMSClientProvider in KMS HA documentation. 
Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db719ef1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db719ef1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db719ef1

Branch: refs/heads/YARN-2915
Commit: db719ef125b11b01eab3353e2dc4b48992bf88d5
Parents: d892ae9
Author: Wei-Chiu Chuang 
Authored: Thu Aug 11 12:27:09 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Aug 11 12:27:09 2016 -0700

--
 .../hadoop-kms/src/site/markdown/index.md.vm| 66 +---
 1 file changed, 59 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db719ef1/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
--
diff --git a/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm 
b/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
index d50b0f4..889dbaf1 100644
--- a/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
@@ -19,6 +19,8 @@
 Hadoop Key Management Server (KMS) - Documentation Sets
 ===
 
+
+
 Hadoop KMS is a cryptographic key management server based on Hadoop's 
**KeyProvider** API.
 
It provides client and server components which communicate over HTTP using 
a REST API.
@@ -34,6 +36,18 @@ KMS Client Configuration
 
 The KMS client `KeyProvider` uses the **kms** scheme, and the embedded URL 
must be the URL of the KMS. For example, for a KMS running on 
`http://localhost:9600/kms`, the KeyProvider URI is 
`kms://http@localhost:9600/kms`. And, for a KMS running on 
`https://localhost:9600/kms`, the KeyProvider URI is 
`kms://https@localhost:9600/kms`
 
+The following example configures the HDFS NameNode as a KMS client in
+`hdfs-site.xml`:
+
+<property>
+  <name>dfs.encryption.key.provider.uri</name>
+  <value>kms://http@localhost:9600/kms</value>
+  <description>
+    The KeyProvider to use when interacting with encryption keys used
+    when reading and writing to an encryption zone.
+  </description>
+</property>
+
 KMS
 ---
 
@@ -623,13 +637,51 @@ Additionally, KMS delegation token secret manager can be 
configured with the fol
   
 ```
 
-$H3 Using Multiple Instances of KMS Behind a Load-Balancer or VIP
-
-KMS supports multiple KMS instances behind a load-balancer or VIP for 
scalability and for HA purposes.
-
-When using multiple KMS instances behind a load-balancer or VIP, requests from 
the same user may be handled by different KMS instances.
-
-KMS instances behind a load-balancer or VIP must be specially configured to 
work properly as a single logical service.
+$H3 High Availability
+
+Multiple KMS instances may be used to provide high availability and 
scalability.
+Currently there are two approaches to supporting multiple KMS instances:
+running KMS instances behind a load-balancer/VIP,
+or using LoadBalancingKMSClientProvider.
+
+In both approaches, KMS instances must be specially configured to work properly
+as a single logical service, because requests from the same client may be
+handled by different KMS instances. In particular,
+Kerberos Principals Configuration, HTTP Authentication Signature and Delegation
+Tokens require special attention.
+
+$H4 Behind a Load-Balancer or VIP
+
+Because KMS clients and servers communicate via a REST API over HTTP,
+a load-balancer or VIP may be used to distribute incoming traffic to achieve
+scalability and HA. In this mode, clients are unaware of multiple KMS instances
+on the server side.
+
+$H4 Using LoadBalancingKMSClientProvider
+
+An alternative to running multiple KMS instances behind a load-balancer or VIP
+is to use LoadBalancingKMSClientProvider. Using this approach, a KMS client
+(for example, an HDFS NameNode) is aware of multiple KMS instances, and it sends
+requests to them in a round-robin fashion. LoadBalancingKMSClientProvider is
+implicitly used when more than one URI is specified in
+`dfs.encryption.key.provider.uri`.
+
+The following example in `hdfs-site.xml` configures two KMS
+instances, `kms01.example.com` and `kms02.example.com`.
+The hostnames are separated by semi-colons, and all KMS instances must run
+on the same port.
+
+<property>
+  <name>dfs.encryption.key.provider.uri</name>
+  <value>kms://http@kms01.example.com;kms02.example.com:9600/kms</value>
+  <description>
+    The KeyProvider to use when interacting with encryption keys used
+    when reading and writing to an encryption zone.
+  </description>
+</property>
+
+If a request to a KMS instance fails, clients retry with the next instance. The
+request is returned as failure only if all instances fail.
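A client-side illustration of the paragraph above; the hosts are the same hypothetical ones used in the documentation example, and this sketch only sets the configuration:

    import org.apache.hadoop.conf.Configuration;

    public class KmsClientConf {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Listing more than one host in the URI is what implicitly selects
        // LoadBalancingKMSClientProvider; no other client change is needed.
        conf.set("dfs.encryption.key.provider.uri",
            "kms://http@kms01.example.com;kms02.example.com:9600/kms");
        System.out.println(conf.get("dfs.encryption.key.provider.uri"));
      }
    }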

[04/50] [abbrv] hadoop git commit: HADOOP-13410. RunJar adds the content of the jar twice to the classpath (Yuanbo Liu via sjlee)

2016-08-17 Thread subru
HADOOP-13410. RunJar adds the content of the jar twice to the classpath (Yuanbo 
Liu via sjlee)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4d3ea92f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4d3ea92f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4d3ea92f

Branch: refs/heads/YARN-2915
Commit: 4d3ea92f4fe4be2a0ee9849c65cb1c91b0c5711b
Parents: 874577a
Author: Sangjin Lee 
Authored: Thu Aug 11 19:56:58 2016 -0700
Committer: Sangjin Lee 
Committed: Thu Aug 11 19:56:58 2016 -0700

--
 .../src/main/java/org/apache/hadoop/util/RunJar.java   | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d3ea92f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
index 19b51ad..d91a78b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
@@ -226,7 +226,7 @@ public class RunJar {
 
 unJar(file, workDir);
 
-ClassLoader loader = createClassLoader(file, workDir);
+ClassLoader loader = createClassLoader(workDir);
 
 Thread.currentThread().setContextClassLoader(loader);
 Class mainClass = Class.forName(mainClassName, true, loader);
@@ -250,14 +250,13 @@ public class RunJar {
* the user jar as well as the HADOOP_CLASSPATH. Otherwise, it creates a
* classloader that simply adds the user jar to the classpath.
*/
-  private ClassLoader createClassLoader(File file, final File workDir)
+  private ClassLoader createClassLoader(final File workDir)
   throws MalformedURLException {
 ClassLoader loader;
 // see if the client classloader is enabled
 if (useClientClassLoader()) {
   StringBuilder sb = new StringBuilder();
   sb.append(workDir).append("/").
-  append(File.pathSeparator).append(file).
   append(File.pathSeparator).append(workDir).append("/classes/").
   append(File.pathSeparator).append(workDir).append("/lib/*");
   // HADOOP_CLASSPATH is added to the client classpath
@@ -277,7 +276,6 @@ public class RunJar {
 } else {
   List classPath = new ArrayList<>();
   classPath.add(new File(workDir + "/").toURI().toURL());
-  classPath.add(file.toURI().toURL());
   classPath.add(new File(workDir, "classes/").toURI().toURL());
   File[] libs = new File(workDir, "lib").listFiles();
   if (libs != null) {





[26/50] [abbrv] hadoop git commit: HADOOP-13419. Fix javadoc warnings by JDK8 in hadoop-common package. Contributed by Kai Sasaki.

2016-08-17 Thread subru
HADOOP-13419. Fix javadoc warnings by JDK8 in hadoop-common package. 
Contributed by Kai Sasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b8a446ba
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b8a446ba
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b8a446ba

Branch: refs/heads/YARN-2915
Commit: b8a446ba57d89c0896ec2d56dd919b0101e69f44
Parents: 4d4d95f
Author: Masatake Iwasaki 
Authored: Tue Aug 16 13:30:40 2016 +0900
Committer: Masatake Iwasaki 
Committed: Tue Aug 16 13:30:40 2016 +0900

--
 .../java/org/apache/hadoop/fs/FileContext.java  |  4 +-
 .../apache/hadoop/io/retry/package-info.java| 22 +
 .../org/apache/hadoop/io/retry/package.html | 48 
 .../org/apache/hadoop/ipc/package-info.java |  4 ++
 .../java/org/apache/hadoop/ipc/package.html | 23 --
 5 files changed, 28 insertions(+), 73 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8a446ba/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index e6a4cf4..f235773 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -303,7 +303,7 @@ public class FileContext {
* 
* @throws UnsupportedFileSystemException If the file system for
*   absOrFqPath is not supported.
-   * @throws IOExcepton If the file system for absOrFqPath could
+   * @throws IOException If the file system for absOrFqPath could
* not be instantiated.
*/
   protected AbstractFileSystem getFSofPath(final Path absOrFqPath)
@@ -2713,7 +2713,7 @@ public class FileContext {
   /**
* Query the effective storage policy ID for the given file or directory.
*
-   * @param src file or directory path.
+   * @param path file or directory path.
* @return storage policy for give file.
* @throws IOException
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8a446ba/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package-info.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package-info.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package-info.java
index 693065f..089cf6f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package-info.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package-info.java
@@ -15,6 +15,28 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
+/**
+ * A mechanism for selectively retrying methods that throw exceptions under
+ * certain circumstances.
+ * Typical usage is
+ * <pre>
+ *  UnreliableImplementation unreliableImpl = new UnreliableImplementation();
+ *  UnreliableInterface unreliable = (UnreliableInterface)
+ *  RetryProxy.create(UnreliableInterface.class, unreliableImpl,
+ *  RetryPolicies.retryUpToMaximumCountWithFixedSleep(4, 10,
+ *  TimeUnit.SECONDS));
+ *  unreliable.call();
+ * </pre>
+ *
+ * This will retry any method called on <code>unreliable</code> four times -
+ * in this case the <code>call()</code> method - sleeping 10 seconds between
+ * each retry. There are a number of
+ * {@link org.apache.hadoop.io.retry.RetryPolicies retry policies}
+ * available, or you can implement a custom one by implementing
+ * {@link org.apache.hadoop.io.retry.RetryPolicy}.
+ * It is also possible to specify retry policies on a
+ * {@link org.apache.hadoop.io.retry.RetryProxy#create(Class, Object, Map)
+ * per-method basis}.
+ */
 @InterfaceAudience.LimitedPrivate({"HBase", "HDFS", "MapReduce"})
 @InterfaceStability.Evolving
 package org.apache.hadoop.io.retry;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8a446ba/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package.html
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package.html
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package.html
deleted file mode 100644
index ae553fc..000
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package.html
+++ /dev/null
@@ -1,48 +0,0 @@
-
-

[20/50] [abbrv] hadoop git commit: HDFS-10725. Caller context should always be constructed by a builder. (Contributed by Mingliang Liu)

2016-08-17 Thread subru
HDFS-10725. Caller context should always be constructed by a builder. 
(Contributed by Mingliang Liu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/12ad63d7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/12ad63d7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/12ad63d7

Branch: refs/heads/YARN-2915
Commit: 12ad63d7232ca72be9eff5680d974fc16999aac3
Parents: 5628b36
Author: Mingliang Liu 
Authored: Mon Aug 15 20:13:20 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Aug 15 20:14:05 2016 -0700

--
 .../src/main/java/org/apache/hadoop/ipc/CallerContext.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/12ad63d7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
index b197575..3d21bfe 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
@@ -35,7 +35,7 @@ import java.util.Arrays;
 @InterfaceAudience.LimitedPrivate({"HBase", "HDFS", "Hive", "MapReduce",
 "Pig", "YARN"})
 @InterfaceStability.Evolving
-public class CallerContext {
+public final class CallerContext {
   public static final Charset SIGNATURE_ENCODING = StandardCharsets.UTF_8;
   /** The caller context.
*
@@ -54,7 +54,7 @@ public class CallerContext {
*/
   private final byte[] signature;
 
-  public CallerContext(Builder builder) {
+  private CallerContext(Builder builder) {
 this.context = builder.context;
 this.signature = builder.signature;
   }

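With the constructor now private, the builder is the only way to obtain an instance. A minimal sketch of the resulting call pattern; the context string is a made-up example, and the setSignature and setCurrent calls come from the surrounding class rather than this diff:

    import org.apache.hadoop.ipc.CallerContext;

    public class CallerContextDemo {
      public static void main(String[] args) {
        // new CallerContext(...) no longer compiles outside the class;
        // all construction goes through the builder.
        CallerContext context = new CallerContext.Builder("demo_job_0042")
            .setSignature("sig".getBytes(CallerContext.SIGNATURE_ENCODING))
            .build();
        CallerContext.setCurrent(context);
      }
    }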




[10/50] [abbrv] hadoop git commit: HDFS-10737. disk balancer add volume path to report command. Contributed by Yuanbo Liu.

2016-08-17 Thread subru
HDFS-10737. disk balancer add volume path to report command. Contributed by 
Yuanbo Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9f29f423
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9f29f423
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9f29f423

Branch: refs/heads/YARN-2915
Commit: 9f29f423e426e2d42e650cbed88e46c1c29a2a63
Parents: d677b68
Author: Anu Engineer 
Authored: Mon Aug 15 09:47:30 2016 -0700
Committer: Anu Engineer 
Committed: Mon Aug 15 09:47:30 2016 -0700

--
 .../server/diskbalancer/command/Command.java| 35 ++
 .../diskbalancer/command/PlanCommand.java   | 34 --
 .../diskbalancer/command/ReportCommand.java |  4 +-
 .../command/TestDiskBalancerCommand.java| 48 
 4 files changed, 86 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9f29f423/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
index de77365..3110c1a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
@@ -33,13 +33,17 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol;
+import org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerConstants;
 import org.apache.hadoop.hdfs.server.diskbalancer.connectors.ClusterConnector;
 import org.apache.hadoop.hdfs.server.diskbalancer.connectors.ConnectorFactory;
 import 
org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster;
 import 
org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerDataNode;
+import org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume;
+import 
org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolumeSet;
 import org.apache.hadoop.hdfs.tools.DiskBalancer;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.codehaus.jackson.map.ObjectMapper;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -421,6 +425,37 @@ public abstract class Command extends Configured {
   }
 
   /**
+   * Reads the physical paths of the disks being balanced. This is needed only
+   * to make the disk balancer output human-friendly; the paths are not used in
+   * balancing itself.
+   *
+   * @param node - Disk Balancer Node.
+   */
+  protected void populatePathNames(
+  DiskBalancerDataNode node) throws IOException {
+// if the cluster is a local file system, there is no need to
+// invoke rpc call to dataNode.
+if (getClusterURI().getScheme().startsWith("file")) {
+  return;
+}
+String dnAddress = node.getDataNodeIP() + ":" + node.getDataNodePort();
+ClientDatanodeProtocol dnClient = getDataNodeProxy(dnAddress);
+String volumeNameJson = dnClient.getDiskBalancerSetting(
+DiskBalancerConstants.DISKBALANCER_VOLUME_NAME);
+ObjectMapper mapper = new ObjectMapper();
+
+@SuppressWarnings("unchecked")
+Map volumeMap =
+mapper.readValue(volumeNameJson, HashMap.class);
+for (DiskBalancerVolumeSet set : node.getVolumeSets().values()) {
+  for (DiskBalancerVolume vol : set.getVolumes()) {
+if (volumeMap.containsKey(vol.getUuid())) {
+  vol.setPath(volumeMap.get(vol.getUuid()));
+}
+  }
+}
+  }
+
+  /**
* Set top number of nodes to be processed.
* */
   public void setTopNodes(int topNodes) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9f29f423/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
index 3159312..72ad2c6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
+++ 

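The populatePathNames helper above reduces to one Jackson call: deserializing the datanode's JSON reply into a UUID-to-path map. A self-contained sketch of that step with a made-up JSON literal:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.codehaus.jackson.map.ObjectMapper;

    public class VolumeMapDemo {
      public static void main(String[] args) throws IOException {
        String volumeNameJson =
            "{\"uuid-1\":\"/data/disk1\",\"uuid-2\":\"/data/disk2\"}";
        ObjectMapper mapper = new ObjectMapper();
        // The unchecked cast mirrors the one in Command#populatePathNames.
        @SuppressWarnings("unchecked")
        Map<String, String> volumeMap =
            mapper.readValue(volumeNameJson, HashMap.class);
        System.out.println(volumeMap.get("uuid-1"));  // prints /data/disk1
      }
    }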
[25/50] [abbrv] hadoop git commit: HDFS-10641. TestBlockManager#testBlockReportQueueing fails intermittently. (Contributed by Daryn Sharp)

2016-08-17 Thread subru
HDFS-10641. TestBlockManager#testBlockReportQueueing fails intermittently. 
(Contributed by Daryn Sharp)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4d4d95fd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4d4d95fd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4d4d95fd

Branch: refs/heads/YARN-2915
Commit: 4d4d95fdd5e1e985c16005adc45517cc8b549ae8
Parents: 9336a04
Author: Mingliang Liu 
Authored: Mon Aug 15 20:28:40 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Aug 15 20:28:40 2016 -0700

--
 .../hadoop/hdfs/server/blockmanagement/TestBlockManager.java | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d4d95fd/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index 535acd7..1f58f99 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -1013,6 +1013,7 @@ public class TestBlockManager {
 
   final CyclicBarrier startBarrier = new CyclicBarrier(2);
   final CountDownLatch endLatch = new CountDownLatch(3);
+  final CountDownLatch doneLatch = new CountDownLatch(1);
 
   // create a task intended to block while processing, thus causing
   // the queue to backup.  simulates how a full BR is processed.
@@ -1020,7 +1021,7 @@ public class TestBlockManager {
   new Callable(){
 @Override
 public Void call() throws IOException {
-  return bm.runBlockOp(new Callable() {
+  bm.runBlockOp(new Callable() {
 @Override
 public Void call()
 throws InterruptedException, BrokenBarrierException {
@@ -1030,6 +1031,9 @@ public class TestBlockManager {
   return null;
 }
   });
+  // signal that runBlockOp returned
+  doneLatch.countDown();
+  return null;
 }
   });
 
@@ -1074,7 +1078,7 @@ public class TestBlockManager {
   startBarrier.await(1, TimeUnit.SECONDS);
   assertTrue(endLatch.await(1, TimeUnit.SECONDS));
   assertEquals(0, bm.getBlockOpQueueLength());
-  assertTrue(blockingOp.isDone());
+  assertTrue(doneLatch.await(1, TimeUnit.SECONDS));
 } finally {
   cluster.shutdown();
 }

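The fix swaps a Future#isDone assertion for a latch that the submitted task counts down itself, so the test waits on the actual completion hand-off instead of a racy poll. A standalone sketch of the signalling pattern:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class DoneLatchDemo {
      public static void main(String[] args) throws InterruptedException {
        final CountDownLatch doneLatch = new CountDownLatch(1);
        ExecutorService exec = Executors.newSingleThreadExecutor();
        exec.submit(new Runnable() {
          @Override
          public void run() {
            // ... the blocking work would happen here ...
            doneLatch.countDown();  // explicit completion signal
          }
        });
        // Await the signal with a timeout rather than polling isDone().
        System.out.println(doneLatch.await(1, TimeUnit.SECONDS));  // true
        exec.shutdown();
      }
    }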




[06/50] [abbrv] hadoop git commit: HDFS-10731. FSDirectory#verifyMaxDirItems does not log path name. Contributed by Wei-Chiu Chuang.

2016-08-17 Thread subru
HDFS-10731. FSDirectory#verifyMaxDirItems does not log path name. Contributed 
by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9019606b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9019606b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9019606b

Branch: refs/heads/YARN-2915
Commit: 9019606b69bfb7019c8642b6cbcbb93645cc19e3
Parents: b5af9be
Author: Wei-Chiu Chuang 
Authored: Thu Aug 11 14:43:48 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Fri Aug 12 05:58:13 2016 -0700

--
 .../java/org/apache/hadoop/hdfs/protocol/FSLimitException.java   | 4 +++-
 .../java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java | 4 ++--
 2 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9019606b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSLimitException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSLimitException.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSLimitException.java
index 2ee11f0..347d892 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSLimitException.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FSLimitException.java
@@ -87,8 +87,10 @@ public abstract class FSLimitException extends 
QuotaExceededException {
   super(msg);
 }
 
-public MaxDirectoryItemsExceededException(long quota, long count) {
+public MaxDirectoryItemsExceededException(String path, long quota,
+long count) {
   super(quota, count);
+  setPathName(path);
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9019606b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index b71b8d6..11a7899 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -991,9 +991,9 @@ public class FSDirectory implements Closeable {
 final int count = parent.getChildrenList(CURRENT_STATE_ID).size();
 if (count >= maxDirItems) {
   final MaxDirectoryItemsExceededException e
-  = new MaxDirectoryItemsExceededException(maxDirItems, count);
+  = new MaxDirectoryItemsExceededException(parentPath, maxDirItems,
+  count);
   if (namesystem.isImageLoaded()) {
-e.setPathName(parentPath);
 throw e;
   } else {
 // Do not throw if edits log is still being processed





[03/50] [abbrv] hadoop git commit: YARN-4833. For Queue AccessControlException client retries multiple times on both RM. Contributed by Bibin A Chundatt

2016-08-17 Thread subru
YARN-4833. For Queue AccessControlException client retries multiple times on 
both RM. Contributed by Bibin A Chundatt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/874577a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/874577a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/874577a6

Branch: refs/heads/YARN-2915
Commit: 874577a67df8a49243586909d866034df4e3b276
Parents: db719ef
Author: Naganarasimha 
Authored: Fri Aug 12 01:08:24 2016 +0530
Committer: Naganarasimha 
Committed: Fri Aug 12 01:09:41 2016 +0530

--
 .../server/resourcemanager/RMAppManager.java|  9 ++--
 .../server/resourcemanager/TestAppManager.java  | 43 
 2 files changed, 47 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/874577a6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
index 49daedb..136dee0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
@@ -285,7 +285,7 @@ public class RMAppManager implements 
EventHandler,
   @SuppressWarnings("unchecked")
   protected void submitApplication(
   ApplicationSubmissionContext submissionContext, long submitTime,
-  String user) throws YarnException, AccessControlException {
+  String user) throws YarnException {
 ApplicationId applicationId = submissionContext.getApplicationId();
 
 // Passing start time as -1. It will be eventually set in RMAppImpl
@@ -336,8 +336,7 @@ public class RMAppManager implements 
EventHandler,
 
   private RMAppImpl createAndPopulateNewRMApp(
   ApplicationSubmissionContext submissionContext, long submitTime,
-  String user, boolean isRecovery, long startTime)
-  throws YarnException, AccessControlException {
+  String user, boolean isRecovery, long startTime) throws YarnException {
 // Do queue mapping
 if (!isRecovery) {
   if (rmContext.getQueuePlacementManager() != null) {
@@ -380,9 +379,9 @@ public class RMAppManager implements 
EventHandler,
   SchedulerUtils.toAccessType(QueueACL.ADMINISTER_QUEUE),
   applicationId.toString(), appName, Server.getRemoteAddress(),
   null))) {
-throw new AccessControlException(
+throw RPCUtil.getRemoteException(new AccessControlException(
 "User " + user + " does not have permission to submit "
-+ applicationId + " to queue " + submissionContext.getQueue());
++ applicationId + " to queue " + 
submissionContext.getQueue()));
   }
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/874577a6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
index c3d04de..68f1e22 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
@@ -39,14 +39,18 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.service.Service;
 import 

[14/50] [abbrv] hadoop git commit: HADOOP-13333. testConf.xml ls comparators in wrong order (Vrushali C via Varun Saxena)

2016-08-17 Thread subru
HADOOP-13333. testConf.xml ls comparators in wrong order (Vrushali C via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d714030b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d714030b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d714030b

Branch: refs/heads/YARN-2915
Commit: d714030b5d4124f307c09d716d72a9f5a4a25995
Parents: bed69d1
Author: Varun Saxena 
Authored: Tue Aug 16 03:03:44 2016 +0530
Committer: Varun Saxena 
Committed: Tue Aug 16 03:03:44 2016 +0530

--
 .../hadoop-common/src/test/resources/testConf.xml| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d714030b/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml 
b/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
index bbbc1ec..82bc789 100644
--- a/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
+++ b/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
@@ -106,11 +106,11 @@
         </comparator>
         <comparator>
           <type>RegexpComparator</type>
-          <expected-output>^\s*-q\s+Print \? instead of non-printable characters\.( )*</expected-output>
+          <expected-output>^\s*rather than a number of bytes\.( )*</expected-output>
         </comparator>
         <comparator>
           <type>RegexpComparator</type>
-          <expected-output>^\s*rather than a number of bytes\.( )*</expected-output>
+          <expected-output>^\s*-q\s+Print \? instead of non-printable characters\.( )*</expected-output>
         </comparator>
         <comparator>
           <type>RegexpComparator</type>





[22/50] [abbrv] hadoop git commit: HDFS-10678. Documenting NNThroughputBenchmark tool. (Contributed by Mingliang Liu)

2016-08-17 Thread subru
HDFS-10678. Documenting NNThroughputBenchmark tool. (Contributed by Mingliang 
Liu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/382d6152
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/382d6152
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/382d6152

Branch: refs/heads/YARN-2915
Commit: 382d6152602339fe58169b2918ec74e7a7cd5581
Parents: 4bcbef3
Author: Mingliang Liu 
Authored: Mon Aug 15 20:22:14 2016 -0700
Committer: Mingliang Liu 
Committed: Mon Aug 15 20:22:14 2016 -0700

--
 .../src/site/markdown/Benchmarking.md   | 106 +++
 .../server/namenode/NNThroughputBenchmark.java  |  32 +-
 hadoop-project/src/site/site.xml|   1 +
 3 files changed, 110 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/382d6152/hadoop-common-project/hadoop-common/src/site/markdown/Benchmarking.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/Benchmarking.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/Benchmarking.md
new file mode 100644
index 000..678dcee
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Benchmarking.md
@@ -0,0 +1,106 @@
+
+
+# Hadoop Benchmarking
+
+
+
+This page discusses benchmarking Hadoop using the tools it provides.
+
+## NNThroughputBenchmark
+
+### Overview
+
+**NNThroughputBenchmark**, as its name indicates, is a name-node throughput 
benchmark, which runs a series of client threads on a single node against a 
name-node. If no name-node is configured, it will first start a name-node in 
the same process (_standalone mode_), in which case each client repetitively 
performs the same operation by directly calling the respective name-node 
methods. Otherwise, the benchmark will perform the operations against a remote 
name-node via client protocol RPCs (_remote mode_). Either way, all clients are 
running locally in a single process rather than remotely across different 
nodes. The reason is to avoid communication overhead caused by RPC connections 
and serialization, and thus reveal the upper bound of pure name-node 
performance.
+
+The benchmark first generates inputs for each thread so that the input 
generation overhead does not affect the resulting statistics. The number of 
operations performed by threads is practically the same. More precisely, the 
difference between the number of operations performed by any two threads does 
not exceed 1. Then the benchmark executes the specified number of operations 
using the specified number of threads and outputs the resulting stats by 
measuring the number of operations performed by the name-node per second.
+
+### Commands
+
+The general command line syntax is:
+
+`hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark [genericOptions] [commandOptions]`
+
+#### Generic Options
+
+This benchmark honors the [Hadoop command-line Generic Options](CommandsManual.html#Generic_Options)
to alter its behavior. Like other tools, the benchmark relies on the 
`fs.defaultFS` config, which can be overridden by the `-fs` command option, 
to choose between standalone and remote mode. If the `fs.defaultFS` scheme is 
not specified or is `file` (local), the benchmark runs in _standalone mode_. 
Note that the _remote_ name-node config `dfs.namenode.fs-limits.min-block-size` 
should be set to 16, while in _standalone mode_ the benchmark turns off minimum 
block size verification for its internal name-node.
+
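+As an illustrative sketch (the host name and the operation counts below are placeholder values, not defaults from this page), a remote-mode run against an existing name-node could look like:
+
+    hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark \
+        -fs hdfs://namenode.example.com:8020 -op create -threads 3 -files 10
+
+Omitting `-fs` (with no `fs.defaultFS` configured) would run the same operation in _standalone mode_ instead.
+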
+#### Command Options
+
+The following are all supported command options:
+
+| COMMAND\_OPTION | Description |
+|:---- |:---- |
+|`-op` | Specify the operation. This option must be provided and should be the first option. |
+|`-logLevel` | Specify the logging level when the benchmark runs. The default logging level is ERROR. |
+|`-UGCacheRefreshCount` | After every specified number of operations, the benchmark purges the name-node's user group cache. By default the refresh is never called. |
+|`-keepResults` | If specified, do not clean up the name-space after execution. By default the name-space is removed after the test. |
+
+#### Operations Supported
+
+The following are all the supported operations, along with their respective operation-specific parameters (all optional) and default values.
+
+| OPERATION\_OPTION | Operation-specific parameters |
+|:---- |:---- |
+|`all` | _options for other operations_ |
+|`create` | [`-threads 3`] [`-files 10`] [`-filesPerDir 4`] [`-close`] |
+|`mkdirs` | [`-threads 3`] [`-dirs 10`] [`-dirsPerDir 2`] |
+|`open` | [`-threads 3`] [`-files 10`] [`-filesPerDir 4`] [`-useExisting`] |
+|`delete` | [`-threads 3`] [`-files 10`] [`-filesPerDir 4`] 

[16/50] [abbrv] hadoop git commit: HDFS-10763. Open files can leak permanently due to inconsistent lease update. Contributed by Kihwal Lee.

2016-08-17 Thread subru
HDFS-10763. Open files can leak permanently due to inconsistent lease update. 
Contributed by Kihwal Lee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/864f878d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/864f878d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/864f878d

Branch: refs/heads/YARN-2915
Commit: 864f878d5912c82f3204f1582cfb7eb7c9f1a1da
Parents: 03dea65
Author: Kihwal Lee 
Authored: Mon Aug 15 17:28:09 2016 -0500
Committer: Kihwal Lee 
Committed: Mon Aug 15 17:28:09 2016 -0500

--
 .../server/namenode/FSImageFormatPBINode.java   | 10 ++---
 .../hdfs/server/namenode/FSNamesystem.java  |  3 +-
 .../hdfs/server/namenode/TestLeaseManager.java  | 47 
 3 files changed, 53 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/864f878d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
index 1ecd947..1456ecf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
@@ -281,18 +281,14 @@ public final class FSImageFormatPBINode {
  * Load the under-construction files section, and update the lease map
  */
 void loadFilesUnderConstructionSection(InputStream in) throws IOException {
+  // Leases are added when the inode section is loaded. This section is
+  // still read in for compatibility reasons.
   while (true) {
 FileUnderConstructionEntry entry = FileUnderConstructionEntry
 .parseDelimitedFrom(in);
 if (entry == null) {
   break;
 }
-// update the lease manager
-INodeFile file = dir.getInode(entry.getInodeId()).asFile();
-FileUnderConstructionFeature uc = 
file.getFileUnderConstructionFeature();
-Preconditions.checkState(uc != null); // file must be 
under-construction
-fsn.leaseManager.addLease(uc.getClientName(),
-entry.getInodeId());
   }
 }
 
@@ -371,6 +367,8 @@ public final class FSImageFormatPBINode {
   if (f.hasFileUC()) {
 INodeSection.FileUnderConstructionFeature uc = f.getFileUC();
 file.toUnderConstruction(uc.getClientName(), uc.getClientMachine());
+// update the lease manager
+fsn.leaseManager.addLease(uc.getClientName(), file.getId());
 if (blocks.length > 0) {
   BlockInfo lastBlk = file.getLastBlock();
   // replace the last block of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/864f878d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index be084c5..0621a77 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -3329,7 +3329,6 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   throw new IOException("Cannot finalize file " + src
   + " because it is not under construction");
 }
-leaseManager.removeLease(uc.getClientName(), pendingFile);
 
 pendingFile.recordModification(latestSnapshot);
 
@@ -3340,6 +3339,8 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 allowCommittedBlock? numCommittedAllowed: 0,
 blockManager.getMinReplication());
 
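+// Defer lease removal until the block-completeness checks above have passed;
+// removing it earlier could leave an under-construction file with no lease
+// left to recover if finalization fails part-way.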
+leaseManager.removeLease(uc.getClientName(), pendingFile);
+
 // close file and persist block allocations for this file
 closeFile(src, pendingFile);
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/864f878d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestLeaseManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestLeaseManager.java
 

[33/50] [abbrv] hadoop git commit: HADOOP-13324. s3a tests don't authenticate with S3 frankfurt (or other V4 auth only endpoints). Contributed by Steve Loughran.

2016-08-17 Thread subru
HADOOP-13324. s3a tests don't authenticate with S3 frankfurt (or other V4 auth 
only endpoints). Contributed by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3808876c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3808876c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3808876c

Branch: refs/heads/YARN-2915
Commit: 3808876c7397ea68906bc5cc18fdf690c9c42131
Parents: 27a6e09
Author: Chris Nauroth 
Authored: Tue Aug 16 17:05:52 2016 -0700
Committer: Chris Nauroth 
Committed: Tue Aug 16 17:05:52 2016 -0700

--
 .../src/site/markdown/tools/hadoop-aws/index.md | 247 ---
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java  |  15 ++
 .../fs/s3a/TestS3AAWSCredentialsProvider.java   |   1 +
 .../hadoop/fs/s3a/scale/S3AScaleTestBase.java   |  15 +-
 .../scale/TestS3AInputStreamPerformance.java|   2 +
 5 files changed, 247 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3808876c/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
--
diff --git 
a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md 
b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
index 01a2bae..cb1df83 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
@@ -634,6 +634,60 @@ this capability.
   any call to setReadahead() is made to an open stream.
 
 
+### Working with buckets in different regions
+
+S3 buckets are hosted in different regions, the default being US East.
+By default the client talks to that region's endpoint, under the URL `s3.amazonaws.com`.
+
+S3A can work with buckets from any region. Each region has its own
+S3 endpoint, documented [by 
Amazon](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).
+
+1. Applications running in EC2 infrastructure do not pay for IO to/from
+*local S3 buckets*. They will be billed for access to remote buckets. Always
+use local buckets and local copies of data, wherever possible.
+1. The default S3 endpoint can support data IO with any bucket when the V1 
request
+signing protocol is used.
+1. When the V4 signing protocol is used, AWS requires the explicit region endpoint to be used, so S3A must be configured to use the specific endpoint. This is done via the configuration option `fs.s3a.endpoint`.
+1. All endpoints other than the default endpoint only support interaction
+with buckets local to that S3 instance.
+
+While it is generally simpler to use the default endpoint, working with V4-signing-only regions (Frankfurt, Seoul) requires the endpoint to be identified explicitly. Expect better performance from direct connections; traceroute will give you some insight.
+
+Examples:
+
+The default endpoint:
+
+```xml
+<property>
+  <name>fs.s3a.endpoint</name>
+  <value>s3.amazonaws.com</value>
+</property>
+```
+
+Frankfurt
+
+```xml
+<property>
+  <name>fs.s3a.endpoint</name>
+  <value>s3.eu-central-1.amazonaws.com</value>
+</property>
+```
+
+Seoul
+
+```xml
+<property>
+  <name>fs.s3a.endpoint</name>
+  <value>s3.ap-northeast-2.amazonaws.com</value>
+</property>
+```
+
+If the wrong endpoint is used, the request may fail. This may be reported as a 
301/redirect error,
+or as a 400 Bad Request.
+
 ### S3AFastOutputStream
  **Warning: NEW in hadoop 2.7. UNSTABLE, EXPERIMENTAL: use at own risk**
 
@@ -819,8 +873,6 @@ of environment-variable authentication by attempting to use 
the `hdfs fs` comman
 to read or write data on S3. That is: comment out the `fs.s3a` secrets and 
rely on
 the environment variables.
 
-S3 Frankfurt is a special case. It uses the V4 authentication API.
-
 ### Authentication failures running on Java 8u60+
 
 A change in the Java 8 JVM broke some of the `toString()` string generation
@@ -829,6 +881,106 @@ generate authentication headers suitable for validation 
by S3.
 
 Fix: make sure that the version of Joda Time is 2.8.1 or later.
 
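+If you manage the dependency yourself, an illustrative Maven snippet (standard `joda-time` coordinates; pick whatever version your build actually needs, 2.8.1 or later):
+
+```xml
+<dependency>
+  <groupId>joda-time</groupId>
+  <artifactId>joda-time</artifactId>
+  <version>2.8.1</version>
+</dependency>
+```
+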
+### "Bad Request" exception when working with AWS S3 Frankfurt, Seoul, or 
elsewhere
+
+
+S3 Frankfurt and Seoul *only* support
+[the V4 authentication 
API](http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html).
+
+Requests using the V2 API will be rejected with 400 `Bad Request`:
+
+```
+$ bin/hadoop fs -ls s3a://frankfurt/
+WARN s3a.S3AFileSystem: Client: Amazon S3 error 400: 400 Bad Request; Bad 
Request (retryable)
+
+com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: 
Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 
923C5D9E75E44C06), S3 Extended Request ID: 
HDwje6k+ANEeDsM6aJ8+D5gUmNAMguOk2BvZ8PH3g9z0gpH+IuwT7N19oQOnIr5CIx7Vqb/uThE=
+   at 
com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
+   at 

[31/50] [abbrv] hadoop git commit: HADOOP-13494. ReconfigurableBase can log sensitive information. Contributed by Sean Mackrory.

2016-08-17 Thread subru
HADOOP-13494. ReconfigurableBase can log sensitive information. Contributed by 
Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b689e7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b689e7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b689e7a

Branch: refs/heads/YARN-2915
Commit: 4b689e7a758a55cec2ca8398727feefc8ac21bfd
Parents: 6c154ab
Author: Andrew Wang 
Authored: Tue Aug 16 15:01:18 2016 -0700
Committer: Andrew Wang 
Committed: Tue Aug 16 15:01:18 2016 -0700

--
 .../org/apache/hadoop/conf/ConfigRedactor.java  | 84 
 .../apache/hadoop/conf/ReconfigurableBase.java  | 13 ++-
 .../fs/CommonConfigurationKeysPublic.java   | 14 
 .../src/main/resources/core-default.xml | 10 +++
 .../apache/hadoop/conf/TestConfigRedactor.java  | 72 +
 5 files changed, 190 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b689e7a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
new file mode 100644
index 000..0ba756c
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
@@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.conf;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.regex.Pattern;
+
+import static org.apache.hadoop.fs.CommonConfigurationKeys.*;
+
+/**
+ * Tool for redacting sensitive information when displaying config parameters.
+ *
+ * Some config parameters contain sensitive information (for example, cloud
+ * storage keys). When these properties are displayed in plaintext, we should
+ * redact their values as appropriate.
+ */
+public class ConfigRedactor {
+
+  private static final String REDACTED_TEXT = "<redacted>";
+
+  private List<Pattern> compiledPatterns;
+
+  public ConfigRedactor(Configuration conf) {
+String sensitiveRegexList = conf.get(
+HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS,
+HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS_DEFAULT);
+List<String> sensitiveRegexes = Arrays.asList(sensitiveRegexList.split(","));
+compiledPatterns = new ArrayList<Pattern>();
+for (String regex : sensitiveRegexes) {
+  Pattern p = Pattern.compile(regex);
+  compiledPatterns.add(p);
+}
+  }
+
+  /**
+   * Given a key / value pair, decides whether or not to redact and returns
+   * either the original value or text indicating it has been redacted.
+   *
+   * @param key the configuration key being displayed
+   * @param value the raw value associated with the key
+   * @return Original value, or text indicating it has been redacted
+   */
+  public String redact(String key, String value) {
+if (configIsSensitive(key)) {
+  return REDACTED_TEXT;
+}
+return value;
+  }
+
+  /**
+   * Matches given config key against patterns and determines whether or not
+   * it should be considered sensitive enough to redact in logs and other
+   * plaintext displays.
+   *
+   * @param key the configuration key to test
+   * @return True if parameter is considered sensitive
+   */
+  private boolean configIsSensitive(String key) {
+for (Pattern regex : compiledPatterns) {
+  if (regex.matcher(key).find()) {
+return true;
+  }
+}
+return false;
+  }
+}
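
A minimal usage sketch of the class above (the s3a key is just an example of
something the sensitive-key regexes might match; the actual patterns come from
hadoop.security.sensitive-config-keys):

    import org.apache.hadoop.conf.ConfigRedactor;
    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    ConfigRedactor redactor = new ConfigRedactor(conf);
    // A key matching a sensitive pattern comes back as the redaction marker.
    String hidden = redactor.redact("fs.s3a.secret.key", "SuperSecret123");
    // A non-sensitive key passes its value through unchanged.
    String shown = redactor.redact("fs.defaultFS", "hdfs://localhost:9000");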

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b689e7a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
index 

[08/50] [abbrv] hadoop git commit: HDFS-8668. Erasure Coding: revisit buffer used for encoding and decoding. Contributed by Sammi Chen

2016-08-17 Thread subru
HDFS-8668. Erasure Coding: revisit buffer used for encoding and decoding. 
Contributed by Sammi Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b5af9be7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b5af9be7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b5af9be7

Branch: refs/heads/YARN-2915
Commit: b5af9be72c72734d668f817c99d889031922a951
Parents: 4d3ea92
Author: Kai Zheng 
Authored: Sat Aug 13 13:52:37 2016 +0800
Committer: Kai Zheng 
Committed: Sat Aug 13 13:52:37 2016 +0800

--
 .../apache/hadoop/io/ElasticByteBufferPool.java |  1 +
 .../org/apache/hadoop/hdfs/DFSOutputStream.java | 52 +++-
 .../hadoop/hdfs/DFSStripedInputStream.java  | 18 ---
 .../hadoop/hdfs/DFSStripedOutputStream.java | 29 +--
 .../StripedBlockChecksumReconstructor.java  |  3 +-
 .../erasurecode/StripedBlockReader.java |  4 ++
 .../erasurecode/StripedBlockReconstructor.java  |  4 +-
 .../erasurecode/StripedBlockWriter.java | 22 +++--
 .../datanode/erasurecode/StripedReader.java | 11 -
 .../erasurecode/StripedReconstructor.java   | 13 -
 .../datanode/erasurecode/StripedWriter.java |  8 +++
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  8 +++
 .../hadoop/hdfs/TestDFSStripedOutputStream.java |  8 +++
 .../TestDFSStripedOutputStreamWithFailure.java  | 11 -
 .../hadoop/hdfs/TestReconstructStripedFile.java | 11 -
 15 files changed, 169 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5af9be7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
index 694fcbe..c35d608 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
@@ -101,6 +101,7 @@ public final class ElasticByteBufferPool implements 
ByteBufferPool {
 
   @Override
   public synchronized void putBuffer(ByteBuffer buffer) {
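+// Reset position and limit so the buffer re-enters the pool empty and
+// ready for reuse by the next caller.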
+buffer.clear();
 TreeMap tree = getBufferTree(buffer.isDirect());
 while (true) {
   Key key = new Key(buffer.capacity(), System.nanoTime());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5af9be7/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index cc919da..93aee0e 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs;
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.io.InterruptedIOException;
+import java.nio.ByteBuffer;
 import java.nio.channels.ClosedChannelException;
 import java.util.EnumSet;
 import java.util.concurrent.atomic.AtomicReference;
@@ -393,11 +394,47 @@ public class DFSOutputStream extends FSOutputSummer
   @Override
   protected synchronized void writeChunk(byte[] b, int offset, int len,
   byte[] checksum, int ckoff, int cklen) throws IOException {
+writeChunkPrepare(len, ckoff, cklen);
+
+currentPacket.writeChecksum(checksum, ckoff, cklen);
+currentPacket.writeData(b, offset, len);
+currentPacket.incNumChunks();
+getStreamer().incBytesCurBlock(len);
+
+// If packet is full, enqueue it for transmission
+if (currentPacket.getNumChunks() == currentPacket.getMaxChunks() ||
+getStreamer().getBytesCurBlock() == blockSize) {
+  enqueueCurrentPacketFull();
+}
+  }
+
+  /* Write the data chunk in buffer, starting at
+   * buffer.position, with a length of len > 0,
+   * and its checksum.
+   */
+  protected synchronized void writeChunk(ByteBuffer buffer, int len,
+  byte[] checksum, int ckoff, int cklen) throws IOException {
+writeChunkPrepare(len, ckoff, cklen);
+
+currentPacket.writeChecksum(checksum, ckoff, cklen);
+currentPacket.writeData(buffer, len);
+currentPacket.incNumChunks();
+getStreamer().incBytesCurBlock(len);
+
+// If packet is full, enqueue it for transmission
+if 

[1/2] hadoop git commit: YARN-5467. InputValidator for the FederationStateStore internal APIs. (Giovanni Matteo Fumarola via Subru)

2016-08-17 Thread subru
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2915 fc5d8395e -> 08ff90603


http://git-wip-us.apache.org/repos/asf/hadoop/blob/08ff9060/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/utils/TestFederationStateStoreInputValidator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/utils/TestFederationStateStoreInputValidator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/utils/TestFederationStateStoreInputValidator.java
new file mode 100644
index 000..13175ae
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/utils/TestFederationStateStoreInputValidator.java
@@ -0,0 +1,1265 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.federation.store.utils;
+
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import 
org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster;
+import 
org.apache.hadoop.yarn.server.federation.store.records.DeleteApplicationHomeSubClusterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SetSubClusterPolicyConfigurationRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterHeartbeatRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState;
+import 
org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterRequest;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Unit tests for FederationApplicationInputValidator,
+ * FederationMembershipInputValidator, and FederationPolicyInputValidator.
+ */
+public class TestFederationStateStoreInputValidator {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestFederationStateStoreInputValidator.class);
+
+  private static SubClusterId subClusterId;
+  private static String amRMServiceAddress;
+  private static String clientRMServiceAddress;
+  private static String rmAdminServiceAddress;
+  private static String rmWebServiceAddress;
+  private static int lastHeartBeat;
+  private static SubClusterState stateNew;
+  private static SubClusterState stateLost;
+  private static ApplicationId appId;
+  private static int lastStartTime;
+  private static String capability;
+  private static String queue;
+  private static String type;
+  private static ByteBuffer params;
+
+  private static SubClusterId subClusterIdInvalid;
+  private static SubClusterId subClusterIdNull;
+
+  private static int lastHeartBeatNegative;
+  private static int lastStartTimeNegative;
+
+  private static SubClusterState stateNull;
+  private static ApplicationId appIdNull;
+
+  private static String capabilityNull;
+  private static String capabilityEmpty;
+
+  private static String addressNull;
+  private static String addressEmpty;
+  private static String addressWrong;
+  private static 

[2/2] hadoop git commit: YARN-5467. InputValidator for the FederationStateStore internal APIs. (Giovanni Matteo Fumarola via Subru)

2016-08-17 Thread subru
YARN-5467. InputValidator for the FederationStateStore internal APIs. (Giovanni 
Matteo Fumarola via Subru)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/08ff9060
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/08ff9060
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/08ff9060

Branch: refs/heads/YARN-2915
Commit: 08ff9060375f01089d7ac090b25c15656253a964
Parents: fc5d839
Author: Subru Krishnan 
Authored: Wed Aug 17 12:07:06 2016 -0700
Committer: Subru Krishnan 
Committed: Wed Aug 17 12:07:06 2016 -0700

--
 .../store/impl/MemoryFederationStateStore.java  |   30 +
 ...cationHomeSubClusterStoreInputValidator.java |  183 +++
 ...ationMembershipStateStoreInputValidator.java |  317 +
 .../FederationPolicyStoreInputValidator.java|  144 ++
 ...derationStateStoreInvalidInputException.java |   48 +
 .../federation/store/utils/package-info.java|   17 +
 .../impl/FederationStateStoreBaseTest.java  |6 +-
 .../TestFederationStateStoreInputValidator.java | 1265 ++
 8 files changed, 2007 insertions(+), 3 deletions(-)
--
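
For orientation, a minimal sketch of the validation pattern these new calls
follow (the body below is illustrative only; the real checks live in the
validator classes listed above):

    // Each validate* method rejects malformed input before the store acts.
    public static void validateSubClusterRegisterRequest(
        SubClusterRegisterRequest request)
        throws FederationStateStoreInvalidInputException {
      if (request == null || request.getSubClusterInfo() == null) {
        throw new FederationStateStoreInvalidInputException(
            "Missing SubClusterRegister request or SubClusterInfo");
      }
      // ...field-level checks (id, addresses, capability, state) follow.
    }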


http://git-wip-us.apache.org/repos/asf/hadoop/blob/08ff9060/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
index 8144435..6e564dc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
@@ -57,6 +57,9 @@ import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegister
 import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterResponse;
 import 
org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterRequest;
 import 
org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterResponse;
+import 
org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator;
+import 
org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator;
+import 
org.apache.hadoop.yarn.server.federation.store.utils.FederationPolicyStoreInputValidator;
 import org.apache.hadoop.yarn.server.records.Version;
 import org.apache.hadoop.yarn.util.MonotonicClock;
 
@@ -88,6 +91,8 @@ public class MemoryFederationStateStore implements 
FederationStateStore {
   @Override
   public SubClusterRegisterResponse registerSubCluster(
   SubClusterRegisterRequest request) throws YarnException {
+FederationMembershipStateStoreInputValidator
+.validateSubClusterRegisterRequest(request);
 SubClusterInfo subClusterInfo = request.getSubClusterInfo();
 membership.put(subClusterInfo.getSubClusterId(), subClusterInfo);
 return SubClusterRegisterResponse.newInstance();
@@ -96,6 +101,8 @@ public class MemoryFederationStateStore implements 
FederationStateStore {
   @Override
   public SubClusterDeregisterResponse deregisterSubCluster(
   SubClusterDeregisterRequest request) throws YarnException {
+FederationMembershipStateStoreInputValidator
+.validateSubClusterDeregisterRequest(request);
 SubClusterInfo subClusterInfo = membership.get(request.getSubClusterId());
 if (subClusterInfo == null) {
   throw new YarnException(
@@ -111,6 +118,8 @@ public class MemoryFederationStateStore implements 
FederationStateStore {
   public SubClusterHeartbeatResponse subClusterHeartbeat(
   SubClusterHeartbeatRequest request) throws YarnException {
 
+FederationMembershipStateStoreInputValidator
+.validateSubClusterHeartbeatRequest(request);
 SubClusterId subClusterId = request.getSubClusterId();
 SubClusterInfo subClusterInfo = membership.get(subClusterId);
 
@@ -129,6 +138,9 @@ public class MemoryFederationStateStore implements 
FederationStateStore {
   @Override
   public GetSubClusterInfoResponse getSubCluster(
   GetSubClusterInfoRequest request) throws YarnException {
+
+FederationMembershipStateStoreInputValidator
+

[47/50] [abbrv] hadoop git commit: YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena via wangda)

2016-08-17 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf1ab3a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
new file mode 100644
index 000..21a715c
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
@@ -0,0 +1,102 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import { moduleFor, test } from 'ember-qunit';
+
+moduleFor('serializer:yarn-node-app', 'Unit | Serializer | NodeApp', {
+});
+
+test('Basic creation test', function(assert) {
+  let serializer = this.subject();
+
+  assert.ok(serializer);
+  assert.ok(serializer.normalizeSingleResponse);
+  assert.ok(serializer.normalizeArrayResponse);
+  assert.ok(serializer.internalNormalizeSingleResponse);
+});
+
+test('normalizeArrayResponse test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = {
+apps: {
+  app: [{
+id:"application_1456251210105_0001", state:"FINISHED", user:"root"
+  },{
+id:"application_1456251210105_0002", state:"RUNNING",user:"root",
+containerids:["container_e38_1456251210105_0002_01_01",
+"container_e38_1456251210105_0002_01_02"]
+  }]
+}
+  };
+  assert.expect(15);
+  var response =
+  serializer.normalizeArrayResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(response.data.length, 2);
+  assert.equal(response.data[0].attributes.containers, undefined);
+  assert.equal(response.data[1].attributes.containers.length, 2);
+  assert.deepEqual(response.data[1].attributes.containers,
+  payload.apps.app[1].containerids);
+  for (var i = 0; i < 2; i++) {
+assert.equal(response.data[i].type, modelClass.modelName);
+assert.equal(response.data[i].id, payload.apps.app[i].id);
+assert.equal(response.data[i].attributes.appId, payload.apps.app[i].id);
+assert.equal(response.data[i].attributes.state, payload.apps.app[i].state);
+assert.equal(response.data[i].attributes.user, payload.apps.app[i].user);
+  }
+});
+
+test('normalizeArrayResponse no apps test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = { apps: null };
+  assert.expect(5);
+  var response =
+  serializer.normalizeArrayResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(response.data.length, 1);
+  assert.equal(response.data[0].type, modelClass.modelName);
+  assert.equal(response.data[0].id, "dummy");
+  assert.equal(response.data[0].attributes.appId, undefined);
+});
+
+test('normalizeSingleResponse test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = {
+app: {id:"application_1456251210105_0001", state:"FINISHED", user:"root"}
+  };
+  assert.expect(7);
+  var response =
+  serializer.normalizeSingleResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(payload.app.id, response.data.id);
+  assert.equal(modelClass.modelName, response.data.type);
+  assert.equal(payload.app.id, response.data.attributes.appId);
+  assert.equal(payload.app.state, response.data.attributes.state);
+  assert.equal(payload.app.user, response.data.attributes.user);
+  assert.equal(response.data.attributes.containers, undefined);
+});
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf1ab3a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
new file mode 100644
index 000..1f08467
--- 

[32/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-08-17 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
deleted file mode 100644
index 5877589..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import { moduleForModel, test } from 'ember-qunit';
-
-moduleForModel('yarn-node', 'Unit | Model | Node', {
-  // Specify the other units that are required for this test.
-  needs: []
-});
-
-test('Basic creation test', function(assert) {
-  let model = this.subject();
-
-  assert.ok(model);
-  assert.ok(model._notifyProperties);
-  assert.ok(model.didLoad);
-  assert.ok(model.totalVmemAllocatedContainersMB);
-  assert.ok(model.vmemCheckEnabled);
-  assert.ok(model.pmemCheckEnabled);
-  assert.ok(model.nodeHealthy);
-  assert.ok(model.lastNodeUpdateTime);
-  assert.ok(model.healthReport);
-  assert.ok(model.nmStartupTime);
-  assert.ok(model.nodeManagerBuildVersion);
-  assert.ok(model.hadoopBuildVersion);
-});
-
-test('test fields', function(assert) {
-  let model = this.subject();
-
-  assert.expect(4);
-  Ember.run(function () {
-model.set("totalVmemAllocatedContainersMB", 4096);
-model.set("totalPmemAllocatedContainersMB", 2048);
-model.set("totalVCoresAllocatedContainers", 4);
-model.set("hadoopBuildVersion", "3.0.0-SNAPSHOT");
-assert.equal(model.get("totalVmemAllocatedContainersMB"), 4096);
-assert.equal(model.get("totalPmemAllocatedContainersMB"), 2048);
-assert.equal(model.get("totalVCoresAllocatedContainers"), 4);
-assert.equal(model.get("hadoopBuildVersion"), "3.0.0-SNAPSHOT");
-  });
-});
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
deleted file mode 100644
index 4fd2517..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
+++ /dev/null
@@ -1,95 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import { moduleForModel, test } from 'ember-qunit';
-
-moduleForModel('yarn-rm-node', 'Unit | Model | RMNode', {
-  // Specify the other units that are required for this test.
-  needs: []
-});
-
-test('Basic creation test', function(assert) {
-  let model = this.subject();
-
-  assert.ok(model);
-  assert.ok(model._notifyProperties);
-  assert.ok(model.didLoad);
-  assert.ok(model.rack);
-  assert.ok(model.state);
-  assert.ok(model.nodeHostName);
-  assert.ok(model.nodeHTTPAddress);
-  assert.ok(model.lastHealthUpdate);
-  assert.ok(model.healthReport);
-  assert.ok(model.numContainers);
-  assert.ok(model.usedMemoryMB);
-  assert.ok(model.availMemoryMB);
-  assert.ok(model.usedVirtualCores);
-  assert.ok(model.availableVirtualCores);
-  assert.ok(model.version);
-  assert.ok(model.nodeLabels);
-  

[46/50] [abbrv] hadoop git commit: YARN-5161. [YARN-3368] Add Apache Hadoop logo in YarnUI home page. (Kai Sasaki via Sunil G)

2016-08-17 Thread wangda
YARN-5161. [YARN-3368] Add Apache Hadoop logo in YarnUI home page. (Kai Sasaki 
via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/88a3e762
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/88a3e762
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/88a3e762

Branch: refs/heads/YARN-3368
Commit: 88a3e762fde93a2cce4e66dcf45267f038879935
Parents: dde888e
Author: Sunil 
Authored: Mon Jul 11 14:31:25 2016 +0530
Committer: Wangda Tan 
Committed: Wed Aug 17 10:58:01 2016 -0700

--
 .../src/main/webapp/app/styles/app.css |  11 +++
 .../src/main/webapp/app/templates/application.hbs  |  12 +++-
 .../webapp/public/assets/images/hadoop_logo.png| Bin 0 -> 26495 bytes
 3 files changed, 18 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/88a3e762/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
index bcb6aab..e2d09dc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
@@ -157,3 +157,14 @@ table.dataTable thead .sorting_desc_disabled {
   stroke: #ccc;  
   stroke-width: 2px;
 }
+
+.hadoop-brand-image {
+  margin-top: -10px;
+  width: auto;
+  height: 45px;
+}
+
+li a.navigation-link.ember-view {
+  color: #2196f3;
+  font-weight: bold;
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/88a3e762/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
index b45ec6b..03b2c4a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
@@ -20,35 +20,37 @@
   
 
 
+  
+
+  
   
 Toggle navigation
 
 
 
   
-  Apache Hadoop YARN
 
 
 
 
   
 {{#link-to 'yarn-queue' 'root' tagName="li"}}
-  {{#link-to 'yarn-queue' 'root'}}Queues
+  {{#link-to 'yarn-queue' 'root' class="navigation-link"}}Queues
 (current)
   {{/link-to}}
 {{/link-to}}
 {{#link-to 'yarn-apps' tagName="li"}}
-  {{#link-to 'yarn-apps'}}Applications
+  {{#link-to 'yarn-apps' class="navigation-link"}}Applications
 (current)
   {{/link-to}}
 {{/link-to}}
 {{#link-to 'cluster-overview' tagName="li"}}
-  {{#link-to 'cluster-overview'}}Cluster Overview
+  {{#link-to 'cluster-overview' class="navigation-link"}}Cluster 
Overview
 (current)
   {{/link-to}}
 {{/link-to}}
 {{#link-to 'yarn-nodes' tagName="li"}}
-  {{#link-to 'yarn-nodes'}}Nodes
+  {{#link-to 'yarn-nodes' class="navigation-link"}}Nodes
 (current)
   {{/link-to}}
 {{/link-to}}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/88a3e762/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/hadoop_logo.png
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/hadoop_logo.png
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/hadoop_logo.png
new file mode 100644
index 000..275d39e
Binary files /dev/null and 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/hadoop_logo.png
 differ





[36/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-08-17 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
new file mode 100644
index 000..f7ec020
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
@@ -0,0 +1,275 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Component.extend({
+  // Map: <queueName, queue>
+  map : undefined,
+
+  // Normalized data for d3
+  treeData: undefined,
+
+  // folded queues, foldedQueues[<queueName>] == true means <queueName> is folded
+  foldedQueues: { },
+
+  // maxDepth
+  maxDepth: 0,
+
+  // number of leaf queues; a folded queue is treated as a leaf queue
+  numOfLeafQueue: 0,
+
+  // mainSvg
+  mainSvg: undefined,
+
+  // Init data
+  initData: function() {
+this.map = { };
+this.treeData = { };
+this.maxDepth = 0;
+this.numOfLeafQueue = 0;
+
+this.get("model")
+  .forEach(function(o) {
+this.map[o.id] = o;
+  }.bind(this));
+
+var selected = this.get("selected");
+
+this.initQueue("root", 1, this.treeData);
+  },
+
+  // get Children array of given queue
+  getChildrenNamesArray: function(q) {
+var namesArr = [];
+
+// A folded queue's children list is empty
+if (this.foldedQueues[q.get("name")]) {
+  return namesArr;
+}
+
+var names = q.get("children");
+if (names) {
+  names.forEach(function(name) {
+namesArr.push(name);
+  });
+}
+
+return namesArr;
+  },
+
+  // Init queues
+  initQueue: function(queueName, depth, node) {
+if ((!queueName) || (!this.map[queueName])) {
+  // Queue does not exist
+  return;
+}
+
+if (depth > this.maxDepth) {
+  this.maxDepth = this.maxDepth + 1;
+}
+
+var queue = this.map[queueName];
+
+var names = this.getChildrenNamesArray(queue);
+
+node.name = queueName;
+node.parent = queue.get("parent");
+node.queueData = queue;
+
+if (names.length > 0) {
+  node.children = [];
+
+  names.forEach(function(name) {
+var childQueueData = {};
+node.children.push(childQueueData);
+this.initQueue(name, depth + 1, childQueueData);
+  }.bind(this));
+} else {
+  this.numOfLeafQueue = this.numOfLeafQueue + 1;
+}
+  },
+
+  update: function(source, root, tree, diagonal) {
+var duration = 300;
+var i = 0;
+
+// Compute the new tree layout.
+var nodes = tree.nodes(root).reverse();
+var links = tree.links(nodes);
+
+// Normalize for fixed-depth.
+nodes.forEach(function(d) { d.y = d.depth * 200; });
+
+// Update the nodes…
+var node = this.mainSvg.selectAll("g.node")
+  .data(nodes, function(d) { return d.id || (d.id = ++i); });
+
+// Enter any new nodes at the parent's previous position.
+var nodeEnter = node.enter().append("g")
+  .attr("class", "node")
+  .attr("transform", function(d) { return "translate(" + source.y0 + "," + 
source.x0 + ")"; })
+  .on("click", function(d,i){
+if (d.queueData.get("name") != this.get("selected")) {
+document.location.href = "yarnQueue/" + d.queueData.get("name");
+}
+  }.bind(this));
+  // .on("click", click);
+
+nodeEnter.append("circle")
+  .attr("r", 1e-6)
+  .style("fill", function(d) {
+var usedCap = d.queueData.get("usedCapacity");
+if (usedCap <= 60.0) {
+  return "LimeGreen";
+} else if (usedCap <= 100.0) {
+  return "DarkOrange";
+} else {
+  return "LightCoral";
+}
+  });
+
+// append percentage
+nodeEnter.append("text")
+  .attr("x", function(d) { return 0; })
+  .attr("dy", ".35em")
+  .attr("text-anchor", function(d) { return "middle"; })
+  .text(function(d) {
+var usedCap = d.queueData.get("usedCapacity");
+if (usedCap >= 100.0) {
+

[34/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-08-17 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
new file mode 100644
index 000..ca80ccd
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
@@ -0,0 +1,58 @@
+{{!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--}}
+
+
+  
+{{node-menu path="yarnNodeContainers" nodeAddr=model.nodeInfo.addr 
nodeId=model.nodeInfo.id}}
+
+  
+
+  
+Container ID
+Container State
+User
+Logs
+  
+
+
+  {{#if model.containers}}
+{{#each model.containers as |container|}}
+  {{#if container.isDummyContainer}}
+No containers found on this 
node
+  {{else}}
+
+  {{container.containerId}}
+  {{container.state}}
+  {{container.user}}
+  
+{{log-files-comma nodeId=model.nodeInfo.id
+nodeAddr=model.nodeInfo.addr
+containerId=container.containerId
+logFiles=container.containerLogFiles}}
+  
+
+  {{/if}}
+{{/each}}
+  {{/if}}
+
+  
+  {{simple-table table-id="node-containers-table" bFilter=true 
colsOrder="0,desc" colTypes="natural" colTargets="0"}}
+
+  
+
+{{outlet}}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
new file mode 100644
index 000..a036076
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs
@@ -0,0 +1,94 @@
+{{!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--}}
+
+
+  
+{{node-menu path="yarnNode" nodeId=model.rmNode.id nodeAddr=model.node.id}}
+
+  
+Node Information
+  
+
+  
+Total Vmem allocated for Containers
+{{divide num=model.node.totalVmemAllocatedContainersMB 
den=1024}} GB
+  
+  
+Vmem enforcement enabled
+{{model.node.vmemCheckEnabled}}
+  
+  
+Total Pmem allocated for Containers
+{{divide num=model.node.totalPmemAllocatedContainersMB 
den=1024}} GB
+  
+  
+Pmem enforcement enabled
+{{model.node.pmemCheckEnabled}}
+  
+  
+Total VCores allocated for Containers
+{{model.node.totalVCoresAllocatedContainers}}
+  
+  
+Node Healthy Status
+

[38/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-08-17 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
deleted file mode 100644
index c5394d0..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
+++ /dev/null
@@ -1,49 +0,0 @@
-import DS from 'ember-data';
-import Converter from 'yarn-ui/utils/converter';
-
-export default DS.JSONAPISerializer.extend({
-internalNormalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  
-  if (payload.appAttempt) {
-payload = payload.appAttempt;  
-  }
-  
-  var fixedPayload = {
-id: payload.appAttemptId,
-type: primaryModelClass.modelName, // yarn-app
-attributes: {
-  startTime: Converter.timeStampToDate(payload.startTime),
-  finishedTime: Converter.timeStampToDate(payload.finishedTime),
-  containerId: payload.containerId,
-  nodeHttpAddress: payload.nodeHttpAddress,
-  nodeId: payload.nodeId,
-  state: payload.nodeId,
-  logsLink: payload.logsLink
-}
-  };
-
-  return fixedPayload;
-},
-
-normalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  var p = this.internalNormalizeSingleResponse(store, 
-primaryModelClass, payload, id, requestType);
-  return { data: p };
-},
-
-normalizeArrayResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  // return expected is { data: [ {}, {} ] }
-  var normalizedArrayResponse = {};
-
-  // payload has apps : { app: [ {},{},{} ]  }
-  // need some error handling for ex apps or app may not be defined.
-  normalizedArrayResponse.data = 
payload.appAttempts.appAttempt.map(singleApp => {
-return this.internalNormalizeSingleResponse(store, primaryModelClass,
-  singleApp, singleApp.id, requestType);
-  }, this);
-  return normalizedArrayResponse;
-}
-});
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
deleted file mode 100644
index a038fff..000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
+++ /dev/null
@@ -1,66 +0,0 @@
-import DS from 'ember-data';
-import Converter from 'yarn-ui/utils/converter';
-
-export default DS.JSONAPISerializer.extend({
-internalNormalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  if (payload.app) {
-payload = payload.app;  
-  }
-  
-  var fixedPayload = {
-id: id,
-type: primaryModelClass.modelName, // yarn-app
-attributes: {
-  appName: payload.name,
-  user: payload.user,
-  queue: payload.queue,
-  state: payload.state,
-  startTime: Converter.timeStampToDate(payload.startedTime),
-  elapsedTime: Converter.msToElapsedTime(payload.elapsedTime),
-  finishedTime: Converter.timeStampToDate(payload.finishedTime),
-  finalStatus: payload.finalStatus,
-  progress: payload.progress,
-  diagnostics: payload.diagnostics,
-  amContainerLogs: payload.amContainerLogs,
-  amHostHttpAddress: payload.amHostHttpAddress,
-  logAggregationStatus: payload.logAggregationStatus,
-  unmanagedApplication: payload.unmanagedApplication,
-  amNodeLabelExpression: payload.amNodeLabelExpression,
-  priority: payload.priority,
-  allocatedMB: payload.allocatedMB,
-  allocatedVCores: payload.allocatedVCores,
-  runningContainers: payload.runningContainers,
-  memorySeconds: payload.memorySeconds,
-  vcoreSeconds: payload.vcoreSeconds,
-  preemptedResourceMB: payload.preemptedResourceMB,
-  preemptedResourceVCores: payload.preemptedResourceVCores,
-  numNonAMContainerPreempted: payload.numNonAMContainerPreempted,
-  numAMContainerPreempted: payload.numAMContainerPreempted
-}
-  };
-
-  return fixedPayload;
-},
-
-normalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  var p = this.internalNormalizeSingleResponse(store, 
-primaryModelClass, payload, id, requestType);
-  return { data: p };
-},
-
-normalizeArrayResponse(store, primaryModelClass, payload, id,
-  requestType) {

[24/50] [abbrv] hadoop git commit: YARN-5455. Update Javadocs for LinuxContainerExecutor. Contributed by Daniel Templeton.

2016-08-17 Thread wangda
YARN-5455. Update Javadocs for LinuxContainerExecutor. Contributed by Daniel 
Templeton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7f05ff7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7f05ff7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7f05ff7a

Branch: refs/heads/YARN-3368
Commit: 7f05ff7a4e654693eaaa216ee5fc6e24112e0e23
Parents: 2353271
Author: Varun Vasudev 
Authored: Wed Aug 17 15:34:58 2016 +0530
Committer: Varun Vasudev 
Committed: Wed Aug 17 15:34:58 2016 +0530

--
 .../nodemanager/LinuxContainerExecutor.java |  80 ++-
 .../runtime/DefaultLinuxContainerRuntime.java   |  20 ++-
 .../DelegatingLinuxContainerRuntime.java|  10 ++
 .../runtime/DockerLinuxContainerRuntime.java| 143 ---
 .../linux/runtime/LinuxContainerRuntime.java|  10 +-
 .../runtime/ContainerRuntime.java   |  40 +-
 6 files changed, 267 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f05ff7a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
index 8f5ee6b..2c27f6c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
@@ -38,7 +38,9 @@ import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileg
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandler;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerModule;
+import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime;
+import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.LinuxContainerRuntime;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException;
@@ -64,11 +66,36 @@ import java.util.regex.Pattern;
 
 import static 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.LinuxContainerRuntimeConstants.*;
 
-/** Container execution for Linux. Provides linux-specific localization
- * mechanisms, resource management via cgroups and can switch between multiple
- * container runtimes - e.g Standard "Process Tree", Docker etc
+/**
+ * This class provides {@link Container} execution using a native
+ * {@code container-executor} binary. By using a helper written in native code,
+ * this class is able to do several things that the
+ * {@link DefaultContainerExecutor} cannot, such as executing applications
+ * as the applications' owners, providing localization that takes advantage of
+ * mapping the application owner to a UID on the execution host, managing
+ * resources through Linux CGROUPS, and supporting Docker.
+ *
+ * If {@code hadoop.security.authentication} is set to {@code simple},
+ * then the
+ * {@code yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}
+ * property will determine whether the {@code LinuxContainerExecutor} runs
+ * processes as the application owner or as the default user, as set in the
+ * {@code yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user}
+ * property.
+ *
+ * The {@code LinuxContainerExecutor} will manage applications through an
+ * appropriate {@link LinuxContainerRuntime} instance. This class uses a
+ * {@link DelegatingLinuxContainerRuntime} instance, which will delegate calls
+ * to either a {@link DefaultLinuxContainerRuntime} instance or a
+ * {@link DockerLinuxContainerRuntime} instance, depending on the job's
+ * configuration.
+ *
+ * 
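For readers wiring this up, here is a minimal sketch of the two nonsecure-mode properties that the Javadoc above names. The property names are quoted verbatim from that documentation; the values (simple authentication, local user `nobody`) are illustrative assumptions, not part of the patch.

```java
// Illustrative sketch only, not part of the patch. Property names are
// taken verbatim from the Javadoc above; the values are assumptions.
import org.apache.hadoop.conf.Configuration;

public class NonSecureLceConfigSketch {
  public static Configuration build() {
    Configuration conf = new Configuration();
    // Simple authentication, so the nonsecure-mode properties apply.
    conf.set("hadoop.security.authentication", "simple");
    // true: run containers as one mapped local user, not the app owner.
    conf.setBoolean(
        "yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users",
        true);
    conf.set(
        "yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user",
        "nobody");
    return conf;
  }
}
```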

[23/50] [abbrv] hadoop git commit: HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by Mingliang Liu)

2016-08-17 Thread wangda
HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by Mingliang 
Liu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/23532716
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/23532716
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/23532716

Branch: refs/heads/YARN-3368
Commit: 23532716fcd3f7e5e20b8f9fc66188041638510a
Parents: 3808876
Author: Mingliang Liu 
Authored: Tue Aug 16 16:30:43 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Aug 16 17:33:04 2016 -0700

--
 .../apache/hadoop/test/GenericTestUtils.java| 31 --
 .../hadoop/test/TestGenericTestUtils.java   | 44 
 2 files changed, 63 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/23532716/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index 116a111..0b73cf5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -42,10 +42,12 @@ import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Time;
+import org.apache.log4j.Appender;
 import org.apache.log4j.Layout;
 import org.apache.log4j.Level;
 import org.apache.log4j.LogManager;
 import org.apache.log4j.Logger;
+import org.apache.log4j.PatternLayout;
 import org.apache.log4j.WriterAppender;
 import org.junit.Assert;
 import org.junit.Assume;
@@ -275,36 +277,41 @@ public abstract class GenericTestUtils {
 private StringWriter sw = new StringWriter();
 private WriterAppender appender;
 private Logger logger;
-
+
 public static LogCapturer captureLogs(Log l) {
   Logger logger = ((Log4JLogger)l).getLogger();
-  LogCapturer c = new LogCapturer(logger);
-  return c;
+  return new LogCapturer(logger);
+}
+
+public static LogCapturer captureLogs(org.slf4j.Logger logger) {
+  return new LogCapturer(toLog4j(logger));
 }
-
 
 private LogCapturer(Logger logger) {
   this.logger = logger;
-  Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout();
-  WriterAppender wa = new WriterAppender(layout, sw);
-  logger.addAppender(wa);
+  Appender defaultAppender = Logger.getRootLogger().getAppender("stdout");
+  if (defaultAppender == null) {
+defaultAppender = Logger.getRootLogger().getAppender("console");
+  }
+  final Layout layout = (defaultAppender == null) ? new PatternLayout() :
+  defaultAppender.getLayout();
+  this.appender = new WriterAppender(layout, sw);
+  logger.addAppender(this.appender);
 }
-
+
 public String getOutput() {
   return sw.toString();
 }
-
+
 public void stopCapturing() {
   logger.removeAppender(appender);
-
 }
 
 public void clearOutput() {
   sw.getBuffer().setLength(0);
 }
   }
-  
-  
+
   /**
* Mockito answer helper that triggers one latch as soon as the
* method is called, then waits on another before continuing.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23532716/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
index 8a7b5f6..86df5d5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
@@ -18,8 +18,16 @@
 
 package org.apache.hadoop.test;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
 import org.junit.Test;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.junit.Assert.assertTrue;
+
 public class TestGenericTestUtils extends GenericTestUtils {
 
   @Test
@@ -75,4 +83,40 @@ public class TestGenericTestUtils extends GenericTestUtils {
 }
   }
 
+  @Test(timeout = 1)
+  public void testLogCapturer() {
+final Log log = LogFactory.getLog(TestGenericTestUtils.class);
+
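The new test is cut off above. For orientation, a minimal usage sketch of the capturer API this patch touches (captureLogs, getOutput, clearOutput, stopCapturing) could look like the following; the log message and assertion are illustrative, not taken from the committed test.

```java
// Minimal usage sketch of the LogCapturer API shown in this patch;
// the message text is an assumption, not the committed test.
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.test.GenericTestUtils.LogCapturer;
import static org.junit.Assert.assertTrue;

public class LogCapturerUsageSketch {
  public void capture() {
    Log log = LogFactory.getLog(LogCapturerUsageSketch.class);
    LogCapturer capturer = LogCapturer.captureLogs(log);
    log.info("hello from the capturer");   // routed to the WriterAppender
    assertTrue(capturer.getOutput().contains("hello from the capturer"));
    capturer.clearOutput();                // reset the captured buffer
    capturer.stopCapturing();              // detach the appender
  }
}
```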

[16/50] [abbrv] hadoop git commit: YARN-5514. Clarify DecommissionType.FORCEFUL comment (Vrushali C via Varun Saxena)

2016-08-17 Thread wangda
YARN-5514. Clarify DecommissionType.FORCEFUL comment (Vrushali C via Varun 
Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ffe1fff5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ffe1fff5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ffe1fff5

Branch: refs/heads/YARN-3368
Commit: ffe1fff5262bb1d77f68059f12ae43f9849b530f
Parents: b8a446b
Author: Varun Saxena 
Authored: Tue Aug 16 14:05:41 2016 +0530
Committer: Varun Saxena 
Committed: Tue Aug 16 14:05:41 2016 +0530

--
 .../hadoop/yarn/api/records/DecommissionType.java   | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffe1fff5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
index 988fd51..ba5609d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DecommissionType.java
@@ -17,13 +17,19 @@
  */
 package org.apache.hadoop.yarn.api.records;
 
+/**
+ * Specifies the different types of decommissioning of nodes.
+ */
 public enum DecommissionType {
-  /** Decomissioning nodes in normal way **/
+  /** Decommissioning nodes in normal way. **/
   NORMAL,
 
-  /** Graceful decommissioning of nodes **/
+  /** Graceful decommissioning of nodes. **/
   GRACEFUL,
 
-  /** Forceful decommissioning of nodes which are already in progress **/
+  /**
+   * Forceful decommissioning of nodes whose decommissioning is already in
+   * progress.
+   **/
   FORCEFUL
 }
\ No newline at end of file





[22/50] [abbrv] hadoop git commit: HADOOP-13324. s3a tests don't authenticate with S3 frankfurt (or other V4 auth only endpoints). Contributed by Steve Loughran.

2016-08-17 Thread wangda
HADOOP-13324. s3a tests don't authenticate with S3 frankfurt (or other V4 auth 
only endpoints). Contributed by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3808876c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3808876c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3808876c

Branch: refs/heads/YARN-3368
Commit: 3808876c7397ea68906bc5cc18fdf690c9c42131
Parents: 27a6e09
Author: Chris Nauroth 
Authored: Tue Aug 16 17:05:52 2016 -0700
Committer: Chris Nauroth 
Committed: Tue Aug 16 17:05:52 2016 -0700

--
 .../src/site/markdown/tools/hadoop-aws/index.md | 247 ---
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java  |  15 ++
 .../fs/s3a/TestS3AAWSCredentialsProvider.java   |   1 +
 .../hadoop/fs/s3a/scale/S3AScaleTestBase.java   |  15 +-
 .../scale/TestS3AInputStreamPerformance.java|   2 +
 5 files changed, 247 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3808876c/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
--
diff --git 
a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md 
b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
index 01a2bae..cb1df83 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
@@ -634,6 +634,60 @@ this capability.
   any call to setReadahead() is made to an open stream.
 
 
+### Working with buckets in different regions
+
+S3 Buckets are hosted in different regions, the default being US-East.
+By default the client talks to this region, under the URL `s3.amazonaws.com`.
+
+S3A can work with buckets from any region. Each region has its own
+S3 endpoint, documented [by 
Amazon](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).
+
+1. Applications running in EC2 infrastructure do not pay for IO to/from
+*local S3 buckets*. They will be billed for access to remote buckets. Always
+use local buckets and local copies of data, wherever possible.
+1. The default S3 endpoint can support data IO with any bucket when the V1 
request
+signing protocol is used.
+1. When the V4 signing protocol is used, AWS requires the explicit region 
endpoint
+to be used —hence S3A must be configured to use the specific endpoint. This
+is done in the configuration option `fs.s3a.endpoint`.
+1. All endpoints other than the default endpoint only support interaction
+with buckets local to that S3 instance.
+
+While it is generally simpler to use the default endpoint, working with
+V4-signing-only regions (Frankfurt, Seoul) requires the endpoint to be 
identified.
+Expect better performance from direct connections —traceroute will give you 
some insight.
+
+Examples:
+
+The default endpoint:
+
+```xml
+<property>
+  <name>fs.s3a.endpoint</name>
+  <value>s3.amazonaws.com</value>
+</property>
+```
+
+Frankfurt
+
+```xml
+<property>
+  <name>fs.s3a.endpoint</name>
+  <value>s3.eu-central-1.amazonaws.com</value>
+</property>
+```
+
+Seoul
+```xml
+<property>
+  <name>fs.s3a.endpoint</name>
+  <value>s3.ap-northeast-2.amazonaws.com</value>
+</property>
+```
+
+If the wrong endpoint is used, the request may fail. This may be reported as a 
301/redirect error,
+or as a 400 Bad Request.
+
 ### S3AFastOutputStream
  **Warning: NEW in hadoop 2.7. UNSTABLE, EXPERIMENTAL: use at own risk**
 
@@ -819,8 +873,6 @@ of environment-variable authentication by attempting to use 
the `hdfs fs` comman
 to read or write data on S3. That is: comment out the `fs.s3a` secrets and 
rely on
 the environment variables.
 
-S3 Frankfurt is a special case. It uses the V4 authentication API.
-
 ### Authentication failures running on Java 8u60+
 
 A change in the Java 8 JVM broke some of the `toString()` string generation
@@ -829,6 +881,106 @@ generate authentication headers suitable for validation 
by S3.
 
 Fix: make sure that the version of Joda Time is 2.8.1 or later.
 
+### "Bad Request" exception when working with AWS S3 Frankfurt, Seoul, or 
elsewhere
+
+
+S3 Frankfurt and Seoul *only* support
+[the V4 authentication 
API](http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html).
+
+Requests using the V2 API will be rejected with 400 `Bad Request`
+
+```
+$ bin/hadoop fs -ls s3a://frankfurt/
+WARN s3a.S3AFileSystem: Client: Amazon S3 error 400: 400 Bad Request; Bad 
Request (retryable)
+
+com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: 
Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 
923C5D9E75E44C06), S3 Extended Request ID: 
HDwje6k+ANEeDsM6aJ8+D5gUmNAMguOk2BvZ8PH3g9z0gpH+IuwT7N19oQOnIr5CIx7Vqb/uThE=
+   at 
com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
+   at 
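A programmatic equivalent of the endpoint configuration described in this patch's documentation, sketched under assumptions: `fs.s3a.endpoint` and the Frankfurt endpoint value come from the docs above, while the bucket name is a made-up placeholder.

```java
// Illustrative sketch: select the Frankfurt (V4-signing) endpoint before
// opening an S3A filesystem. Bucket name is a made-up placeholder.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3AEndpointSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // V4-only regions reject the default endpoint with 400 Bad Request.
    conf.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com");
    try (FileSystem fs =
        FileSystem.get(URI.create("s3a://example-frankfurt-bucket/"), conf)) {
      System.out.println("Connected to " + fs.getUri());
    }
  }
}
```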

[35/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-08-17 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
new file mode 100644
index 000..89858bf
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Route.extend({
+  model(param) {
+return Ember.RSVP.hash({
+  selected : param.queue_name,
+  queues: this.store.findAll('yarnQueue'),
+  selectedQueue : undefined,
+  apps: undefined, // apps of selected queue
+});
+  },
+
+  afterModel(model) {
+model.selectedQueue = this.store.peekRecord('yarnQueue', model.selected);
+model.apps = this.store.findAll('yarnApp');
+model.apps.forEach(function(o) {
+  console.log(o);
+});
+  }
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
new file mode 100644
index 000..7da6f6d
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
@@ -0,0 +1,23 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Route.extend({
+  beforeModel() {
+this.transitionTo('yarnQueues.root');
+  }
+});
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe439258/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
new file mode 100644
index 000..3686c83
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
@@ -0,0 +1,25 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+

[06/50] [abbrv] hadoop git commit: HADOOP-13437. KMS should reload whitelist and default key ACLs when hot-reloading. Contributed by Xiao Chen.

2016-08-17 Thread wangda
HADOOP-13437. KMS should reload whitelist and default key ACLs when 
hot-reloading. Contributed by Xiao Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9daa9979
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9daa9979
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9daa9979

Branch: refs/heads/YARN-3368
Commit: 9daa9979a1f92fb3230361c10ddfcc1633795c0e
Parents: 864f878
Author: Xiao Chen 
Authored: Mon Aug 15 18:13:58 2016 -0700
Committer: Xiao Chen 
Committed: Mon Aug 15 18:14:45 2016 -0700

--
 .../hadoop/crypto/key/kms/server/KMSACLs.java   |  75 
 .../crypto/key/kms/server/TestKMSACLs.java  | 174 ++-
 2 files changed, 207 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9daa9979/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
index 5b67950..c36fcf8 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
@@ -34,7 +34,6 @@ import java.util.Map;
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
-import java.util.regex.Pattern;
 
 import com.google.common.annotations.VisibleForTesting;
 
@@ -74,10 +73,10 @@ public class KMSACLs implements Runnable, KeyACLs {
   private volatile Map blacklistedAcls;
   @VisibleForTesting
   volatile Map> keyAcls;
-  private final Map defaultKeyAcls =
-  new HashMap();
-  private final Map whitelistKeyAcls =
-  new HashMap();
+  @VisibleForTesting
+  volatile Map defaultKeyAcls = new HashMap<>();
+  @VisibleForTesting
+  volatile Map whitelistKeyAcls = new 
HashMap<>();
   private ScheduledExecutorService executorService;
   private long lastReload;
 
@@ -111,7 +110,8 @@ public class KMSACLs implements Runnable, KeyACLs {
 blacklistedAcls = tempBlacklist;
   }
 
-  private void setKeyACLs(Configuration conf) {
+  @VisibleForTesting
+  void setKeyACLs(Configuration conf) {
 Map> tempKeyAcls =
 new HashMap>();
 Map allKeyACLS =
@@ -148,38 +148,43 @@ public class KMSACLs implements Runnable, KeyACLs {
 }
   }
 }
-
 keyAcls = tempKeyAcls;
+
+final Map tempDefaults = new HashMap<>();
+final Map tempWhitelists = new HashMap<>();
 for (KeyOpType keyOp : KeyOpType.values()) {
-  if (!defaultKeyAcls.containsKey(keyOp)) {
-String confKey = KMSConfiguration.DEFAULT_KEY_ACL_PREFIX + keyOp;
-String aclStr = conf.get(confKey);
-if (aclStr != null) {
-  if (keyOp == KeyOpType.ALL) {
-// Ignore All operation for default key acl
-LOG.warn("Should not configure default key ACL for KEY_OP '{}'", 
keyOp);
-  } else {
-if (aclStr.equals("*")) {
-  LOG.info("Default Key ACL for KEY_OP '{}' is set to '*'", keyOp);
-}
-defaultKeyAcls.put(keyOp, new AccessControlList(aclStr));
-  }
-}
-  }
-  if (!whitelistKeyAcls.containsKey(keyOp)) {
-String confKey = KMSConfiguration.WHITELIST_KEY_ACL_PREFIX + keyOp;
-String aclStr = conf.get(confKey);
-if (aclStr != null) {
-  if (keyOp == KeyOpType.ALL) {
-// Ignore All operation for whitelist key acl
-LOG.warn("Should not configure whitelist key ACL for KEY_OP '{}'", 
keyOp);
-  } else {
-if (aclStr.equals("*")) {
-  LOG.info("Whitelist Key ACL for KEY_OP '{}' is set to '*'", 
keyOp);
-}
-whitelistKeyAcls.put(keyOp, new AccessControlList(aclStr));
-  }
+  parseAclsWithPrefix(conf, KMSConfiguration.DEFAULT_KEY_ACL_PREFIX,
+  keyOp, tempDefaults);
+  parseAclsWithPrefix(conf, KMSConfiguration.WHITELIST_KEY_ACL_PREFIX,
+  keyOp, tempWhitelists);
+}
+defaultKeyAcls = tempDefaults;
+whitelistKeyAcls 
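The hunk is cut off mid-assignment above. Based on the inlined per-operation logic this patch factors out, a plausible shape for the new parseAclsWithPrefix helper is sketched below; it is a reconstruction under assumptions, not the committed method, and it lives inside KMSACLs where LOG, KeyOpType, and AccessControlList are in scope.

```java
// Reconstruction under assumptions of the factored-out helper; mirrors
// the removed inline logic above. Lives inside KMSACLs in the real code.
private void parseAclsWithPrefix(Configuration conf, String prefix,
    KeyOpType keyOp, Map<KeyOpType, AccessControlList> acls) {
  String aclStr = conf.get(prefix + keyOp);
  if (aclStr == null) {
    return;
  }
  if (keyOp == KeyOpType.ALL) {
    // ALL is not a valid target for default/whitelist key ACLs.
    LOG.warn("Should not configure key ACL for KEY_OP '{}'", keyOp);
    return;
  }
  if (aclStr.equals("*")) {
    LOG.info("Key ACL for KEY_OP '{}' is set to '*'", keyOp);
  }
  acls.put(keyOp, new AccessControlList(aclStr));
}
```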

[19/50] [abbrv] hadoop git commit: MAPREDUCE-6751. Add debug log message when splitting is not possible due to unsplittable compression. (Peter Vary via rchiang)

2016-08-17 Thread wangda
MAPREDUCE-6751. Add debug log message when splitting is not possible due to 
unsplittable compression. (Peter Vary via rchiang)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6c154abd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6c154abd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6c154abd

Branch: refs/heads/YARN-3368
Commit: 6c154abd33279475315b5f7f78dc47f1b0aa7028
Parents: b047bc7
Author: Ray Chiang 
Authored: Tue Aug 16 12:13:22 2016 -0700
Committer: Ray Chiang 
Committed: Tue Aug 16 12:13:22 2016 -0700

--
 .../main/java/org/apache/hadoop/mapred/FileInputFormat.java   | 7 +++
 .../hadoop/mapreduce/lib/input/CombineFileInputFormat.java| 4 
 .../apache/hadoop/mapreduce/lib/input/FileInputFormat.java| 7 +++
 3 files changed, 18 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c154abd/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
index 2c58ebe..5803d60 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
@@ -369,6 +369,13 @@ public abstract class FileInputFormat implements 
InputFormat {
 splitHosts[0], splitHosts[1]));
   }
 } else {
+  if (LOG.isDebugEnabled()) {
+// Log only if the file is big enough to be split
+if (length > Math.min(file.getBlockSize(), minSize)) {
+  LOG.debug("File is not splittable so no parallelization "
+  + "is possible: " + file.getPath());
+}
+  }
   String[][] splitHosts = 
getSplitHostsAndCachedHosts(blkLocations,0,length,clusterMap);
   splits.add(makeSplit(path, 0, length, splitHosts[0], splitHosts[1]));
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c154abd/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
index b2b7656..8f9699e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
@@ -600,6 +600,10 @@ public abstract class CombineFileInputFormat
 if (!isSplitable) {
   // if the file is not splitable, just create the one block with
   // full file length
+  if (LOG.isDebugEnabled()) {
+LOG.debug("File is not splittable so no parallelization "
++ "is possible: " + stat.getPath());
+  }
   blocks = new OneBlockInfo[1];
   fileSize = stat.getLen();
   blocks[0] = new OneBlockInfo(stat.getPath(), 0, fileSize,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c154abd/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
index 0c5ede9..7ec882f 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
+++ 
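The FileInputFormat hunk is truncated above. For context on when the new debug line fires, the conventional splittability test looks roughly like the following paraphrase, an assumption-based sketch rather than the patch itself: a file is splittable when it is uncompressed, or compressed with a codec that supports splitting.

```java
// Assumption-based paraphrase of the usual isSplitable() check that the
// new debug message complements; not the committed code.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;

public class SplittabilitySketch {
  public static boolean isSplittable(Configuration conf, Path file) {
    CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(file);
    // No codec means a plain file, which is always splittable; otherwise
    // the codec itself must support splitting (bzip2 does, gzip does not).
    return codec == null || codec instanceof SplittableCompressionCodec;
  }
}
```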

[03/50] [abbrv] hadoop git commit: HADOOP-13333. testConf.xml ls comparators in wrong order (Vrushali C via Varun Saxena)

2016-08-17 Thread wangda
HADOOP-13333. testConf.xml ls comparators in wrong order (Vrushali C via Varun 
Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d714030b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d714030b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d714030b

Branch: refs/heads/YARN-3368
Commit: d714030b5d4124f307c09d716d72a9f5a4a25995
Parents: bed69d1
Author: Varun Saxena 
Authored: Tue Aug 16 03:03:44 2016 +0530
Committer: Varun Saxena 
Committed: Tue Aug 16 03:03:44 2016 +0530

--
 .../hadoop-common/src/test/resources/testConf.xml| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d714030b/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml 
b/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
index bbbc1ec..82bc789 100644
--- a/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
+++ b/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
@@ -106,11 +106,11 @@
 
 
   RegexpComparator
-  ^\s*-q\s+Print \? instead of non-printable 
characters\.( )*
+  ^\s*rather than a number of bytes\.( 
)*
 
 
   RegexpComparator
-  ^\s*rather than a number of bytes\.( 
)*
+  ^\s*-q\s+Print \? instead of non-printable 
characters\.( )*
 
 
   RegexpComparator





[17/50] [abbrv] hadoop git commit: YARN-5475. Fix test failure of TestAggregatedLogFormat#testReadAcontainerLogs1 (Jun Gong via Varun Saxena)

2016-08-17 Thread wangda
YARN-5475. Fix test failure of TestAggregatedLogFormat#testReadAcontainerLogs1 
(Jun Gong via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b427ce12
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b427ce12
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b427ce12

Branch: refs/heads/YARN-3368
Commit: b427ce12bcdd35e12b212c18430f0114fbbc1fea
Parents: ffe1fff
Author: Varun Saxena 
Authored: Tue Aug 16 20:24:53 2016 +0530
Committer: Varun Saxena 
Committed: Tue Aug 16 20:24:53 2016 +0530

--
 .../hadoop/yarn/logaggregation/TestAggregatedLogFormat.java   | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b427ce12/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
index 45dd8ab..8cbec10 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java
@@ -140,7 +140,8 @@ public class TestAggregatedLogFormat {
 final int ch = filler;
 
 UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
-LogWriter logWriter = new LogWriter(conf, remoteAppLogFile, ugi);
+LogWriter logWriter = new LogWriter(new Configuration(), remoteAppLogFile,
+ugi);
 
 LogKey logKey = new LogKey(testContainerId);
 LogValue logValue =





[01/50] [abbrv] hadoop git commit: YARN-5521. Fix random failure of TestCapacityScheduler#testKillAllAppsInQueue (sandflee via Varun Saxena) [Forced Update!]

2016-08-17 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/YARN-3368 6d0fd409e -> 818c015db (forced update)


YARN-5521. Fix random failure of TestCapacityScheduler#testKillAllAppsInQueue 
(sandflee via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/24249115
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/24249115
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/24249115

Branch: refs/heads/YARN-3368
Commit: 24249115bff3162c4202387da5bdd8eba13e6961
Parents: 83e57e0
Author: Varun Saxena 
Authored: Tue Aug 16 00:03:16 2016 +0530
Committer: Varun Saxena 
Committed: Tue Aug 16 00:03:29 2016 +0530

--
 .../resourcemanager/scheduler/capacity/TestCapacityScheduler.java   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/24249115/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
index 0b52b86..09c16d0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
@@ -2184,6 +2184,7 @@ public class TestCapacityScheduler {
 
 // check postconditions
 rm.waitForState(app.getApplicationId(), RMAppState.KILLED);
+rm.waitForAppRemovedFromScheduler(app.getApplicationId());
 appsInRoot = scheduler.getAppsInQueue("root");
 assertTrue(appsInRoot.isEmpty());
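The race fixed here is between the app reaching KILLED and the scheduler actually dropping it from its queues. A generic polling wait of the kind such wait*() helpers reduce to is sketched below; the names and messages are hypothetical, not MockRM internals.

```java
// Hypothetical, generic polling helper; illustrates the shape of waits
// like waitForAppRemovedFromScheduler(), not its actual implementation.
import java.util.function.BooleanSupplier;

public final class PollingWaitSketch {
  public static void waitFor(BooleanSupplier condition, long intervalMs,
      long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("timed out waiting for condition");
      }
      Thread.sleep(intervalMs);  // re-check until the scheduler catches up
    }
  }
}
```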
 





[07/50] [abbrv] hadoop git commit: HDFS-10567. Improve plan command help message. Contributed by Xiaobing Zhou.

2016-08-17 Thread wangda
HDFS-10567. Improve plan command help message. Contributed by Xiaobing Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02abd131
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02abd131
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02abd131

Branch: refs/heads/YARN-3368
Commit: 02abd131b857a89d9fc21507296603120bb50810
Parents: 9daa997
Author: Anu Engineer 
Authored: Mon Aug 15 19:54:06 2016 -0700
Committer: Anu Engineer 
Committed: Mon Aug 15 19:58:57 2016 -0700

--
 .../apache/hadoop/hdfs/tools/DiskBalancer.java  | 29 
 1 file changed, 18 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02abd131/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
index 70912d0..1ed2fdc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
@@ -266,33 +266,40 @@ public class DiskBalancer extends Configured implements 
Tool {
   private void addPlanCommands(Options opt) {
 
 Option plan = OptionBuilder.withLongOpt(PLAN)
-.withDescription("creates a plan for datanode.")
+.withDescription("Hostname, IP address or UUID of datanode " +
+"for which a plan is created.")
 .hasArg()
 .create();
 getPlanOptions().addOption(plan);
 opt.addOption(plan);
 
 
-Option outFile = OptionBuilder.withLongOpt(OUTFILE)
-.hasArg()
-.withDescription("File to write output to, if not specified " +
-"defaults will be used.")
+Option outFile = OptionBuilder.withLongOpt(OUTFILE).hasArg()
+.withDescription(
+"Local path of file to write output to, if not specified "
++ "defaults will be used.")
 .create();
 getPlanOptions().addOption(outFile);
 opt.addOption(outFile);
 
-Option bandwidth = OptionBuilder.withLongOpt(BANDWIDTH)
-.hasArg()
-.withDescription("Maximum disk bandwidth to be consumed by " +
-"diskBalancer. e.g. 10")
+Option bandwidth = OptionBuilder.withLongOpt(BANDWIDTH).hasArg()
+.withDescription(
+"Maximum disk bandwidth (MB/s) in integer to be consumed by "
++ "diskBalancer. e.g. 10 MB/s.")
 .create();
 getPlanOptions().addOption(bandwidth);
 opt.addOption(bandwidth);
 
 Option threshold = OptionBuilder.withLongOpt(THRESHOLD)
 .hasArg()
-.withDescription("Percentage skew that we" +
-"tolerate before diskbalancer starts working e.g. 10")
+.withDescription("Percentage of data skew that is tolerated before"
++ " disk balancer starts working. For example, if"
++ " total data on a 2 disk node is 100 GB then disk"
++ " balancer calculates the expected value on each disk,"
++ " which is 50 GB. If the tolerance is 10% then data"
++ " on a single disk needs to be more than 60 GB"
++ " (50 GB + 10% tolerance value) for Disk balancer to"
++ " balance the disks.")
 .create();
 getPlanOptions().addOption(threshold);
 opt.addOption(threshold);
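The arithmetic behind the reworded threshold help text, restated as a tiny self-check. The values are the ones from the example in the description itself; this is illustrative arithmetic only, and it reads the "10% tolerance value" as 10% of the node's total data, which is what makes the stated 60 GB come out.

```java
// Restates the worked example from the --threshold help text above;
// illustrative arithmetic only, not DiskBalancer code.
public class ThresholdArithmeticSketch {
  public static void main(String[] args) {
    double totalDataGB = 100.0;   // data on the whole 2-disk node
    int disks = 2;
    double thresholdPct = 10.0;   // --threshold value
    double expectedPerDiskGB = totalDataGB / disks;          // 50 GB
    double triggerGB = expectedPerDiskGB
        + totalDataGB * thresholdPct / 100.0;                // 60 GB
    System.out.printf("rebalance once a disk exceeds %.0f GB%n", triggerGB);
  }
}
```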





[41/50] [abbrv] hadoop git commit: YARN-5509. Build error due to preparing 3.0.0-alpha2 deployment. (Kai Sasaki via wangda)

2016-08-17 Thread wangda
YARN-5509. Build error due to preparing 3.0.0-alpha2 deployment. (Kai Sasaki 
via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/45352f48
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/45352f48
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/45352f48

Branch: refs/heads/YARN-3368
Commit: 45352f48cd4bd4f1773a2a34adc94952a951a89b
Parents: b226508
Author: Wangda Tan 
Authored: Thu Aug 11 14:59:14 2016 -0700
Committer: Wangda Tan 
Committed: Wed Aug 17 10:58:01 2016 -0700

--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/45352f48/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index 6d46fda..2933a76 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -20,12 +20,12 @@
   
  <parent>
    <artifactId>hadoop-yarn</artifactId>
    <groupId>org.apache.hadoop</groupId>
-   <version>3.0.0-alpha1-SNAPSHOT</version>
+   <version>3.0.0-alpha2-SNAPSHOT</version>
  </parent>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-ui</artifactId>
- <version>3.0.0-alpha1-SNAPSHOT</version>
+ <version>3.0.0-alpha2-SNAPSHOT</version>
  <name>Apache Hadoop YARN UI</name>
  <packaging>${packaging.type}</packaging>
 





[42/50] [abbrv] hadoop git commit: YARN-5183. [YARN-3368] Support for responsive navbar when window is resized. (Kai Sasaki via Sunil G)

2016-08-17 Thread wangda
YARN-5183. [YARN-3368] Support for responsive navbar when window is resized. 
(Kai Sasaki via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dde888e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dde888e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dde888e9

Branch: refs/heads/YARN-3368
Commit: dde888e9c198f3bd2adf87813a71783261f4d2b6
Parents: 7cc20d4
Author: Sunil 
Authored: Fri Jun 10 10:33:41 2016 +0530
Committer: Wangda Tan 
Committed: Wed Aug 17 10:58:01 2016 -0700

--
 .../hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dde888e9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
index bce18ce..d21cc3e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
@@ -32,6 +32,9 @@ module.exports = function(defaults) {
   app.import("bower_components/select2/dist/js/select2.min.js");
   app.import('bower_components/jquery-ui/jquery-ui.js');
   app.import('bower_components/more-js/dist/more.js');
+  app.import('bower_components/bootstrap/dist/css/bootstrap.css');
+  app.import('bower_components/bootstrap/dist/css/bootstrap-theme.css');
+  app.import('bower_components/bootstrap/dist/js/bootstrap.min.js');
 
   // Use `app.import` to add additional libraries to the generated
   // output files.





[25/50] [abbrv] hadoop git commit: MAPREDUCE-6690. Limit the number of resources a single map reduce job can submit for localization. Contributed by Chris Trezzo

2016-08-17 Thread wangda
MAPREDUCE-6690. Limit the number of resources a single map reduce job can 
submit for localization. Contributed by Chris Trezzo


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f80a7298
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f80a7298
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f80a7298

Branch: refs/heads/YARN-3368
Commit: f80a7298325a4626638ee24467e2012442e480d4
Parents: 7f05ff7
Author: Jason Lowe 
Authored: Wed Aug 17 16:21:20 2016 +
Committer: Jason Lowe 
Committed: Wed Aug 17 16:22:31 2016 +

--
 .../hadoop/mapreduce/JobResourceUploader.java   | 214 +--
 .../apache/hadoop/mapreduce/MRJobConfig.java|  28 ++
 .../ClientDistributedCacheManager.java  |  15 +-
 .../src/main/resources/mapred-default.xml   |  30 ++
 .../mapreduce/TestJobResourceUploader.java  | 355 +++
 .../apache/hadoop/mapreduce/v2/TestMRJobs.java  | 166 -
 6 files changed, 776 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f80a7298/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
index fa4dd86..15dbc13 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
@@ -21,12 +21,16 @@ import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
@@ -34,6 +38,8 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager;
 import org.apache.hadoop.mapreduce.filecache.DistributedCache;
 
+import com.google.common.annotations.VisibleForTesting;
+
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
 class JobResourceUploader {
@@ -86,31 +92,37 @@ class JobResourceUploader {
 FsPermission mapredSysPerms =
 new FsPermission(JobSubmissionFiles.JOB_DIR_PERMISSION);
 FileSystem.mkdirs(jtFs, submitJobDir, mapredSysPerms);
-// add all the command line files/ jars and archive
-// first copy them to jobtrackers filesystem
 
-uploadFiles(conf, submitJobDir, mapredSysPerms, replication);
-uploadLibJars(conf, submitJobDir, mapredSysPerms, replication);
-uploadArchives(conf, submitJobDir, mapredSysPerms, replication);
-uploadJobJar(job, submitJobDir, replication);
+Collection files = conf.getStringCollection("tmpfiles");
+Collection libjars = conf.getStringCollection("tmpjars");
+Collection archives = conf.getStringCollection("tmparchives");
+String jobJar = job.getJar();
+
+Map statCache = new HashMap();
+checkLocalizationLimits(conf, files, libjars, archives, jobJar, statCache);
+
+uploadFiles(conf, files, submitJobDir, mapredSysPerms, replication);
+uploadLibJars(conf, libjars, submitJobDir, mapredSysPerms, replication);
+uploadArchives(conf, archives, submitJobDir, mapredSysPerms, replication);
+uploadJobJar(job, jobJar, submitJobDir, replication);
 addLog4jToDistributedCache(job, submitJobDir);
 
 // set the timestamps of the archives and files
 // set the public/private visibility of the archives and files
-
ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(conf);
+ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(conf,
+statCache);
 // get DelegationToken for cached file
 ClientDistributedCacheManager.getDelegationTokens(conf,
 job.getCredentials());
   }
 
-  private void uploadFiles(Configuration conf, Path submitJobDir,
-  FsPermission 
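The hunk breaks off above, but the new flow is visible: the resource lists are collected, checked against limits via checkLocalizationLimits(), then uploaded. A rough sketch of such a pre-upload gate follows; the property key is a hypothetical stand-in for whichever keys this patch actually adds to MRJobConfig.

```java
// Hypothetical sketch of a pre-submission gate in the spirit of
// checkLocalizationLimits(); the property key below is an assumption.
import java.util.Collection;
import org.apache.hadoop.conf.Configuration;

public class LocalizationLimitSketch {
  public static void check(Configuration conf, Collection<String> files,
      Collection<String> libjars, Collection<String> archives) {
    int limit = conf.getInt("mapreduce.job.cache.limit.max-resources", 0);
    int requested = files.size() + libjars.size() + archives.size();
    if (limit > 0 && requested > limit) {  // 0 is treated as "no limit"
      throw new IllegalArgumentException("Too many distributed-cache "
          + "resources: " + requested + " requested, limit is " + limit);
    }
  }
}
```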

[43/50] [abbrv] hadoop git commit: YARN-5321. [YARN-3368] Add resource usage for application by node managers (Wangda Tan via Sunil G) YARN-5320. [YARN-3368] Add resource usage by applications and que

2016-08-17 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a0f7ea5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
index 8ce4ffa..aae4177 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
@@ -16,55 +16,95 @@
  * limitations under the License.
 }}
 
-
-  {{queue-navigator model=model.queues selected=model.selected}}
+
+  {{em-breadcrumbs items=breadcrumbs}}
 
 
-
-  
-{{queue-configuration-table queue=model.selectedQueue}}
-  
+
+  
 
-  
-{{bar-chart data=model.selectedQueue.capacitiesBarChartData 
-title="Queue Capacities" 
-parentId="capacity-bar-chart"
-textWidth=150
-ratio=0.5
-maxHeight=350}}
-  
+
+  
+
+  Application
+
+
+  
+
+  {{#link-to 'yarn-queue' tagName="li"}}
+{{#link-to 'yarn-queue' model.selected}}Information
+{{/link-to}}
+  {{/link-to}}
+  {{#link-to 'yarn-queue-apps' tagName="li"}}
+{{#link-to 'yarn-queue-apps' model.selected}}Applications List
+{{/link-to}}
+  {{/link-to}}
+
+  
+
+  
+
 
-{{#if model.selectedQueue.hasUserUsages}}
-  
-{{donut-chart data=model.selectedQueue.userUsagesDonutChartData 
-title="User Usages" 
-showLabels=true
-parentId="userusage-donut-chart"
-maxHeight=350}}
-  
-{{/if}}
+
+  
+  
 
-  
-{{donut-chart data=model.selectedQueue.numOfApplicationsDonutChartData 
-title="Running Apps" 
-showLabels=true
-parentId="numapplications-donut-chart"
-ratio=0.5
-maxHeight=350}}
-  
-
+
+  
+
+  Queue Information
+
+{{queue-configuration-table queue=model.selectedQueue}}
+  
+
 
-
+
+  
+
+  Queue Capacities
+
+
+  
+  {{bar-chart data=model.selectedQueue.capacitiesBarChartData
+  title=""
+  parentId="capacity-bar-chart"
+  textWidth=170
+  ratio=0.55
+  maxHeight=350}}
+
+  
+
+
+{{#if model.selectedQueue.hasUserUsages}}
+  
+{{donut-chart data=model.selectedQueue.userUsagesDonutChartData
+title="User Usages"
+showLabels=true
+parentId="userusage-donut-chart"
+type="memory"
+ratio=0.6
+maxHeight=350}}
+  
+{{/if}}
+
+
+  
+
+  Running Apps
+
+
+  {{donut-chart 
data=model.selectedQueue.numOfApplicationsDonutChartData
+  showLabels=true
+  parentId="numapplications-donut-chart"
+  ratio=0.6
+  maxHeight=350}}
+
+  
+
+
+  
+
 
-
-  
-{{#if model.apps}}
-  {{app-table table-id="apps-table" arr=model.apps}}
-  {{simple-table table-id="apps-table" bFilter=true 
colTypes="elapsed-time" colTargets="7"}}
-{{else}}
-  Could not find any applications from this 
cluster
-{{/if}}
   
 
-
-{{outlet}}
\ No newline at end of file
+{{outlet}}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a0f7ea5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
new file mode 100644
index 000..e27341b
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
@@ -0,0 +1,72 @@
+{{!
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on 

[49/50] [abbrv] hadoop git commit: YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena via wangda)

2016-08-17 Thread wangda
YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena via 
wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ecf1ab3a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ecf1ab3a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ecf1ab3a

Branch: refs/heads/YARN-3368
Commit: ecf1ab3a0cc7248448a9dc2010bc427e7829685a
Parents: 3d89122
Author: Wangda Tan 
Authored: Mon Mar 21 13:13:02 2016 -0700
Committer: Wangda Tan 
Committed: Wed Aug 17 10:58:01 2016 -0700

--
 .../hadoop-yarn-ui/app/adapters/cluster-info.js |   5 +-
 .../app/adapters/cluster-metric.js  |   5 +-
 .../app/adapters/yarn-app-attempt.js|   3 +-
 .../hadoop-yarn-ui/app/adapters/yarn-app.js |   3 +-
 .../app/adapters/yarn-container-log.js  |  74 +
 .../app/adapters/yarn-container.js  |   5 +-
 .../app/adapters/yarn-node-app.js   |  63 
 .../app/adapters/yarn-node-container.js |  64 
 .../hadoop-yarn-ui/app/adapters/yarn-node.js|  40 +
 .../hadoop-yarn-ui/app/adapters/yarn-queue.js   |   3 +-
 .../hadoop-yarn-ui/app/adapters/yarn-rm-node.js |  45 ++
 .../app/components/simple-table.js  |  38 -
 .../hadoop-yarn/hadoop-yarn-ui/app/config.js|  27 
 .../hadoop-yarn/hadoop-yarn-ui/app/constants.js |  24 +++
 .../app/controllers/application.js  |  55 +++
 .../hadoop-yarn-ui/app/helpers/divide.js|  31 
 .../app/helpers/log-files-comma.js  |  48 ++
 .../hadoop-yarn-ui/app/helpers/node-link.js |  37 +
 .../hadoop-yarn-ui/app/helpers/node-menu.js |  66 
 .../hadoop-yarn-ui/app/models/yarn-app.js   |  14 +-
 .../app/models/yarn-container-log.js|  25 +++
 .../hadoop-yarn-ui/app/models/yarn-node-app.js  |  44 ++
 .../app/models/yarn-node-container.js   |  57 +++
 .../hadoop-yarn-ui/app/models/yarn-node.js  |  33 
 .../hadoop-yarn-ui/app/models/yarn-rm-node.js   |  92 +++
 .../hadoop-yarn/hadoop-yarn-ui/app/router.js|  13 ++
 .../hadoop-yarn-ui/app/routes/application.js|  38 +
 .../hadoop-yarn-ui/app/routes/index.js  |  29 
 .../hadoop-yarn-ui/app/routes/yarn-apps.js  |   4 +-
 .../app/routes/yarn-container-log.js|  55 +++
 .../hadoop-yarn-ui/app/routes/yarn-node-app.js  |  29 
 .../hadoop-yarn-ui/app/routes/yarn-node-apps.js |  29 
 .../app/routes/yarn-node-container.js   |  30 
 .../app/routes/yarn-node-containers.js  |  28 
 .../hadoop-yarn-ui/app/routes/yarn-node.js  |  29 
 .../hadoop-yarn-ui/app/routes/yarn-nodes.js |  25 +++
 .../app/serializers/yarn-container-log.js   |  39 +
 .../app/serializers/yarn-node-app.js|  86 +++
 .../app/serializers/yarn-node-container.js  |  74 +
 .../hadoop-yarn-ui/app/serializers/yarn-node.js |  56 +++
 .../app/serializers/yarn-rm-node.js |  77 ++
 .../app/templates/application.hbs   |   4 +-
 .../hadoop-yarn-ui/app/templates/error.hbs  |  19 +++
 .../hadoop-yarn-ui/app/templates/notfound.hbs   |  20 +++
 .../hadoop-yarn-ui/app/templates/yarn-apps.hbs  |   4 +-
 .../app/templates/yarn-container-log.hbs|  36 +
 .../app/templates/yarn-node-app.hbs |  60 
 .../app/templates/yarn-node-apps.hbs|  51 +++
 .../app/templates/yarn-node-container.hbs   |  70 +
 .../app/templates/yarn-node-containers.hbs  |  58 +++
 .../hadoop-yarn-ui/app/templates/yarn-node.hbs  |  94 
 .../hadoop-yarn-ui/app/templates/yarn-nodes.hbs |  65 
 .../hadoop-yarn-ui/app/utils/converter.js   |  21 ++-
 .../hadoop-yarn-ui/app/utils/sorter.js  |  42 -
 .../hadoop-yarn/hadoop-yarn-ui/bower.json   |   2 +-
 .../hadoop-yarn-ui/config/environment.js|   1 -
 .../unit/adapters/yarn-container-log-test.js|  73 +
 .../tests/unit/adapters/yarn-node-app-test.js   |  93 +++
 .../unit/adapters/yarn-node-container-test.js   |  93 +++
 .../tests/unit/adapters/yarn-node-test.js   |  42 +
 .../tests/unit/adapters/yarn-rm-node-test.js|  44 ++
 .../unit/models/yarn-container-log-test.js  |  48 ++
 .../tests/unit/models/yarn-node-app-test.js |  65 
 .../unit/models/yarn-node-container-test.js |  78 ++
 .../tests/unit/models/yarn-node-test.js |  58 +++
 .../tests/unit/models/yarn-rm-node-test.js  |  95 
 .../unit/routes/yarn-container-log-test.js  | 120 +++
 .../tests/unit/routes/yarn-node-app-test.js |  56 +++
 .../tests/unit/routes/yarn-node-apps-test.js|  60 
 .../unit/routes/yarn-node-container-test.js |  

[08/50] [abbrv] hadoop git commit: HDFS-10559. DiskBalancer: Use SHA1 for Plan ID. Contributed by Xiaobing Zhou.

2016-08-17 Thread wangda
HDFS-10559. DiskBalancer: Use SHA1 for Plan ID. Contributed by Xiaobing Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5628b36c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5628b36c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5628b36c

Branch: refs/heads/YARN-3368
Commit: 5628b36c0872d58c9b25f23da3dab4eafad9bca3
Parents: 02abd13
Author: Anu Engineer 
Authored: Mon Aug 15 20:10:21 2016 -0700
Committer: Anu Engineer 
Committed: Mon Aug 15 20:10:21 2016 -0700

--
 .../hadoop/hdfs/protocol/ClientDatanodeProtocol.java  |  2 +-
 .../ClientDatanodeProtocolTranslatorPB.java   |  2 +-
 .../src/main/proto/ClientDatanodeProtocol.proto   |  2 +-
 .../hadoop/hdfs/server/datanode/DiskBalancer.java | 14 +++---
 .../server/diskbalancer/command/CancelCommand.java|  2 +-
 .../server/diskbalancer/command/ExecuteCommand.java   |  2 +-
 .../hdfs/server/diskbalancer/TestDiskBalancer.java|  4 ++--
 .../hdfs/server/diskbalancer/TestDiskBalancerRPC.java |  2 +-
 .../diskbalancer/TestDiskBalancerWithMockMover.java   |  8 
 9 files changed, 19 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5628b36c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
index 477d308..10041f5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
@@ -175,7 +175,7 @@ public interface ClientDatanodeProtocol {
   /**
* Cancel an executing plan.
*
-   * @param planID - A SHA512 hash of the plan string.
+   * @param planID - A SHA-1 hash of the plan string.
*/
   void cancelDiskBalancePlan(String planID) throws IOException;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5628b36c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
index 045ccd5..0cf006c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
@@ -369,7 +369,7 @@ public class ClientDatanodeProtocolTranslatorPB implements
   /**
* Cancels an executing disk balancer plan.
*
-   * @param planID - A SHA512 hash of the plan string.
+   * @param planID - A SHA-1 hash of the plan string.
* @throws IOException on error
*/
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5628b36c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
index 11d04af..e4333cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
@@ -154,7 +154,7 @@ message GetBalancerBandwidthResponseProto {
  * balancer plan to a data node.
  */
 message SubmitDiskBalancerPlanRequestProto {
-  required string planID = 1; // A hash of the plan like SHA512
+  required string planID = 1; // A hash of the plan like SHA-1
   required string plan = 2;   // Plan file data in Json format
   optional uint64 planVersion = 3;// Plan version number
   optional bool ignoreDateCheck = 4;  // Ignore date checks on this plan.
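
For context, a minimal, hypothetical sketch (not part of the patch) of how a
client could derive a plan ID under the new scheme -- the SHA-1 hex digest of
the plan's JSON text. The plan JSON literal below is made up for illustration.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PlanIdSketch {
  // Lowercase hex SHA-1 digest of the plan text: 40 hex characters,
  // versus the 128 implied by the old SHA512 wording.
  static String sha1Hex(String plan) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-1");
    byte[] digest = md.digest(plan.getBytes(StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder(digest.length * 2);
    for (byte b : digest) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) throws Exception {
    String planJson = "{\"volumeSetPlans\":[]}"; // made-up plan body
    System.out.println(sha1Hex(planJson));       // value passed as planID
  }
}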

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5628b36c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
--
diff --git 

[20/50] [abbrv] hadoop git commit: HADOOP-13494. ReconfigurableBase can log sensitive information. Contributed by Sean Mackrory.

2016-08-17 Thread wangda
HADOOP-13494. ReconfigurableBase can log sensitive information. Contributed by 
Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b689e7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b689e7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b689e7a

Branch: refs/heads/YARN-3368
Commit: 4b689e7a758a55cec2ca8398727feefc8ac21bfd
Parents: 6c154ab
Author: Andrew Wang 
Authored: Tue Aug 16 15:01:18 2016 -0700
Committer: Andrew Wang 
Committed: Tue Aug 16 15:01:18 2016 -0700

--
 .../org/apache/hadoop/conf/ConfigRedactor.java  | 84 
 .../apache/hadoop/conf/ReconfigurableBase.java  | 13 ++-
 .../fs/CommonConfigurationKeysPublic.java   | 14 
 .../src/main/resources/core-default.xml | 10 +++
 .../apache/hadoop/conf/TestConfigRedactor.java  | 72 +
 5 files changed, 190 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b689e7a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
new file mode 100644
index 000..0ba756c
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
@@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.conf;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.regex.Pattern;
+
+import static org.apache.hadoop.fs.CommonConfigurationKeys.*;
+
+/**
+ * Tool for redacting sensitive information when displaying config parameters.
+ *
+ * Some config parameters contain sensitive information (for example, cloud
+ * storage keys). When these properties are displayed in plaintext, we should
+ * redact their values as appropriate.
+ */
+public class ConfigRedactor {
+
+  private static final String REDACTED_TEXT = "<redacted>";
+
+  private List<Pattern> compiledPatterns;
+
+  public ConfigRedactor(Configuration conf) {
+    String sensitiveRegexList = conf.get(
+        HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS,
+        HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS_DEFAULT);
+    List<String> sensitiveRegexes =
+        Arrays.asList(sensitiveRegexList.split(","));
+    compiledPatterns = new ArrayList<Pattern>();
+    for (String regex : sensitiveRegexes) {
+      Pattern p = Pattern.compile(regex);
+      compiledPatterns.add(p);
+    }
+  }
+
+  /**
+   * Given a key / value pair, decides whether or not to redact and returns
+   * either the original value or text indicating it has been redacted.
+   *
+   * @param key the configuration key being displayed
+   * @param value the raw value associated with the key
+   * @return Original value, or text indicating it has been redacted
+   */
+  public String redact(String key, String value) {
+    if (configIsSensitive(key)) {
+      return REDACTED_TEXT;
+    }
+    return value;
+  }
+
+  /**
+   * Matches given config key against patterns and determines whether or not
+   * it should be considered sensitive enough to redact in logs and other
+   * plaintext displays.
+   *
+   * @param key the configuration key to test
+   * @return True if parameter is considered sensitive
+   */
+  private boolean configIsSensitive(String key) {
+    for (Pattern regex : compiledPatterns) {
+      if (regex.matcher(key).find()) {
+        return true;
+      }
+    }
+    return false;
+  }
+}
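
As a usage sketch (assumptions: the property name fs.s3a.secret.key and the
expectation that the default pattern list matches it are illustrative, not
taken from this patch), redacting values before printing them would look
roughly like this:

import java.util.Map;

import org.apache.hadoop.conf.ConfigRedactor;
import org.apache.hadoop.conf.Configuration;

public class RedactorSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumption: a key containing "secret.key" is covered by the default
    // hadoop.security.sensitive-config-keys patterns; if not, extend them.
    conf.set("fs.s3a.secret.key", "super-secret-value");

    ConfigRedactor redactor = new ConfigRedactor(conf);
    for (Map.Entry<String, String> entry : conf) {
      // Sensitive values print as "<redacted>"; everything else unchanged.
      System.out.println(entry.getKey() + " = "
          + redactor.redact(entry.getKey(), entry.getValue()));
    }
  }
}

Note that configIsSensitive() uses Matcher.find(), so each configured pattern
only needs to match a substring of the key; short patterns therefore cover
long, namespaced property names.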

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b689e7a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
index 
