[jira] [Commented] (MAPREDUCE-6291) Correct mapred queue usage command

2015-03-27 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383385#comment-14383385
 ] 

Rohith commented on MAPREDUCE-6291:
---

I think it is better to stick to [~qwertymaniac]'s suggestion: all Hadoop script 
help message changes can go together in one JIRA. I believe this will 
reduce work.

 Correct mapred queue usage command
 --

 Key: MAPREDUCE-6291
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6291
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: MAPRED-6291-001.patch, MAPRED-6291.patch, 
 MAPREDUCE-6291-002.patch


  *Currently it is like the following:* 
 Usage: JobQueueClient <command> <args>
  *It should be:* 
 Usage: queue <command> <args>
  *For more details, see the following:* 
 {noformat}
 hdfs@host1:/hadoop/bin ./mapred queue
 Usage: JobQueueClient <command> <args>
   [-list]
   [-info <job-queue-name> [-showJobs]]
   [-showacls] 
 {noformat}
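
As a minimal sketch of the kind of change being asked for here, the snippet below just prints the corrected usage header ("queue" instead of the internal class name "JobQueueClient"); the class and method names are made up for illustration and are not the actual patch:
{noformat}
public class QueueUsageExample {
  // Illustrative only: the corrected usage text proposed above.
  static void printUsage() {
    System.err.println("Usage: queue <command> <args>");
    System.err.println("\t[-list]");
    System.err.println("\t[-info <job-queue-name> [-showJobs]]");
    System.err.println("\t[-showacls]");
  }

  public static void main(String[] args) {
    printUsage();
  }
}
{noformat}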



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6165) [JDK8] TestCombineFileInputFormat failed on JDK8

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383462#comment-14383462
 ] 

Hadoop QA commented on MAPREDUCE-6165:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707697/MAPREDUCE-6165-003.patch
  against trunk revision af618f2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution
  org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle
  org.apache.hadoop.mapred.TestMiniMRChildTask
  org.apache.hadoop.mapred.TestMiniMRBringup
  org.apache.hadoop.conf.TestNoDefaultsJobConf
  org.apache.hadoop.mapred.TestJobSysDirWithDFS
  org.apache.hadoop.mapred.TestMiniMRClientCluster
  org.apache.hadoop.mapred.TestClusterMRNotification
  org.apache.hadoop.mapred.TestMRTimelineEventHandling
  org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService
  org.apache.hadoop.mapreduce.security.TestBinaryTokenFile
  org.apache.hadoop.mapred.TestReduceFetch
  org.apache.hadoop.mapreduce.v2.TestNonExistentJob
  org.apache.hadoop.mapreduce.v2.TestMiniMRProxyUser
  org.apache.hadoop.mapred.TestNetworkedJob
  org.apache.hadoop.ipc.TestMRCJCSocketFactory
  org.apache.hadoop.mapreduce.v2.TestMROldApiJobs
  org.apache.hadoop.mapred.TestClusterMapReduceTestCase
  org.apache.hadoop.mapred.TestMRIntermediateDataEncryption
  org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
  org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter
  org.apache.hadoop.mapreduce.TestLargeSort
  org.apache.hadoop.mapreduce.v2.TestMRAppWithCombiner
  org.apache.hadoop.mapred.TestJobName
  org.apache.hadoop.mapreduce.v2.TestMRJobsWithProfiler
  org.apache.hadoop.mapred.TestJobCounters
  org.apache.hadoop.mapreduce.TestMapReduceLazyOutput
  org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
  org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath
  org.apache.hadoop.mapreduce.v2.TestUberAM
  org.apache.hadoop.mapreduce.security.TestMRCredentials
  org.apache.hadoop.mapred.TestLazyOutput
  org.apache.hadoop.mapred.TestMerge
  
org.apache.hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities
  org.apache.hadoop.mapreduce.v2.TestRMNMInfo
  org.apache.hadoop.mapred.TestJobCleanup
  org.apache.hadoop.mapreduce.TestChild
  org.apache.hadoop.mapred.TestMiniMRClasspath
  org.apache.hadoop.mapreduce.TestMRJobClient
  org.apache.hadoop.mapreduce.v2.TestMRJobs

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5340//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5340//console

This message is automatically generated.

 [JDK8] TestCombineFileInputFormat failed on JDK8
 

 Key: MAPREDUCE-6165
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6165
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: MAPREDUCE-6165-001.patch, MAPREDUCE-6165-002.patch, 
 MAPREDUCE-6165-003.patch, MAPREDUCE-6165-003.patch, 
 MAPREDUCE-6165-reproduce.patch


 The error msg:
 {noformat}
 testSplitPlacementForCompressedFiles(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
   Time 

[jira] [Updated] (MAPREDUCE-6165) [JDK8] TestCombineFileInputFormat failed on JDK8

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6165:
--
Attachment: MAPREDUCE-6165-003.patch

The test results look strange. Attaching the v3 patch again to kick Jenkins.

 [JDK8] TestCombineFileInputFormat failed on JDK8
 

 Key: MAPREDUCE-6165
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6165
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: MAPREDUCE-6165-001.patch, MAPREDUCE-6165-002.patch, 
 MAPREDUCE-6165-003.patch, MAPREDUCE-6165-003.patch, 
 MAPREDUCE-6165-reproduce.patch


 The error msg:
 {noformat}
 testSplitPlacementForCompressedFiles(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
   Time elapsed: 2.487 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: expected:<2> but was:<1>
   at junit.framework.Assert.fail(Assert.java:57)
   at junit.framework.Assert.failNotEquals(Assert.java:329)
   at junit.framework.Assert.assertEquals(Assert.java:78)
   at junit.framework.Assert.assertEquals(Assert.java:234)
   at junit.framework.Assert.assertEquals(Assert.java:241)
   at junit.framework.TestCase.assertEquals(TestCase.java:409)
   at 
 org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:911)
 testSplitPlacement(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
   Time elapsed: 0.985 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: expected:<2> but was:<1>
   at junit.framework.Assert.fail(Assert.java:57)
   at junit.framework.Assert.failNotEquals(Assert.java:329)
   at junit.framework.Assert.assertEquals(Assert.java:78)
   at junit.framework.Assert.assertEquals(Assert.java:234)
   at junit.framework.Assert.assertEquals(Assert.java:241)
   at junit.framework.TestCase.assertEquals(TestCase.java:409)
   at 
 org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:368)
 {noformat}
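
The failures above point to an assertion that depends on the order in which splits or block locations are returned, which can differ between JDK7 and JDK8 (for example, through changed hash iteration order). As a hedged illustration only, and not necessarily what the attached patches do, an order-independent comparison removes that dependence:
{noformat}
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OrderIndependentAssertExample {
  // Compare expected and actual values as sets so the check no longer
  // depends on the iteration order of the underlying collections.
  static void assertSameElements(String[] expected, String[] actual) {
    Set<String> want = new HashSet<>(Arrays.asList(expected));
    Set<String> got = new HashSet<>(Arrays.asList(actual));
    assertEquals(want, got);
  }
}
{noformat}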



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5608) Replace and deprecate mapred.tasktracker.indexcache.mb

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383662#comment-14383662
 ] 

Hadoop QA commented on MAPREDUCE-5608:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707730/MAPREDUCE-5608-002.patch
  against trunk revision af618f2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5342//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5342//console

This message is automatically generated.

 Replace and deprecate mapred.tasktracker.indexcache.mb
 --

 Key: MAPREDUCE-5608
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5608
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Sandy Ryza
Assignee: Akira AJISAKA
  Labels: configuration, newbie
 Attachments: MAPREDUCE-5608-002.patch, MAPREDUCE-5608.patch


 In MR2 mapred.tasktracker.indexcache.mb still works for configuring the size 
 of the shuffle service index cache.  As the tasktracker no longer exists, we 
 should replace this with something like mapreduce.shuffle.indexcache.mb. 
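
As a minimal sketch of how such a rename is usually wired up through Hadoop's Configuration deprecation support (the new key name simply follows the "something like" suggestion above and is not final):
{noformat}
import org.apache.hadoop.conf.Configuration;

public class IndexCacheKeyDeprecationExample {
  static {
    // Map the old TaskTracker-era key to the proposed replacement so that
    // existing configs keep working while emitting a deprecation warning.
    Configuration.addDeprecation("mapred.tasktracker.indexcache.mb",
        "mapreduce.shuffle.indexcache.mb");
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("mapred.tasktracker.indexcache.mb", "20");
    // Reading through the new key picks up the value set under the old one.
    System.out.println(conf.get("mapreduce.shuffle.indexcache.mb"));
  }
}
{noformat}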



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5762) Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2

2015-03-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5762:
-
Attachment: MAPREDUCE-5762-branch-2-002.patch

Rebased for the latest branch-2.

 Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2
 

 Key: MAPREDUCE-5762
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5762
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: MAPREDUCE-5762-branch-2-002.patch, 
 MAPREDUCE-5762-branch-2.patch


 MRv1 configs are removed in trunk, but they are not removed in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5762) Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2

2015-03-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5762:
-
Priority: Minor  (was: Major)
Target Version/s: 2.8.0  (was: 2.4.0)
  Issue Type: Improvement  (was: Bug)

 Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2
 

 Key: MAPREDUCE-5762
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5762
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: MAPREDUCE-5762-branch-2-002.patch, 
 MAPREDUCE-5762-branch-2.patch


 MRv1 configs are removed in trunk, but they are not removed in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5608) Replace and deprecate mapred.tasktracker.indexcache.mb

2015-03-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5608:
-
Attachment: MAPREDUCE-5608-002.patch

Rebased.

 Replace and deprecate mapred.tasktracker.indexcache.mb
 --

 Key: MAPREDUCE-5608
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5608
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Sandy Ryza
Assignee: Akira AJISAKA
  Labels: configuration, newbie
 Attachments: MAPREDUCE-5608-002.patch, MAPREDUCE-5608.patch


 In MR2 mapred.tasktracker.indexcache.mb still works for configuring the size 
 of the shuffle service index cache.  As the tasktracker no longer exists, we 
 should replace this with something like mapreduce.shuffle.indexcache.mb. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5762) Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383604#comment-14383604
 ] 

Hadoop QA commented on MAPREDUCE-5762:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707722/MAPREDUCE-5762-branch-2-002.patch
  against trunk revision af618f2.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5341//console

This message is automatically generated.

 Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2
 

 Key: MAPREDUCE-5762
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5762
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: MAPREDUCE-5762-branch-2-002.patch, 
 MAPREDUCE-5762-branch-2.patch


 MRv1 configs are removed in trunk, but they are not removed in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (MAPREDUCE-6294) Remove an extra parameter described in Javadoc of TockenCache

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa moved HADOOP-11759 to MAPREDUCE-6294:


Affects Version/s: (was: 2.6.0)
   (was: 3.0.0)
   3.0.0
   2.6.0
  Key: MAPREDUCE-6294  (was: HADOOP-11759)
  Project: Hadoop Map/Reduce  (was: Hadoop Common)

 Remove an extra parameter described in Javadoc of TockenCache
 -

 Key: MAPREDUCE-6294
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6294
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.6.0, 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {
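
For reference, a hedged sketch of how the Javadoc could read once the stray @param p (which has no matching parameter in the method signature) is dropped; the wording below is illustrative, not the committed text:
{noformat}
/**
 * Get the delegation token for a specific FileSystem.
 * @param fs the FileSystem to obtain a token for
 * @param credentials the Credentials object the token is stored into
 * @param conf the Configuration to use
 * @throws IOException
 */
static void obtainTokensForNamenodesInternal(FileSystem fs,
    Credentials credentials, Configuration conf) throws IOException {
{noformat}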



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6294) Remove an extra parameter described in Javadoc of TockenCache

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6294:
--
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~brahmareddy] for the 
contribution and thanks [~airbots] for the report!

 Remove an extra parameter described in Javadoc of TockenCache
 -

 Key: MAPREDUCE-6294
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6294
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Fix For: 2.8.0

 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5762) Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2

2015-03-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384047#comment-14384047
 ] 

Harsh J commented on MAPREDUCE-5762:


Went over the entire property list, and the removals look fine to me.

+1, please commit. Many thanks for the work!

 Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2
 

 Key: MAPREDUCE-5762
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5762
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: MAPREDUCE-5762-branch-2-002.patch, 
 MAPREDUCE-5762-branch-2.patch


 MRv1 configs are removed in trunk, but they are not removed in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6294) Remove an extra parameter described in Javadoc of TockenCache

2015-03-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384007#comment-14384007
 ] 

Brahma Reddy Battula commented on MAPREDUCE-6294:
-

Thanks a lot [~ozawa]!!!

 Remove an extra parameter described in Javadoc of TockenCache
 -

 Key: MAPREDUCE-6294
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6294
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Fix For: 2.8.0

 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6294) Remove an extra parameter described in Javadoc of TockenCache

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383977#comment-14383977
 ] 

Hudson commented on MAPREDUCE-6294:
---

FAILURE: Integrated in Hadoop-trunk-Commit #7448 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7448/])
MAPREDUCE-6294. Remove an extra parameter described in Javadoc of TockenCache. 
Contributed by Brahma Reddy Battula. (ozawa: rev 
05499b1093ea6ba6a39a1354d67b0a46a2982824)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java


 Remove an extra parameter described in Javadoc of TockenCache
 -

 Key: MAPREDUCE-6294
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6294
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Fix For: 2.8.0

 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5036) Default shuffle handler port should not be 8080

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384132#comment-14384132
 ] 

Hadoop QA commented on MAPREDUCE-5036:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12587173/MAPREDUCE-5036-2.patch
  against trunk revision 05499b1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5344//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5344//console

This message is automatically generated.

 Default shuffle handler port should not be 8080
 ---

 Key: MAPREDUCE-5036
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5036
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.4.0

 Attachments: MAPREDUCE-5036-13562.patch, MAPREDUCE-5036-2.patch, 
 MAPREDUCE-5036.patch


 The shuffle handler port (mapreduce.shuffle.port) defaults to 8080.  This is 
 a pretty common port for web services, and is likely to cause unnecessary 
 port conflicts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5036) Default shuffle handler port should not be 8080

2015-03-27 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384339#comment-14384339
 ] 

Vinod Kumar Vavilapalli commented on MAPREDUCE-5036:


I think changing it to an ephemeral port is good in general.

But the recent NM-recovery feature (YARN-1336) can make this a problem:
 - ShuffleHandler starts on port A.
 - The MR AM launches a container, gets port A as the target location for the 
intermediate output, and passes this to the reducer.
 - The NM restarts and ShuffleHandler now starts on port B.
 - The reducer fails to perform the shuffle.

Either we don't allow ephemeral ports here, or we add a mechanism for the YARN RM to 
send changed aux-service port info to all apps when a node restarts. /cc 
[~jlowe], [~djp]
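
For context, a minimal sketch of the fixed-port alternative discussed above, assuming the mapreduce.shuffle.port key and the 13562 default implied by the attachment name; a value of 0 would ask the OS for an ephemeral port, which is exactly the case that breaks across an NM restart:
{noformat}
import org.apache.hadoop.conf.Configuration;

public class ShufflePortExample {
  // Key and default here are assumptions for illustration; 13562 mirrors
  // the value suggested by the MAPREDUCE-5036-13562.patch attachment name.
  static final String SHUFFLE_PORT_KEY = "mapreduce.shuffle.port";
  static final int DEFAULT_SHUFFLE_PORT = 13562;

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 0 would request an ephemeral port; after an NM restart the handler
    // could come back on a different port, breaking reducers that cached
    // the old one (the scenario described above).
    int port = conf.getInt(SHUFFLE_PORT_KEY, DEFAULT_SHUFFLE_PORT);
    System.out.println("Shuffle handler will bind to port " + port);
  }
}
{noformat}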

 Default shuffle handler port should not be 8080
 ---

 Key: MAPREDUCE-5036
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5036
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.4.0

 Attachments: MAPREDUCE-5036-13562.patch, MAPREDUCE-5036-2.patch, 
 MAPREDUCE-5036.patch


 The shuffle handler port (mapreduce.shuffle.port) defaults to 8080.  This is 
 a pretty common port for web services, and is likely to cause unnecessary 
 port conflicts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5496) Document mapreduce.cluster.administrators in mapred-default.xml

2015-03-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5496:
-
Description: {{mapreduce.cluster.administrators}} is not documented 
anywhere. We should document it in mapred-default.xml.  (was: Two issues:

# {{mapreduce.tasktracker.group}} is mentioned in 
[http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml],
 but is no longer used.
# {{mapreduce.cluster.administrators}} is not documented anywhere.)
   Priority: Minor  (was: Major)
 Issue Type: Improvement  (was: Bug)
Summary: Document mapreduce.cluster.administrators in 
mapred-default.xml  (was: Documentation should be updated for two MR2 
properties)

Updated the description.

 Document mapreduce.cluster.administrators in mapred-default.xml
 ---

 Key: MAPREDUCE-5496
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5496
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.1.0-beta
Reporter: Srimanth Gunturi
Priority: Minor

 {{mapreduce.cluster.administrators}} is not documented anywhere. We should 
 document it in mapred-default.xml.
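
As a rough illustration of what the property controls (and therefore what a mapred-default.xml entry would need to describe), administrator ACLs are typically parsed from it along these lines; the fallback value of " " (no one) is an assumption, not the documented default:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AccessControlList;

public class ClusterAdminsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Users/groups listed in mapreduce.cluster.administrators are treated
    // as MR cluster administrators for job-level ACL checks.
    AccessControlList admins = new AccessControlList(
        conf.get("mapreduce.cluster.administrators", " "));
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    System.out.println(ugi.getShortUserName() + " is admin: "
        + admins.isUserAllowed(ugi));
  }
}
{noformat}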



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5036) Default shuffle handler port should not be 8080

2015-03-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384364#comment-14384364
 ] 

Harsh J commented on MAPREDUCE-5036:


Thanks Vinod. As it stands, is the static port config proving troublesome 
enough that we would want to support ephemeral ports?

Also, can this be done in a new JIRA, abandoning the follow-up patch that 
this one seems to be kept open for?



 Default shuffle handler port should not be 8080
 ---

 Key: MAPREDUCE-5036
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5036
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.4.0

 Attachments: MAPREDUCE-5036-13562.patch, MAPREDUCE-5036-2.patch, 
 MAPREDUCE-5036.patch


 The shuffle handler port (mapreduce.shuffle.port) defaults to 8080.  This is 
 a pretty common port for web services, and is likely to cause unnecessary 
 port conflicts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384384#comment-14384384
 ] 

Karthik Kambatla commented on MAPREDUCE-6288:
-

On a cluster that had this issue, Robert added the executable bit to the 
appropriate directories to verify that this alone is enough to fix the issue.

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1213)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1191)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:299)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:265)
   at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:257)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1490)
   

[jira] [Updated] (MAPREDUCE-5762) Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2

2015-03-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5762:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed v2 patch to branch-2. Thanks [~qwertymaniac] for review!

 Port MAPREDUCE-3223 (Remove MRv1 config from mapred-default.xml) to branch-2
 

 Key: MAPREDUCE-5762
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5762
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
 Fix For: 2.8.0

 Attachments: MAPREDUCE-5762-branch-2-002.patch, 
 MAPREDUCE-5762-branch-2.patch


 MRv1 configs are removed in trunk, but they are not removed in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5496) Documentation should be updated for two MR2 properties

2015-03-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384340#comment-14384340
 ] 

Akira AJISAKA commented on MAPREDUCE-5496:
--

{{mapreduce.tasktracker.group}} was removed from branch-2 by MAPREDUCE-5762, so 
only {{mapreduce.cluster.administrators}} remains to be documented.

 Documentation should be updated for two MR2 properties
 --

 Key: MAPREDUCE-5496
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5496
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.1.0-beta
Reporter: Srimanth Gunturi

 Two issues:
 # {{mapreduce.tasktracker.group}} is mentioned in 
 [http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml],
  but is no longer used.
 # {{mapreduce.cluster.administrators}} is not documented anywhere.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384360#comment-14384360
 ] 

Vinod Kumar Vavilapalli commented on MAPREDUCE-6288:


Okay, I am good with doing this. Can you please add the permissions correction to 
the patch? Thanks.

[~revans2], as clarified immediately above by [~rkanter], his patch doesn't 
open up read permissions on the files, nor does it let users list all files. We 
are going ahead with this unless there are more objections.

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1213)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1191)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:299)
   at 
 

[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384673#comment-14384673
 ] 

Robert Kanter commented on MAPREDUCE-6288:
--

I'll post an updated patch later today that makes the JHS update the 
permissions on startup.
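
A hedged sketch of the kind of startup fix being described, i.e. re-adding the execute bit on the history "done" directory so clients can traverse it; the path and permission value below are assumptions for illustration, not the attached patch:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class DoneDirPermissionFixExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical path matching the inode in the stack trace below;
    // 0771 keeps the contents unreadable/unlistable to others while
    // still allowing directory traversal (the EXECUTE bit).
    Path doneDir = new Path("/user/history/done");
    fs.setPermission(doneDir, new FsPermission((short) 0771));
  }
}
{noformat}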

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1213)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1191)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:299)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:265)
   at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:257)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1490)
   at 
 

[jira] [Updated] (MAPREDUCE-6279) AM should explicity exit JVM after all services have stopped

2015-03-27 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated MAPREDUCE-6279:
--
Status: Patch Available  (was: Open)

 AM should explicity exit JVM after all services have stopped
 

 Key: MAPREDUCE-6279
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6279
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Eric Payne
 Attachments: MAPREDUCE-6279.v1.txt


 Occasionally the MapReduce AM can get stuck trying to shut down.  
 MAPREDUCE-6049 and MAPREDUCE-5888 were specific instances that have been 
 fixed, but this can also occur with uber jobs if the task code inadvertently 
 leaves non-daemon threads lingering.
 We should explicitly shutdown the JVM after the MapReduce AM has unregistered 
 and all services have been stopped.
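
A minimal sketch of the "explicit exit" idea using Hadoop's ExitUtil, assuming it runs only after unregistration and service shutdown have completed; this is an illustration, not the attached patch:
{noformat}
import org.apache.hadoop.util.ExitUtil;

public class ExplicitAmExitExample {
  public static void main(String[] args) {
    // ... the AM unregisters from the RM and stops all services here ...

    // Force the JVM down even if task code (e.g. in uber mode) left
    // stray non-daemon threads running that would otherwise keep it alive.
    ExitUtil.terminate(0);
  }
}
{noformat}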



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6279) AM should explicity exit JVM after all services have stopped

2015-03-27 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated MAPREDUCE-6279:
--
Attachment: MAPREDUCE-6279.v1.txt

[~jlowe], are unit tests necessary for this change?

 AM should explicity exit JVM after all services have stopped
 

 Key: MAPREDUCE-6279
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6279
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Eric Payne
 Attachments: MAPREDUCE-6279.v1.txt


 Occasionally the MapReduce AM can get stuck trying to shut down.  
 MAPREDUCE-6049 and MAPREDUCE-5888 were specific instances that have been 
 fixed, but this can also occur with uber jobs if the task code inadvertently 
 leaves non-daemon threads lingering.
 We should explicitly shutdown the JVM after the MapReduce AM has unregistered 
 and all services have been stopped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6279) AM should explicity exit JVM after all services have stopped

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384871#comment-14384871
 ] 

Hadoop QA commented on MAPREDUCE-6279:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707896/MAPREDUCE-6279.v1.txt
  against trunk revision 3836ad6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app:

  
org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler

  The following test timeouts occurred in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app:

org.apache.hadoop.mapreduce.v2.app.TestStagingCleanup
org.apache.hadoop.mapreduce.v2.app.TestJobEndNotifier

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5345//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5345//console

This message is automatically generated.

 AM should explicity exit JVM after all services have stopped
 

 Key: MAPREDUCE-6279
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6279
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Eric Payne
 Attachments: MAPREDUCE-6279.v1.txt


 Occasionally the MapReduce AM can get stuck trying to shut down.  
 MAPREDUCE-6049 and MAPREDUCE-5888 were specific instances that have been 
 fixed, but this can also occur with uber jobs if the task code inadvertently 
 leaves non-daemon threads lingering.
 We should explicitly shutdown the JVM after the MapReduce AM has unregistered 
 and all services have been stopped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384879#comment-14384879
 ] 

Vinod Kumar Vavilapalli commented on MAPREDUCE-6288:


Thanks Robert. Assuming it is close, I'd like to commit it later today or over 
the weekend so that I can roll an RC for 2.7.0.

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1213)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1191)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:299)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:265)
   at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:257)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1490)
   

[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Bowen Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384888#comment-14384888
 ] 

Bowen Zhang commented on MAPREDUCE-6288:


[~rkanter], this is breaking all TestJavaActionExecutor unit tests on Oozie 
with hadoop-2.7

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1213)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1191)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:299)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:265)
   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:257)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1490)
   at 
 

[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384893#comment-14384893
 ] 

Vinod Kumar Vavilapalli commented on MAPREDUCE-6288:


[~rkanter] / [~jira.shegalov],

Sigh, I see another big issue with the patch at MAPREDUCE-5875. Now, with every 
Cluster#getJob(JobID jobId) call, we create a new JobConf and open the HDFS 
file so that it can be loaded later. That can become a huge performance hit for 
the job as well as for the Namenode. Even the previous code added the remote 
conf file, but it never got read until somebody did an explicit conf.get() - 
which doesn't happen in the usual wait-for-completion code path.
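
To make the concern concrete, here is a minimal, purely illustrative Java 
sketch (not the actual Cluster/JobConf code) contrasting an eager remote-conf 
read on every lookup with a lazy read deferred to the first explicit get(); 
the Supplier stands in for opening and parsing the conf file on HDFS.

{code:java}
// Illustrative only: contrasts eager vs lazy loading of a remote job conf.
// The Supplier stands in for "open and parse the conf XML on HDFS".
import java.util.function.Supplier;

public class ConfLoadingSketch {

  // Eager: the remote read happens at construction time, i.e. on every
  // getJob()-style call, whether or not anyone ever reads the conf.
  static class EagerConf {
    final String contents;
    EagerConf(Supplier<String> remoteRead) {
      this.contents = remoteRead.get();
    }
  }

  // Lazy: the remote read is deferred until the first explicit get().
  static class LazyConf {
    private final Supplier<String> remoteRead;
    private String contents;  // null until first use
    LazyConf(Supplier<String> remoteRead) {
      this.remoteRead = remoteRead;
    }
    synchronized String get() {
      if (contents == null) {
        contents = remoteRead.get();
      }
      return contents;
    }
  }

  public static void main(String[] args) {
    Supplier<String> remoteRead = () -> {
      System.out.println("expensive NameNode/HDFS round trip");
      return "<job conf contents>";
    };
    new EagerConf(remoteRead);                // pays the round trip immediately
    LazyConf lazy = new LazyConf(remoteRead);
    System.out.println("no read performed yet");
    lazy.get();                               // round trip happens only here
  }
}
{code}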

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1213)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
   at 
 

[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384925#comment-14384925
 ] 

Robert Kanter commented on MAPREDUCE-6288:
--

What if we do this fix for now and file a follow-up JIRA to add a cache? This 
shouldn't be that huge of a performance hit.
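
For reference, such a follow-up cache could look roughly like the sketch 
below: a small LRU map keyed by job ID so repeated getJob() calls for the same 
job do not re-read the conf from HDFS. This is only an assumption about what 
the cache might look like; ConfCache and the loader function are made-up 
names, not existing Hadoop APIs.

{code:java}
// Hypothetical follow-up sketch: a tiny LRU cache for loaded job confs.
// ConfCache is a made-up name, not an existing Hadoop class.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class ConfCache<K, V> {
  private final int maxEntries;
  private final Map<K, V> cache;

  public ConfCache(int maxEntries) {
    this.maxEntries = maxEntries;
    // access-order LinkedHashMap gives simple LRU eviction
    this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > ConfCache.this.maxEntries;
      }
    };
  }

  // Returns the cached value, loading it (e.g. reading the conf from HDFS)
  // only on a miss.
  public synchronized V get(K key, Function<K, V> loader) {
    V value = cache.get(key);
    if (value == null) {
      value = loader.apply(key);
      cache.put(key, value);
    }
    return value;
  }

  public static void main(String[] args) {
    ConfCache<String, String> cache = new ConfCache<>(100);
    // First call loads; the second call for the same job ID hits the cache.
    System.out.println(cache.get("job_1", id -> "conf for " + id));
    System.out.println(cache.get("job_1", id -> "conf for " + id));
  }
}
{code}

In practice the key would presumably be the JobID and the value the loaded 
conf (or just the fields the status rendering actually needs).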

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1213)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1191)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:299)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:265)
   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:257)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1490)
   at 
 

[jira] [Updated] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated MAPREDUCE-6288:
-
Attachment: MAPREDUCE-6288.002.patch

The MAPREDUCE-6288.002.patch builds on the previous MAPREDUCE-6288.patch I 
originally made. Besides using 771 instead of 770, it also makes the JHS check 
and correct the permissions on startup. I added a unit test and also verified 
the fix on a cluster.
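
A minimal sketch of that kind of startup check, assuming a Hadoop FileSystem 
handle and using the public getFileStatus/setPermission APIs, is below; the 
directory path comes from the stack trace in the description, and the class 
and method names are illustrative rather than the actual JHS code in the 
patch.

{code:java}
// Minimal sketch, assuming a Hadoop FileSystem handle: verify that the
// history "done" directory carries the intended permissions (771) and
// correct them if not. Class/method names are illustrative, not the patch.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class DoneDirPermissionCheck {

  public static void ensurePermission(FileSystem fs, Path doneDir)
      throws IOException {
    // 771: owner/group rwx plus world EXECUTE so clients can traverse it.
    FsPermission expected = new FsPermission((short) 0771);
    FsPermission actual = fs.getFileStatus(doneDir).getPermission();
    if (!actual.equals(expected)) {
      fs.setPermission(doneDir, expected);  // correct it on startup
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    // Path taken from the stack trace in the description.
    ensurePermission(fs, new Path("/user/history/done"));
  }
}
{code}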

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.002.patch, 
 MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1213)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1191)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:299)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:265)
   at 

[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385005#comment-14385005
 ] 

zhihai xu commented on MAPREDUCE-6288:
--

I double-checked the code in Job.java, JobClient.java, JobSubmitter.java and 
Cluster.java. Cluster#getJob is not used by Job.java, JobSubmitter.java or 
Cluster.java; the only caller of Cluster#getJob is 
JobClient#getJobUsingCluster, which in turn is used by JobClient#getJob, 
JobClient#displayTask, JobClient#getMapTaskReports, 
JobClient#getReduceTaskReports, JobClient#getCleanupTaskReports and 
JobClient#getSetupTaskReports.
All of these are public APIs that are currently only called by applications, 
for example the CLI.
So it looks like Cluster#getJob will only be used by applications, and the 
performance hit should be modest as long as applications don't call these 
functions too many times. If we can add a cache to improve the performance, 
that would be better.
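
As a hedged illustration of the client-side path described above, the snippet 
below shows an application-level lookup via JobClient#getJob, which is the 
route that reaches Cluster#getJob and therefore now pays the remote conf read; 
the job ID is simply the one from the description and serves as a placeholder.

{code:java}
// Hedged illustration of the application-level call path: the CLI-style
// lookup below goes JobClient#getJob -> getJobUsingCluster -> Cluster#getJob,
// which is where the remote conf read now happens. The job ID is a
// placeholder (the one from the description).
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class JobStatusLookup {
  public static void main(String[] args) throws Exception {
    JobClient client = new JobClient(new JobConf());
    RunningJob job = client.getJob(JobID.forName("job_1427080398288_0001"));
    if (job != null) {
      // Each such per-job lookup pays the cost discussed above.
      System.out.println("Job state: " + job.getJobState());
    }
  }
}
{code}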

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at 

[jira] [Commented] (MAPREDUCE-6288) mapred job -status fails with AccessControlException

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385061#comment-14385061
 ] 

Hadoop QA commented on MAPREDUCE-6288:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707951/MAPREDUCE-6288.002.patch
  against trunk revision 3836ad6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5346//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5346//console

This message is automatically generated.

 mapred job -status fails with AccessControlException 
 -

 Key: MAPREDUCE-6288
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6288
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: MAPREDUCE-6288-gera-001.patch, MAPREDUCE-6288.002.patch, 
 MAPREDUCE-6288.patch


 After MAPREDUCE-5875, we're seeing this Exception when trying to do {{mapred 
 job -status job_1427080398288_0001}}
 {noformat}
 Exception in thread main org.apache.hadoop.security.AccessControlException: 
 Permission denied: user=jenkins, access=EXECUTE, 
 inode=/user/history/done:mapred:hadoop:drwxrwx---
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
   at 
 org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6553)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6535)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6460)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1919)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1870)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1850)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1822)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:545)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)