[jira] [Updated] (YARN-779) AMRMClient should clean up dangling unsatisfied request
[ https://issues.apache.org/jira/browse/YARN-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maysam Yabandeh updated YARN-779:
---------------------------------
    Attachment: YARN-779.patch

Fixed the test case with the correct setting for relaxedLocality.

> AMRMClient should clean up dangling unsatisfied request
> -------------------------------------------------------
>
>                 Key: YARN-779
>                 URL: https://issues.apache.org/jira/browse/YARN-779
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 2.0.4-alpha
>            Reporter: Alejandro Abdelnur
>            Priority: Critical
>         Attachments: YARN-779.patch, YARN-779.patch
>
>
> If an AMRMClient places a ContainerRequest for 10 containers in node1 or
> node2 (assuming a single rack), the resulting ResourceRequests will be:
> {code}
> location - containers
> ---------------------
> node1 - 10
> node2 - 10
> rack  - 10
> ANY   - 10
> {code}
> Assuming 5 containers are allocated in node1 and 5 containers are allocated
> in node2, the following ResourceRequests will be outstanding on the RM:
> {code}
> location - containers
> ---------------------
> node1 - 5
> node2 - 5
> {code}
> If the AMRMClient does a new ContainerRequest allocation, this time for 5
> containers in node3, the resulting outstanding ResourceRequests on the RM
> will be:
> {code}
> location - containers
> ---------------------
> node1 - 5
> node2 - 5
> node3 - 5
> rack  - 5
> ANY   - 5
> {code}
> At this point, the scheduler may assign 5 containers to node1 and will
> never assign the 5 containers node3 asked for.
> AMRMClient should keep track of the outstanding allocation counts per
> ContainerRequest; when a count reaches zero, it should update the RACK/ANY
> entries, decrementing the dangling requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
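The cleanup the description asks for can be sketched as plain-Java bookkeeping. This is an illustrative sketch only, not the actual AMRMClient implementation: the class, method names, and the request-id keying are hypothetical, and it ignores overlapping requests that target the same node. It tracks the remaining container count per ContainerRequest and, when that count reaches zero, removes the leftover node-level entries so no dangling requests survive on the RM.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the bookkeeping YARN-779 calls for; not the real
// AMRMClient code. "outstanding" mirrors the location -> containers table
// from the issue description.
class DanglingRequestCleanup {
    private final Map<String, Integer> outstanding = new HashMap<>();
    // Containers still wanted, per ContainerRequest (keyed by an id here).
    private final Map<String, Integer> remaining = new HashMap<>();
    private final Map<String, List<String>> requestNodes = new HashMap<>();

    void addRequest(String requestId, List<String> nodes, String rack, int n) {
        remaining.put(requestId, n);
        requestNodes.put(requestId, nodes);
        for (String node : nodes) outstanding.merge(node, n, Integer::sum);
        outstanding.merge(rack, n, Integer::sum);
        outstanding.merge("ANY", n, Integer::sum);
    }

    void containerAllocated(String requestId, String node, String rack) {
        decrement(node, 1);
        decrement(rack, 1);
        decrement("ANY", 1);
        int left = remaining.merge(requestId, -1, Integer::sum);
        if (left == 0) {
            // Request fully satisfied: remove the dangling node-level
            // leftovers (the node1-5 / node2-5 rows in the description).
            for (String loc : requestNodes.get(requestId)) {
                decrement(loc, outstanding.getOrDefault(loc, 0));
            }
        }
    }

    private void decrement(String key, int by) {
        outstanding.merge(key, -by, Integer::sum);
        if (outstanding.getOrDefault(key, 0) <= 0) outstanding.remove(key);
    }

    int get(String key) {
        return outstanding.getOrDefault(key, 0);
    }
}
```

With this bookkeeping, replaying the scenario from the description leaves no node1/node2 leftovers, so a later request for node3 cannot be stolen by node1.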
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749309#comment-13749309 ]

Hadoop QA commented on YARN-1074:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12599777/YARN-1074.8.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

    org.apache.hadoop.mapreduce.security.TestJHSSecurity

The following test timeouts occurred in
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

    org.apache.hadoop.mapreduce.v2.TestUberAM

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1755//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1755//console

This message is automatically generated.

> Clean up YARN CLI app list to show only running apps.
> -----------------------------------------------------
>
>                 Key: YARN-1074
>                 URL: https://issues.apache.org/jira/browse/YARN-1074
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: client
>            Reporter: Tassapol Athiapinya
>            Assignee: Xuan Gong
>         Attachments: YARN-1074.1.patch, YARN-1074.2.patch, YARN-1074.3.patch,
>                      YARN-1074.4.patch, YARN-1074.5.patch, YARN-1074.6.patch,
>                      YARN-1074.7.patch, YARN-1074.8.patch
>
>
> Once a user brings up the YARN daemons and runs jobs, the jobs stay in the
> output of $ yarn application -list even after they complete. We want the
> YARN command line to clean up this list. Specifically, we want to remove
> applications in the FINISHED state (not Final-State) or KILLED state from
> the result.
> {code}
> [user1@host1 ~]$ yarn application -list
> Total Applications:150
> Application-Id                  Application-Name  Application-Type  User   Queue    State     Final-State  Progress  Tracking-URL
> application_1374638600275_0109  Sleep job         MAPREDUCE         user1  default  KILLED    KILLED       100%      host1:54059
> application_1374638600275_0121  Sleep job         MAPREDUCE         user1  default  FINISHED  SUCCEEDED    100%      host1:19888/jobhistory/job/job_1374638600275_0121
> application_1374638600275_0020  Sleep job         MAPREDUCE         user1  default  FINISHED  SUCCEEDED    100%      host1:19888/jobhistory/job/job_1374638600275_0020
> application_1374638600275_0038  Sleep job         MAPREDUCE         user1  default
> {code}
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749305#comment-13749305 ]

Hadoop QA commented on YARN-1074:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12599777/YARN-1074.8.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

    org.apache.hadoop.mapreduce.security.TestJHSSecurity

The following test timeouts occurred in
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

    org.apache.hadoop.mapreduce.v2.TestUberAM

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1754//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1754//console

This message is automatically generated.
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749295#comment-13749295 ]

Hadoop QA commented on YARN-1074:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12599777/YARN-1074.8.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:red}-1 core tests{color}. The following test timeouts occurred in
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

    org.apache.hadoop.mapreduce.v2.TestUberAM

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1752//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1752//console

This message is automatically generated.
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749287#comment-13749287 ]

Xuan Gong commented on YARN-1074:
---------------------------------

Command: yarn application -list --appStates runninG,FinishEd

Output:
Total number of applications (application-types: [] and states: [RUNNING, FINISHED]):2
Application-Id                  Application-Name  Application-Type  User  Queue    State     Final-State  Progress  Tracking-URL
application_1377314568682_0001  QuasiMonteCarlo   MAPREDUCE         xuan  default  FINISHED  SUCCEEDED    100%      192.168.0.16:19888/jobhistory/job/job_1377314568682_0001
application_1377314568682_0002  Sleep job         MAPREDUCE         xuan  default  RUNNING   UNDEFINED    5.03%     Xuan-Gongs-MacBook-Pro.local:49615
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749286#comment-13749286 ]

Xuan Gong commented on YARN-1074:
---------------------------------

Command: yarn application -list --appStates failed

Output:
Total number of applications (application-types: [] and states: [FAILED]):0
Application-Id                  Application-Name  Application-Type  User  Queue    State     Final-State  Progress  Tracking-URL
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749283#comment-13749283 ]

Xuan Gong commented on YARN-1074:
---------------------------------

Command: yarn application -list --appStates all, Running

Output:
Total number of applications (application-types: [] and states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED]):2
Application-Id                  Application-Name  Application-Type  User  Queue    State     Final-State  Progress  Tracking-URL
application_1377314568682_0001  QuasiMonteCarlo   MAPREDUCE         xuan  default  FINISHED  SUCCEEDED    100%      192.168.0.16:19888/jobhistory/job/job_1377314568682_0001
application_1377314568682_0002  Sleep job         MAPREDUCE         xuan  default  RUNNING   UNDEFINED    5.02%     Xuan-Gongs-MacBook-Pro.local:49615
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749282#comment-13749282 ]

Xuan Gong commented on YARN-1074:
---------------------------------

Command: yarn application -help

Output:
usage: application
 -appStates   Works with --list to filter applications based on their
              state. The valid application state can be one of the
              following: ALL,NEW,NEW_SAVING,SUBMITTED,ACCEPTED,RUNNING,
              FINISHED,FAILED,KILLED
 -appTypes    Works with --list to filter applications based on their
              type.
 -help        Displays help for all commands.
 -kill        Kills the application.
 -list        List applications from the RM. Supports optional use of
              --appTypes to filter applications based on application
              type, and --appStates to filter applications based on
              application state.
 -status      Prints the status of the application.
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749281#comment-13749281 ]

Xuan Gong commented on YARN-1074:
---------------------------------

Command: yarn application -list --appStates test

Output:
The application state test is invalid. The valid application state can be one of the following:
ALL,NEW,NEW_SAVING,SUBMITTED,ACCEPTED,RUNNING,FINISHED,FAILED,KILLED
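The filtering behavior demonstrated in these comments (case-insensitive state names, "all" expanding to every state, and the error message above for unknown values) can be sketched in plain Java. The class, enum, and method names below are illustrative stand-ins, not the actual patch code:

```java
import java.util.EnumSet;

// Hypothetical sketch of --appStates parsing as the comments demonstrate it;
// not the actual YARN-1074 patch code.
class AppStateFilter {
    // Mirrors the states listed in the CLI help/error output.
    enum YarnApplicationState {
        NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED
    }

    static EnumSet<YarnApplicationState> parse(String arg) {
        EnumSet<YarnApplicationState> states =
                EnumSet.noneOf(YarnApplicationState.class);
        for (String raw : arg.split(",")) {
            String name = raw.trim().toUpperCase();  // case-insensitive
            if (name.equals("ALL")) {
                // "all" wins over any other listed state.
                return EnumSet.allOf(YarnApplicationState.class);
            }
            try {
                states.add(YarnApplicationState.valueOf(name));
            } catch (IllegalArgumentException e) {
                // Reproduce the CLI's error message for unknown states.
                throw new IllegalArgumentException("The application state "
                        + raw.trim() + " is invalid. The valid application "
                        + "state can be one of the following: ALL,NEW,"
                        + "NEW_SAVING,SUBMITTED,ACCEPTED,RUNNING,FINISHED,"
                        + "FAILED,KILLED");
            }
        }
        return states;
    }
}
```

Under this sketch, "runninG,FinishEd" yields {RUNNING, FINISHED}, "all, Running" yields all eight states, and "test" raises the invalid-state error, matching the transcripts above.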
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749280#comment-13749280 ]

Xuan Gong commented on YARN-1074:
---------------------------------

Command: yarn application -list --appStates finished

Output:
Total number of applications (application-types: [] and states: [FINISHED]):1
Application-Id                  Application-Name  Application-Type  User  Queue    State     Final-State  Progress  Tracking-URL
application_1377314568682_0001  QuasiMonteCarlo   MAPREDUCE         xuan  default  FINISHED  SUCCEEDED    100%      192.168.0.16:19888/jobhistory/job/job_1377314568682_0001
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749279#comment-13749279 ]

Xuan Gong commented on YARN-1074:
---------------------------------

Command: yarn application -list

Output:
Total number of applications (application-types: [] and states: [RUNNING]):1
Application-Id                  Application-Name  Application-Type  User  Queue    State     Final-State  Progress  Tracking-URL
application_1377314568682_0002  Sleep job         MAPREDUCE         xuan  default  RUNNING   UNDEFINED    5.01%     Xuan-Gongs-MacBook-Pro.local:49615
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749278#comment-13749278 ]

Xuan Gong commented on YARN-1074:
---------------------------------

Tested on a single-node cluster:
1. Complete a mapreduce job.
2. Run a sleep job.

Command: yarn application -list --appStates all

Output:
Total number of applications (application-types: [] and states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED]):2
Application-Id                  Application-Name  Application-Type  User  Queue    State     Final-State  Progress  Tracking-URL
application_1377314568682_0001  QuasiMonteCarlo   MAPREDUCE         xuan  default  FINISHED  SUCCEEDED    100%      192.168.0.16:19888/jobhistory/job/job_1377314568682_0001
application_1377314568682_0002  Sleep job         MAPREDUCE         xuan  default  RUNNING   UNDEFINED    5%        Xuan-Gongs-MacBook-Pro.local:49615
[jira] [Updated] (YARN-540) Race condition causing RM to potentially relaunch already unregistered AMs on RM restart
[ https://issues.apache.org/jira/browse/YARN-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jian He updated YARN-540:
-------------------------
    Attachment: YARN-540.2.patch

Uploaded a new patch and added test cases for the state machine transitions. Did a single-node RM restart test; will do more rigorous manual testing.

> Race condition causing RM to potentially relaunch already unregistered AMs on
> RM restart
> -----------------------------------------------------------------------------
>
>                 Key: YARN-540
>                 URL: https://issues.apache.org/jira/browse/YARN-540
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: resourcemanager
>            Reporter: Jian He
>            Assignee: Jian He
>            Priority: Blocker
>         Attachments: YARN-540.1.patch, YARN-540.2.patch, YARN-540.patch,
>                      YARN-540.patch
>
>
> When a job succeeds and successfully calls finishApplicationMaster, the RM
> shuts down and restarts, and the dispatcher is stopped before it can process
> the REMOVE_APP event. The next time the RM comes back, it reloads the
> existing state files even though the job succeeded.
[jira] [Commented] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749275#comment-13749275 ] Hudson commented on YARN-1085: -- SUCCESS: Integrated in Hadoop-trunk-Commit #4321 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/4321/]) YARN-1085. Modified YARN and MR2 web-apps to do HTTP authentication in secure setup with kerberos. Contributed by Omkar Vinit Joshi. (vinodkv: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1517101) * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryClientService.java * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java > Yarn and MRv2 should do HTTP client authentication in kerberos setup. 
>
> Key: YARN-1085
> URL: https://issues.apache.org/jira/browse/YARN-1085
> Project: Hadoop YARN
> Issue Type: Task
> Components: nodemanager, resourcemanager
> Reporter: Jaimin D Jetly
> Assignee: Omkar Vinit Joshi
> Priority: Blocker
> Labels: security
> Fix For: 2.1.1-beta
> Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch, YARN-1085.20130823.2.patch, YARN-1085.20130823.3.patch
>
> In kerberos setup it's expected for a http client to authenticate to kerberos before allowing user to browse any information.
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749273#comment-13749273 ] Vinod Kumar Vavilapalli commented on YARN-1074:
+1, looks good. Will check this in once Jenkins gives its blessings.
> Clean up YARN CLI app list to show only running apps.
>
> Key: YARN-1074
> URL: https://issues.apache.org/jira/browse/YARN-1074
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: client
> Reporter: Tassapol Athiapinya
> Assignee: Xuan Gong
> Attachments: YARN-1074.1.patch, YARN-1074.2.patch, YARN-1074.3.patch, YARN-1074.4.patch, YARN-1074.5.patch, YARN-1074.6.patch, YARN-1074.7.patch, YARN-1074.8.patch
>
> Once a user brings up the YARN daemon and runs jobs, those jobs will stay in the output returned by $ yarn application -list even after they have completed. We want the YARN command line to clean up this list. Specifically, we want to remove applications with FINISHED state (not Final-State) or KILLED state from the result.
> {code}
> [user1@host1 ~]$ yarn application -list
> Total Applications:150
> Application-Id  Application-Name  Application-Type  User  Queue  State  Final-State  Progress  Tracking-URL
> application_1374638600275_0109  Sleep job  MAPREDUCE  user1  default  KILLED  KILLED  100%  host1:54059
> application_1374638600275_0121  Sleep job  MAPREDUCE  user1  default  FINISHED  SUCCEEDED  100%  host1:19888/jobhistory/job/job_1374638600275_0121
> application_1374638600275_0020  Sleep job  MAPREDUCE  user1  default  FINISHED  SUCCEEDED  100%  host1:19888/jobhistory/job/job_1374638600275_0020
> application_1374638600275_0038  Sleep job  MAPREDUCE  user1  default
> {code}
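The filtering behavior requested in this issue (hide FINISHED and KILLED applications from the default listing, judging by State rather than Final-State) can be sketched in a few lines of Java. This is an illustrative, self-contained sketch, not the actual YarnClient API; the enum values are a subset chosen for the example and the record type is hypothetical:

```java
import java.util.*;
import java.util.stream.Collectors;

public class AppListFilterSketch {
    // Hypothetical subset of YARN application states for illustration.
    enum YarnApplicationState { SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED }

    // Minimal stand-in for an application report: id plus current State.
    record AppReport(String id, YarnApplicationState state) {}

    // States hidden from the default `yarn application -list` output.
    private static final EnumSet<YarnApplicationState> HIDDEN =
            EnumSet.of(YarnApplicationState.FINISHED, YarnApplicationState.KILLED);

    // Keep only applications whose State is not terminal-and-hidden.
    static List<AppReport> defaultListing(List<AppReport> all) {
        return all.stream()
                .filter(a -> !HIDDEN.contains(a.state()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<AppReport> all = List.of(
                new AppReport("application_1374638600275_0109", YarnApplicationState.KILLED),
                new AppReport("application_1374638600275_0121", YarnApplicationState.FINISHED),
                new AppReport("application_1374638600275_0038", YarnApplicationState.RUNNING));
        // Only the RUNNING application survives the default filter.
        for (AppReport a : defaultListing(all)) {
            System.out.println(a.id());
        }
    }
}
```

A separate flag (like the -appStates option discussed in the comments) would then opt back in to listing terminal applications.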
[jira] [Commented] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749268#comment-13749268 ] Hadoop QA commented on YARN-1085: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12599772/YARN-1085.20130823.3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1751//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1751//console This message is automatically generated. > Yarn and MRv2 should do HTTP client authentication in kerberos setup. 
>
> Key: YARN-1085
> URL: https://issues.apache.org/jira/browse/YARN-1085
> Project: Hadoop YARN
> Issue Type: Task
> Components: nodemanager, resourcemanager
> Reporter: Jaimin D Jetly
> Assignee: Omkar Vinit Joshi
> Priority: Blocker
> Labels: security
> Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch, YARN-1085.20130823.2.patch, YARN-1085.20130823.3.patch
>
> In kerberos setup it's expected for a http client to authenticate to kerberos before allowing user to browse any information.
[jira] [Updated] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-1074:
Attachment: YARN-1074.8.patch
Added a test case for the case-insensitive state matching.
> Clean up YARN CLI app list to show only running apps.
>
> Key: YARN-1074
> URL: https://issues.apache.org/jira/browse/YARN-1074
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: client
> Reporter: Tassapol Athiapinya
> Assignee: Xuan Gong
> Attachments: YARN-1074.1.patch, YARN-1074.2.patch, YARN-1074.3.patch, YARN-1074.4.patch, YARN-1074.5.patch, YARN-1074.6.patch, YARN-1074.7.patch, YARN-1074.8.patch
>
> Once a user brings up the YARN daemon and runs jobs, those jobs will stay in the output returned by $ yarn application -list even after they have completed. We want the YARN command line to clean up this list. Specifically, we want to remove applications with FINISHED state (not Final-State) or KILLED state from the result.
> {code}
> [user1@host1 ~]$ yarn application -list
> Total Applications:150
> Application-Id  Application-Name  Application-Type  User  Queue  State  Final-State  Progress  Tracking-URL
> application_1374638600275_0109  Sleep job  MAPREDUCE  user1  default  KILLED  KILLED  100%  host1:54059
> application_1374638600275_0121  Sleep job  MAPREDUCE  user1  default  FINISHED  SUCCEEDED  100%  host1:19888/jobhistory/job/job_1374638600275_0121
> application_1374638600275_0020  Sleep job  MAPREDUCE  user1  default  FINISHED  SUCCEEDED  100%  host1:19888/jobhistory/job/job_1374638600275_0020
> application_1374638600275_0038  Sleep job  MAPREDUCE  user1  default
> {code}
[jira] [Commented] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749235#comment-13749235 ] Vinod Kumar Vavilapalli commented on YARN-1085: --- +1. Will check this in once Jenkins blesses. > Yarn and MRv2 should do HTTP client authentication in kerberos setup. > - > > Key: YARN-1085 > URL: https://issues.apache.org/jira/browse/YARN-1085 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager, resourcemanager >Reporter: Jaimin D Jetly >Assignee: Omkar Vinit Joshi >Priority: Blocker > Labels: security > Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch, > YARN-1085.20130823.2.patch, YARN-1085.20130823.3.patch > > > In kerberos setup it's expected for a http client to authenticate to kerberos > before allowing user to browse any information. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-981) YARN/MR2/Job history /logs and /metrics link do not have correct content
[ https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749225#comment-13749225 ] Thomas Graves commented on YARN-981: ah sorry, thanks for clarifying. > YARN/MR2/Job history /logs and /metrics link do not have correct content > > > Key: YARN-981 > URL: https://issues.apache.org/jira/browse/YARN-981 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Jian He > Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749223#comment-13749223 ] Hudson commented on YARN-707: - SUCCESS: Integrated in Hadoop-trunk-Commit #4320 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/4320/]) Revert MAPREDUCE-5475 and YARN-707 (jlowe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1517097) * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/client/MRClientService.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestRMStateStore.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java > Add user info in the YARN ClientToken > - > > Key: YARN-707 > URL: https://issues.apache.org/jira/browse/YARN-707 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bikas Saha >Assignee: Vinod Kumar Vavilapalli > Fix For: 3.0.0, 2.1.1-beta > > Attachments: YARN-707-20130822.txt > > > If user info is present in the client token 
then it can be used to do limited authz in the AM.
[jira] [Updated] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Omkar Vinit Joshi updated YARN-1085: Attachment: YARN-1085.20130823.3.patch > Yarn and MRv2 should do HTTP client authentication in kerberos setup. > - > > Key: YARN-1085 > URL: https://issues.apache.org/jira/browse/YARN-1085 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager, resourcemanager >Reporter: Jaimin D Jetly >Assignee: Omkar Vinit Joshi >Priority: Blocker > Labels: security > Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch, > YARN-1085.20130823.2.patch, YARN-1085.20130823.3.patch > > > In kerberos setup it's expected for a http client to authenticate to kerberos > before allowing user to browse any information. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749216#comment-13749216 ] Omkar Vinit Joshi commented on YARN-1085: - Thanks vinod... fixing small typo.. > Yarn and MRv2 should do HTTP client authentication in kerberos setup. > - > > Key: YARN-1085 > URL: https://issues.apache.org/jira/browse/YARN-1085 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager, resourcemanager >Reporter: Jaimin D Jetly >Assignee: Omkar Vinit Joshi >Priority: Blocker > Labels: security > Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch, > YARN-1085.20130823.2.patch, YARN-1085.20130823.3.patch > > > In kerberos setup it's expected for a http client to authenticate to kerberos > before allowing user to browse any information. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749215#comment-13749215 ] Vinod Kumar Vavilapalli commented on YARN-707: -- My bad, I need another vacation where I can really take some rest. > Add user info in the YARN ClientToken > - > > Key: YARN-707 > URL: https://issues.apache.org/jira/browse/YARN-707 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bikas Saha >Assignee: Vinod Kumar Vavilapalli > Fix For: 3.0.0, 2.1.1-beta > > Attachments: YARN-707-20130822.txt > > > If user info is present in the client token then it can be used to do limited > authz in the AM. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe reopened YARN-707:
Reopening this, as the user in the client token is always the app submitter. That means the AM always sees the user as the app submitter when clients authenticate, and that prevents proper ACL checking re: MAPREDUCE-5475.
> Add user info in the YARN ClientToken
>
> Key: YARN-707
> URL: https://issues.apache.org/jira/browse/YARN-707
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: Bikas Saha
> Assignee: Vinod Kumar Vavilapalli
> Fix For: 3.0.0, 2.1.1-beta
> Attachments: YARN-707-20130822.txt
>
> If user info is present in the client token then it can be used to do limited authz in the AM.
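The reason this reopen blocks proper ACL checking can be shown with a tiny Java sketch (illustrative only; the method and user names are hypothetical, not YARN code): if the authenticated user reported by the client token is always the app submitter, the submitter-or-ACL check degenerates so that every authenticated client passes.

```java
import java.util.Set;

public class ClientTokenAclDemo {
    // Simplified view-ACL check: the remote user must be the submitter
    // or be explicitly listed in the view ACL.
    static boolean checkAccess(String remoteUser, String submitter, Set<String> viewAcl) {
        return remoteUser.equals(submitter) || viewAcl.contains(remoteUser);
    }

    public static void main(String[] args) {
        Set<String> viewAcl = Set.of("alice");
        // Correct behavior: the token carries the real client user,
        // so an unauthorized client is rejected.
        System.out.println(checkAccess("mallory", "bob", viewAcl)); // false
        // Broken behavior described above: the token user is always the
        // submitter ("bob"), so every authenticated client is accepted.
        System.out.println(checkAccess("bob", "bob", viewAcl)); // true
    }
}
```

Carrying the actual client user in the token identifier is what restores the distinction the ACL check depends on.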
[jira] [Commented] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749207#comment-13749207 ] Hadoop QA commented on YARN-1085: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12599748/YARN-1085.20130823.2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1749//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1749//console This message is automatically generated. > Yarn and MRv2 should do HTTP client authentication in kerberos setup. 
>
> Key: YARN-1085
> URL: https://issues.apache.org/jira/browse/YARN-1085
> Project: Hadoop YARN
> Issue Type: Task
> Components: nodemanager, resourcemanager
> Reporter: Jaimin D Jetly
> Assignee: Omkar Vinit Joshi
> Priority: Blocker
> Labels: security
> Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch, YARN-1085.20130823.2.patch
>
> In kerberos setup it's expected for a http client to authenticate to kerberos before allowing user to browse any information.
[jira] [Moved] (YARN-1094) RM restart throws Null pointer Exception in Secure Env
[ https://issues.apache.org/jira/browse/YARN-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli moved MAPREDUCE-5479 to YARN-1094: -- Target Version/s: 2.1.1-beta (was: 2.1.1-beta) Key: YARN-1094 (was: MAPREDUCE-5479) Project: Hadoop YARN (was: Hadoop Map/Reduce) > RM restart throws Null pointer Exception in Secure Env > -- > > Key: YARN-1094 > URL: https://issues.apache.org/jira/browse/YARN-1094 > Project: Hadoop YARN > Issue Type: Bug > Environment: secure env >Reporter: yeshavora >Assignee: Vinod Kumar Vavilapalli >Priority: Blocker > > Enable rmrestart feature And restart Resorce Manager while a job is running. > Resorce Manager fails to start with below error > 2013-08-23 17:57:40,705 INFO resourcemanager.RMAppManager > (RMAppManager.java:recover(370)) - Recovering application > application_1377280618693_0001 > 2013-08-23 17:57:40,763 ERROR resourcemanager.ResourceManager > (ResourceManager.java:serviceStart(617)) - Failed to load/recover state > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.setTimerForTokenRenewal(DelegationTokenRenewer.java:371) > at > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.addApplication(DelegationTokenRenewer.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:291) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:371) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:819) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:613) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:832) > 2013-08-23 17:57:40,766 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - > 
Exiting with status 1
[jira] [Updated] (YARN-540) Race condition causing RM to potentially relaunch already unregistered AMs on RM restart
[ https://issues.apache.org/jira/browse/YARN-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-540: - Attachment: YARN-540.1.patch upload a patch without tests, will add tests later on. > Race condition causing RM to potentially relaunch already unregistered AMs on > RM restart > > > Key: YARN-540 > URL: https://issues.apache.org/jira/browse/YARN-540 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Jian He >Assignee: Jian He >Priority: Blocker > Attachments: YARN-540.1.patch, YARN-540.patch, YARN-540.patch > > > When job succeeds and successfully call finishApplicationMaster, RM shutdown > and restart-dispatcher is stopped before it can process REMOVE_APP event. The > next time RM comes back, it will reload the existing state files even though > the job is succeeded -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749191#comment-13749191 ] Omkar Vinit Joshi commented on YARN-1085:
Thanks Vinod.. fixed all the comments.
Testing:
* On a secured cluster, tried accessing the web UI before doing kinit (getting a TGT). It failed with a 401 error.
* Did kinit and then tried accessing "http://:/conf" .. it works. Checked for RM/NM/history server. You should see "Logged in as :" on the RM UI.
> Yarn and MRv2 should do HTTP client authentication in kerberos setup.
>
> Key: YARN-1085
> URL: https://issues.apache.org/jira/browse/YARN-1085
> Project: Hadoop YARN
> Issue Type: Task
> Components: nodemanager, resourcemanager
> Reporter: Jaimin D Jetly
> Assignee: Omkar Vinit Joshi
> Priority: Blocker
> Labels: security
> Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch, YARN-1085.20130823.2.patch
>
> In kerberos setup it's expected for a http client to authenticate to kerberos before allowing user to browse any information.
[jira] [Updated] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Omkar Vinit Joshi updated YARN-1085: Attachment: YARN-1085.20130823.2.patch > Yarn and MRv2 should do HTTP client authentication in kerberos setup. > - > > Key: YARN-1085 > URL: https://issues.apache.org/jira/browse/YARN-1085 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager, resourcemanager >Reporter: Jaimin D Jetly >Assignee: Omkar Vinit Joshi >Priority: Blocker > Labels: security > Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch, > YARN-1085.20130823.2.patch > > > In kerberos setup it's expected for a http client to authenticate to kerberos > before allowing user to browse any information. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-905) Add state filters to nodes CLI
[ https://issues.apache.org/jira/browse/YARN-905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749172#comment-13749172 ] Vinod Kumar Vavilapalli commented on YARN-905: -- Sorry, I didn't refresh the page to get that last commit-related JIRA update which happened 5 mins before my comment +1 for addedums here. > Add state filters to nodes CLI > -- > > Key: YARN-905 > URL: https://issues.apache.org/jira/browse/YARN-905 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.0.4-alpha >Reporter: Sandy Ryza >Assignee: Wei Yan > Attachments: Yarn-905.patch, YARN-905.patch, YARN-905.patch > > > It would be helpful for the nodes CLI to have a node-states option that > allows it to return nodes that are not just in the RUNNING state. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-981) YARN/MR2/Job history /logs and /metrics link do not have correct content
[ https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-981:
- Attachment: YARN-981.2.patch
Sorry for the confusion; uploaded the same patch renamed as YARN-981.2.patch.
> YARN/MR2/Job history /logs and /metrics link do not have correct content
>
> Key: YARN-981
> URL: https://issues.apache.org/jira/browse/YARN-981
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Xuan Gong
> Assignee: Jian He
> Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch
[jira] [Updated] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Omkar Vinit Joshi updated YARN-1085: Attachment: YARN-1085.20130823.1.patch > Yarn and MRv2 should do HTTP client authentication in kerberos setup. > - > > Key: YARN-1085 > URL: https://issues.apache.org/jira/browse/YARN-1085 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager, resourcemanager >Reporter: Jaimin D Jetly >Assignee: Omkar Vinit Joshi >Priority: Blocker > Labels: security > Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch > > > In kerberos setup it's expected for a http client to authenticate to kerberos > before allowing user to browse any information. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-1085) Yarn and MRv2 should do HTTP client authentication in kerberos setup.
[ https://issues.apache.org/jira/browse/YARN-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749146#comment-13749146 ] Omkar Vinit Joshi commented on YARN-1085: - Fixed all the comments. Separated out configurations for all. Adding new patch. > Yarn and MRv2 should do HTTP client authentication in kerberos setup. > - > > Key: YARN-1085 > URL: https://issues.apache.org/jira/browse/YARN-1085 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager, resourcemanager >Reporter: Jaimin D Jetly >Assignee: Omkar Vinit Joshi >Priority: Blocker > Labels: security > Attachments: YARN-1085.20130820.1.patch, YARN-1085.20130823.1.patch > > > In kerberos setup it's expected for a http client to authenticate to kerberos > before allowing user to browse any information. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-905) Add state filters to nodes CLI
[ https://issues.apache.org/jira/browse/YARN-905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749142#comment-13749142 ] Sandy Ryza commented on YARN-905: - I just committed this before Vinod's comment. I don't think the current version is harmful in such a way that it needs to be reverted. I would prefer to make these extra changes in a separate JIRA, but would also be happy to review/commit an addendum here. > Add state filters to nodes CLI > -- > > Key: YARN-905 > URL: https://issues.apache.org/jira/browse/YARN-905 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.0.4-alpha >Reporter: Sandy Ryza >Assignee: Wei Yan > Attachments: Yarn-905.patch, YARN-905.patch, YARN-905.patch > > > It would be helpful for the nodes CLI to have a node-states option that > allows it to return nodes that are not just in the RUNNING state. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-905) Add state filters to nodes CLI
[ https://issues.apache.org/jira/browse/YARN-905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749130#comment-13749130 ] Wei Yan commented on YARN-905: -- Sure, will update a patch later to fix the all/ALL and the user input validation. > Add state filters to nodes CLI > -- > > Key: YARN-905 > URL: https://issues.apache.org/jira/browse/YARN-905 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.0.4-alpha >Reporter: Sandy Ryza >Assignee: Wei Yan > Attachments: Yarn-905.patch, YARN-905.patch, YARN-905.patch > > > It would be helpful for the nodes CLI to have a node-states option that > allows it to return nodes that are not just in the RUNNING state. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-905) Add state filters to nodes CLI
[ https://issues.apache.org/jira/browse/YARN-905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749126#comment-13749126 ] Vinod Kumar Vavilapalli commented on YARN-905: -- Looks like it is already there mostly - we just need to handle both all and ALL. Also, as stated [here|https://issues.apache.org/jira/browse/YARN-1074?focusedCommentId=13748180&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13748180], we should do the validation of user input, exit with a non-zero code and print all valid states when the user gives an invalid state. > Add state filters to nodes CLI > -- > > Key: YARN-905 > URL: https://issues.apache.org/jira/browse/YARN-905 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.0.4-alpha >Reporter: Sandy Ryza >Assignee: Wei Yan > Attachments: Yarn-905.patch, YARN-905.patch, YARN-905.patch > > > It would be helpful for the nodes CLI to have a node-states option that > allows it to return nodes that are not just in the RUNNING state. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
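The validation suggested in the comment above could look roughly like this self-contained sketch (the enum and method names are illustrative assumptions, not the actual YARN `NodeState`/CLI API): parse the argument case-insensitively, treat `all`/`ALL` as "no filter", and on invalid input print the valid states and exit with a non-zero code.

```java
import java.util.Arrays;

// Hypothetical sketch of case-insensitive node-state parsing with validation.
public class StateFilterSketch {
    // Illustrative stand-in for YARN's node-state enum.
    enum NodeState { NEW, RUNNING, UNHEALTHY, DECOMMISSIONED, LOST, REBOOTED }

    // Returns the parsed state, or null for the special "all" filter.
    static NodeState parseState(String arg) {
        if (arg.equalsIgnoreCase("all")) {
            return null; // "all"/"ALL" means no filtering
        }
        try {
            // valueOf is case-sensitive, so normalize to upper case first.
            return NodeState.valueOf(arg.toUpperCase());
        } catch (IllegalArgumentException e) {
            // Invalid input: print all valid states and exit non-zero.
            System.err.println("Invalid node state: " + arg
                + ". Valid states: " + Arrays.toString(NodeState.values()));
            System.exit(1);
            return null; // unreachable
        }
    }

    public static void main(String[] args) {
        System.out.println(parseState("running")); // RUNNING
        System.out.println(parseState("Running")); // RUNNING
        System.out.println(parseState("ALL"));     // null
    }
}
```

The same normalize-then-`valueOf` pattern would cover the appStates/appTypes filters discussed in YARN-1074.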
[jira] [Commented] (YARN-905) Add state filters to nodes CLI
[ https://issues.apache.org/jira/browse/YARN-905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749123#comment-13749123 ] Vinod Kumar Vavilapalli commented on YARN-905: -- I've been looking at YARN-1074, where I suggested doing case-insensitive checks for the states. Can we do that here too? > Add state filters to nodes CLI > -- > > Key: YARN-905 > URL: https://issues.apache.org/jira/browse/YARN-905 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.0.4-alpha >Reporter: Sandy Ryza >Assignee: Wei Yan > Attachments: Yarn-905.patch, YARN-905.patch, YARN-905.patch > > > It would be helpful for the nodes CLI to have a node-states option that > allows it to return nodes that are not just in the RUNNING state. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-905) Add state filters to nodes CLI
[ https://issues.apache.org/jira/browse/YARN-905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749116#comment-13749116 ] Hudson commented on YARN-905: - SUCCESS: Integrated in Hadoop-trunk-Commit #4319 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/4319/]) YARN-905. Add state filters to nodes CLI (Wei Yan via Sandy Ryza) (sandy: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1517083) * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java > Add state filters to nodes CLI > -- > > Key: YARN-905 > URL: https://issues.apache.org/jira/browse/YARN-905 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.0.4-alpha >Reporter: Sandy Ryza >Assignee: Wei Yan > Attachments: Yarn-905.patch, YARN-905.patch, YARN-905.patch > > > It would be helpful for the nodes CLI to have a node-states option that > allows it to return nodes that are not just in the RUNNING state. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-981) YARN/MR2/Job history /logs and /metrics link do not have correct content
[ https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749111#comment-13749111 ] Vinod Kumar Vavilapalli commented on YARN-981: -- bq. I haven't looked at this in detail but you can't remove the /logs from the HttpServer.java as that is used by hdfs, unless that was changed in another jira? Thomas, I think you made the same mistake as I. The first one was dead, you have to look at YARN-981.patch, which by the looks of it is still a WIP. > YARN/MR2/Job history /logs and /metrics link do not have correct content > > > Key: YARN-981 > URL: https://issues.apache.org/jira/browse/YARN-981 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Jian He > Attachments: YARN-981.1.patch, YARN-981.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-1008) MiniYARNCluster with multiple nodemanagers, all nodes have same key for allocations
[ https://issues.apache.org/jira/browse/YARN-1008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Yan updated YARN-1008: -- Assignee: Alejandro Abdelnur (was: Wei Yan) > MiniYARNCluster with multiple nodemanagers, all nodes have same key for > allocations > --- > > Key: YARN-1008 > URL: https://issues.apache.org/jira/browse/YARN-1008 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.1.0-beta >Reporter: Alejandro Abdelnur >Assignee: Alejandro Abdelnur > Attachments: YARN-1008.patch, YARN-1008.patch, YARN-1008.patch, > YARN-1008.patch, YARN-1008.patch > > > While the NMs are keyed using the NodeId, the allocation is done based on the > hostname. > This makes the different nodes indistinguishable to the scheduler. > There should be an option to enable the host:port instead of just the host for > allocations. The nodes reported to the AM should report the 'key' (host or > host:port). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (YARN-1008) MiniYARNCluster with multiple nodemanagers, all nodes have same key for allocations
[ https://issues.apache.org/jira/browse/YARN-1008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Yan reassigned YARN-1008: - Assignee: Wei Yan (was: Alejandro Abdelnur) > MiniYARNCluster with multiple nodemanagers, all nodes have same key for > allocations > --- > > Key: YARN-1008 > URL: https://issues.apache.org/jira/browse/YARN-1008 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.1.0-beta >Reporter: Alejandro Abdelnur >Assignee: Wei Yan > Attachments: YARN-1008.patch, YARN-1008.patch, YARN-1008.patch, > YARN-1008.patch, YARN-1008.patch > > > While the NMs are keyed using the NodeId, the allocation is done based on the > hostname. > This makes the different nodes indistinguishable to the scheduler. > There should be an option to enable the host:port instead of just the host for > allocations. The nodes reported to the AM should report the 'key' (host or > host:port). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749086#comment-13749086 ] Hudson commented on YARN-707: - SUCCESS: Integrated in Hadoop-trunk-Commit #4318 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/4318/]) YARN-707. Add user info in the YARN ClientToken. Contributed by Vinod Kumar Vavilapalli (jlowe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1517073) * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestRMStateStore.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java > Add user info in the YARN ClientToken > - > > Key: YARN-707 > URL: https://issues.apache.org/jira/browse/YARN-707 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bikas Saha >Assignee: Vinod Kumar Vavilapalli > Fix For: 3.0.0, 2.1.1-beta > > Attachments: YARN-707-20130822.txt > > > If user info is present in the client token then it can be used to do limited > authz in the AM. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-981) YARN/MR2/Job history /logs and /metrics link do not have correct content
[ https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749065#comment-13749065 ] Thomas Graves commented on YARN-981: I haven't looked at this in detail but you can't remove the /logs from the HttpServer.java as that is used by hdfs, unless that was changed in another jira? Also you need to make sure the logs can only be accessed by admins - the HttpServer code used to go through the AdminAuthorizedServlet. Some of the default servlets do work, like /jmx and /logLevel; I would think the /metrics one could too. Perhaps if we fix whatever is broken for both hdfs and yarn it will just work? /logs is a bit different since it's just a resource handler. I'm not totally against changing it from the previous behavior of showing everything in the directory, as this is probably more secure, but I don't see any discussion about that here. It appears we are only showing the .log file; do we want to show the .out file also? > YARN/MR2/Job history /logs and /metrics link do not have correct content > > > Key: YARN-981 > URL: https://issues.apache.org/jira/browse/YARN-981 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Jian He > Attachments: YARN-981.1.patch, YARN-981.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749037#comment-13749037 ] Jason Lowe commented on YARN-707: - Yep, talked with Daryn offline and he's OK with it. We can address any necessary followups later if there are any. I'll commit it now. > Add user info in the YARN ClientToken > - > > Key: YARN-707 > URL: https://issues.apache.org/jira/browse/YARN-707 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bikas Saha >Assignee: Vinod Kumar Vavilapalli > Attachments: YARN-707-20130822.txt > > > If user info is present in the client token then it can be used to do limited > authz in the AM. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749027#comment-13749027 ] Vinod Kumar Vavilapalli commented on YARN-707: -- I did change that, the owner of the token IS the user now in the patch and App attempt is an additional field. Re real/effective user, for these tokens, essentially they are the same, and correspond to the app-submitter. We can make other enhancements according to your suggestions if you have any in a follow up. Jason, if you are okay with the patch, can you please commit it? Thanks. > Add user info in the YARN ClientToken > - > > Key: YARN-707 > URL: https://issues.apache.org/jira/browse/YARN-707 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bikas Saha >Assignee: Vinod Kumar Vavilapalli > Attachments: YARN-707-20130822.txt > > > If user info is present in the client token then it can be used to do limited > authz in the AM. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749026#comment-13749026 ] Hadoop QA commented on YARN-707: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12599565/YARN-707-20130822.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1748//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1748//console This message is automatically generated. > Add user info in the YARN ClientToken > - > > Key: YARN-707 > URL: https://issues.apache.org/jira/browse/YARN-707 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bikas Saha >Assignee: Vinod Kumar Vavilapalli > Attachments: YARN-707-20130822.txt > > > If user info is present in the client token then it can be used to do limited > authz in the AM. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748999#comment-13748999 ] Daryn Sharp commented on YARN-707: -- It almost seems like it would be better to invert the approach to be more consistent with other tokens - the owner of the token is the user (not the app attempt) and there's a new field for the app attempt (instead of a new field for the user). Another thought would be to leverage the existing real/effective user in the token. One is the submitter, the other is the app attempt. Logging that includes the UGI will show "appAttempt (auth:...) via daryn (auth:...)", or vice-versa for the users. Thoughts? > Add user info in the YARN ClientToken > - > > Key: YARN-707 > URL: https://issues.apache.org/jira/browse/YARN-707 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bikas Saha >Assignee: Vinod Kumar Vavilapalli > Attachments: YARN-707-20130822.txt > > > If user info is present in the client token then it can be used to do limited > authz in the AM. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-942) In Fair Scheduler documentation, inconsistency on which properties have prefix
[ https://issues.apache.org/jira/browse/YARN-942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748988#comment-13748988 ] Sandy Ryza commented on YARN-942: - Thanks [~ajisakaa]. +1 pending jenkins. > In Fair Scheduler documentation, inconsistency on which properties have prefix > -- > > Key: YARN-942 > URL: https://issues.apache.org/jira/browse/YARN-942 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler >Affects Versions: 2.1.0-beta >Reporter: Sandy Ryza > Labels: docuentation, newbie > Attachments: YARN-942.patch > > > locality.threshold.node and locality.threshold.rack should have the > yarn.scheduler.fair prefix like the items before them > http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (YARN-1076) RM gets stuck with a reservation, ignoring new containers
[ https://issues.apache.org/jira/browse/YARN-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Omkar Vinit Joshi reassigned YARN-1076: --- Assignee: Omkar Vinit Joshi > RM gets stuck with a reservation, ignoring new containers > - > > Key: YARN-1076 > URL: https://issues.apache.org/jira/browse/YARN-1076 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Maysam Yabandeh >Assignee: Omkar Vinit Joshi >Priority: Minor > Attachments: YARN-1076.patch > > > LeafQueue#assignToQueue rejects newly available containers if > potentialNewCapacity > absoluteMaxCapacity: > {code:java} > private synchronized boolean assignToQueue(Resource clusterResource, > Resource required) { > // Check how much of the cluster's absolute capacity we are currently using... > float potentialNewCapacity = > Resources.divide( > resourceCalculator, clusterResource, > Resources.add(usedResources, required), > clusterResource); > if (potentialNewCapacity > absoluteMaxCapacity) { > //... > return false; > } > return true; > } > {code} > The usedResources, which is used to compute potentialNewCapacity, is > composed of both actual and reserved containers. So, a prior reservation > could cause the RM to reject newly available containers, despite the starvation > report. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (YARN-1076) RM gets stuck with a reservation, ignoring new containers
[ https://issues.apache.org/jira/browse/YARN-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Omkar Vinit Joshi resolved YARN-1076. - Resolution: Duplicate > RM gets stuck with a reservation, ignoring new containers > - > > Key: YARN-1076 > URL: https://issues.apache.org/jira/browse/YARN-1076 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Maysam Yabandeh >Assignee: Omkar Vinit Joshi >Priority: Minor > Attachments: YARN-1076.patch > > > LeafQueue#assignToQueue rejects newly available containers if > potentialNewCapacity > absoluteMaxCapacity: > {code:java} > private synchronized boolean assignToQueue(Resource clusterResource, > Resource required) { > // Check how much of the cluster's absolute capacity we are currently using... > float potentialNewCapacity = > Resources.divide( > resourceCalculator, clusterResource, > Resources.add(usedResources, required), > clusterResource); > if (potentialNewCapacity > absoluteMaxCapacity) { > //... > return false; > } > return true; > } > {code} > The usedResources, which is used to compute potentialNewCapacity, is > composed of both actual and reserved containers. So, a prior reservation > could cause the RM to reject newly available containers, despite the starvation > report. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-1076) RM gets stuck with a reservation, ignoring new containers
[ https://issues.apache.org/jira/browse/YARN-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748978#comment-13748978 ] Omkar Vinit Joshi commented on YARN-1076: - Thanks [~maysamyabandeh]. Will close it as a duplicate. > RM gets stuck with a reservation, ignoring new containers > - > > Key: YARN-1076 > URL: https://issues.apache.org/jira/browse/YARN-1076 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Maysam Yabandeh >Priority: Minor > Attachments: YARN-1076.patch > > > LeafQueue#assignToQueue rejects newly available containers if > potentialNewCapacity > absoluteMaxCapacity: > {code:java} > private synchronized boolean assignToQueue(Resource clusterResource, > Resource required) { > // Check how much of the cluster's absolute capacity we are currently using... > float potentialNewCapacity = > Resources.divide( > resourceCalculator, clusterResource, > Resources.add(usedResources, required), > clusterResource); > if (potentialNewCapacity > absoluteMaxCapacity) { > //... > return false; > } > return true; > } > {code} > The usedResources, which is used to compute potentialNewCapacity, is > composed of both actual and reserved containers. So, a prior reservation > could cause the RM to reject newly available containers, despite the starvation > report. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
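The capacity check quoted in the YARN-1076 description can be illustrated with a small self-contained sketch (a hypothetical simplification over a single memory dimension with plain floats; the real LeafQueue works with Resources and a ResourceCalculator). It shows how counting a reservation inside usedResources makes an otherwise-acceptable request fail the absoluteMaxCapacity check:

```java
// Simplified, memory-only model of LeafQueue#assignToQueue's capacity check.
public class CapacityCheckSketch {
    // Returns true if assigning 'requiredMb' would keep the queue within
    // its absolute max capacity (fraction of the cluster it may use).
    static boolean assignToQueue(long clusterMb, long usedMb, long requiredMb,
                                 float absoluteMaxCapacity) {
        float potentialNewCapacity = (float) (usedMb + requiredMb) / clusterMb;
        return potentialNewCapacity <= absoluteMaxCapacity;
    }

    public static void main(String[] args) {
        long cluster = 100_000; // 100 GB cluster
        float maxCap = 0.5f;    // queue capped at 50% of the cluster

        // 40 GB actually running: an 8 GB container still fits (0.48 <= 0.5).
        System.out.println(assignToQueue(cluster, 40_000, 8_000, maxCap));

        // Add a 10 GB *reservation* into usedResources: the same 8 GB
        // request is now rejected (0.58 > 0.5) even though only 40 GB
        // of containers are actually running.
        System.out.println(assignToQueue(cluster, 40_000 + 10_000, 8_000, maxCap));
    }
}
```

This is the effect described in the issue: because reserved containers inflate usedResources, a dangling reservation can keep the queue pinned above its maximum and starve newly available nodes.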
[jira] [Commented] (YARN-942) In Fair Scheduler documentation, inconsistency on which properties have prefix
[ https://issues.apache.org/jira/browse/YARN-942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748966#comment-13748966 ] Akira AJISAKA commented on YARN-942: I attached a patch to add the yarn.scheduler.fair prefix. > In Fair Scheduler documentation, inconsistency on which properties have prefix > -- > > Key: YARN-942 > URL: https://issues.apache.org/jira/browse/YARN-942 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler >Affects Versions: 2.1.0-beta >Reporter: Sandy Ryza > Labels: newbie > Attachments: YARN-942.patch > > > locality.threshold.node and locality.threshold.rack should have the > yarn.scheduler.fair prefix like the items before them > http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-942) In Fair Scheduler documentation, inconsistency on which properties have prefix
[ https://issues.apache.org/jira/browse/YARN-942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated YARN-942: --- Attachment: YARN-942.patch > In Fair Scheduler documentation, inconsistency on which properties have prefix > -- > > Key: YARN-942 > URL: https://issues.apache.org/jira/browse/YARN-942 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler >Affects Versions: 2.1.0-beta >Reporter: Sandy Ryza > Labels: newbie > Attachments: YARN-942.patch > > > locality.threshold.node and locality.threshold.rack should have the > yarn.scheduler.fair prefix like the items before them > http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748937#comment-13748937 ] Xuan Gong commented on YARN-1074: - Uploaded a new patch to make the appStates and appTypes filters case-insensitive. > Clean up YARN CLI app list to show only running apps. > - > > Key: YARN-1074 > URL: https://issues.apache.org/jira/browse/YARN-1074 > Project: Hadoop YARN > Issue Type: Improvement > Components: client >Reporter: Tassapol Athiapinya >Assignee: Xuan Gong > Attachments: YARN-1074.1.patch, YARN-1074.2.patch, YARN-1074.3.patch, > YARN-1074.4.patch, YARN-1074.5.patch, YARN-1074.6.patch, YARN-1074.7.patch > > > Once a user brings up the YARN daemon and runs jobs, the jobs will stay in the > output returned by $ yarn application -list even after they have completed. We > want the YARN command line to clean up this list. Specifically, we want to remove > applications with FINISHED state (not Final-State) or KILLED state from the > result. > {code} > [user1@host1 ~]$ yarn application -list > Total Applications:150 > Application-Id  Application-Name  Application-Type  User  Queue  State  Final-State  Progress  Tracking-URL > application_1374638600275_0109  Sleep job  MAPREDUCE  user1  default  KILLED  KILLED  100%  host1:54059 > application_1374638600275_0121  Sleep job  MAPREDUCE  user1  default  FINISHED  SUCCEEDED  100%  host1:19888/jobhistory/job/job_1374638600275_0121 > application_1374638600275_0020  Sleep job  MAPREDUCE  user1  default  FINISHED  SUCCEEDED  100%  host1:19888/jobhistory/job/job_1374638600275_0020 > application_1374638600275_0038  Sleep job  MAPREDUCE  user1  default > > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-1074) Clean up YARN CLI app list to show only running apps.
[ https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-1074: Attachment: YARN-1074.7.patch Made the filters case-insensitive. > Clean up YARN CLI app list to show only running apps. > - > > Key: YARN-1074 > URL: https://issues.apache.org/jira/browse/YARN-1074 > Project: Hadoop YARN > Issue Type: Improvement > Components: client >Reporter: Tassapol Athiapinya >Assignee: Xuan Gong > Attachments: YARN-1074.1.patch, YARN-1074.2.patch, YARN-1074.3.patch, > YARN-1074.4.patch, YARN-1074.5.patch, YARN-1074.6.patch, YARN-1074.7.patch > > > Once a user brings up the YARN daemon and runs jobs, the jobs will stay in the > output returned by $ yarn application -list even after they have completed. We > want the YARN command line to clean up this list. Specifically, we want to remove > applications with FINISHED state (not Final-State) or KILLED state from the > result. > {code} > [user1@host1 ~]$ yarn application -list > Total Applications:150 > Application-Id  Application-Name  Application-Type  User  Queue  State  Final-State  Progress  Tracking-URL > application_1374638600275_0109  Sleep job  MAPREDUCE  user1  default  KILLED  KILLED  100%  host1:54059 > application_1374638600275_0121  Sleep job  MAPREDUCE  user1  default  FINISHED  SUCCEEDED  100%  host1:19888/jobhistory/job/job_1374638600275_0121 > application_1374638600275_0020  Sleep job  MAPREDUCE  user1  default  FINISHED  SUCCEEDED  100%  host1:19888/jobhistory/job/job_1374638600275_0020 > application_1374638600275_0038  Sleep job  MAPREDUCE  user1  default > > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-1093) corrections to fair scheduler documentation
[ https://issues.apache.org/jira/browse/YARN-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748879#comment-13748879 ] Sandy Ryza commented on YARN-1093: -- Thanks Wing Yew! +1 pending Jenkins. > corrections to fair scheduler documentation > --- > > Key: YARN-1093 > URL: https://issues.apache.org/jira/browse/YARN-1093 > Project: Hadoop YARN > Issue Type: Bug > Components: documentation >Affects Versions: 2.0.4-alpha >Reporter: Wing Yew Poon > Attachments: YARN-1093.patch > > > The fair scheduler is still evolving, but the current documentation contains > some inaccuracies. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-1093) corrections to fair scheduler documentation
[ https://issues.apache.org/jira/browse/YARN-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza updated YARN-1093: - Fix Version/s: (was: 2.1.0-beta) > corrections to fair scheduler documentation > --- > > Key: YARN-1093 > URL: https://issues.apache.org/jira/browse/YARN-1093 > Project: Hadoop YARN > Issue Type: Bug > Components: documentation >Affects Versions: 2.0.4-alpha >Reporter: Wing Yew Poon > Attachments: YARN-1093.patch > > > The fair scheduler is still evolving, but the current documentation contains > some inaccuracies. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-981) YARN/MR2/Job history /logs and /metrics link do not have correct content
[ https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-981: - Attachment: YARN-981.patch upload a patch to fix the local logs link > YARN/MR2/Job history /logs and /metrics link do not have correct content > > > Key: YARN-981 > URL: https://issues.apache.org/jira/browse/YARN-981 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Jian He > Attachments: YARN-981.1.patch, YARN-981.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-981) YARN/MR2/Job history /logs and /metrics link do not have correct content
[ https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748860#comment-13748860 ] Jian He commented on YARN-981: -- Regarding the LocalLogs link, the problem is in webapp.Dispatcher: if the request URI comes in as '/static/', '/logs/' or '/cluster', the pathInfo is returned as '/' in all cases. In the case of '/', the RM webapp router just routes the request to the main page. Made the change to apply the GuiceFilter only to '/cluster/*', '/node/*', '/jobhistory/*' and '/'. The metrics link has not been fixed; it is also not working on the HDFS side. > YARN/MR2/Job history /logs and /metrics link do not have correct content > > > Key: YARN-981 > URL: https://issues.apache.org/jira/browse/YARN-981 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Jian He > Attachments: YARN-981.1.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-1093) corrections to fair scheduler documentation
[ https://issues.apache.org/jira/browse/YARN-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wing Yew Poon updated YARN-1093:
--------------------------------
Attachment: YARN-1093.patch

The corrections and clarifications I have in mind.

> corrections to fair scheduler documentation
> -------------------------------------------
>
> Key: YARN-1093
> URL: https://issues.apache.org/jira/browse/YARN-1093
> Project: Hadoop YARN
> Issue Type: Bug
> Components: documentation
> Affects Versions: 2.0.4-alpha
> Reporter: Wing Yew Poon
> Fix For: 2.1.0-beta
>
> Attachments: YARN-1093.patch
>
> The fair scheduler is still evolving, but the current documentation contains
> some inaccuracies.
[jira] [Assigned] (YARN-981) YARN/MR2/Job history /logs and /metrics link do not have correct content
[ https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jian He reassigned YARN-981:
----------------------------
Assignee: Jian He (was: Xuan Gong)

> YARN/MR2/Job history /logs and /metrics link do not have correct content
> -------------------------------------------------------------------------
>
> Key: YARN-981
> URL: https://issues.apache.org/jira/browse/YARN-981
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Xuan Gong
> Assignee: Jian He
> Attachments: YARN-981.1.patch
[jira] [Created] (YARN-1093) corrections to fair scheduler documentation
Wing Yew Poon created YARN-1093:
-----------------------------------

Summary: corrections to fair scheduler documentation
Key: YARN-1093
URL: https://issues.apache.org/jira/browse/YARN-1093
Project: Hadoop YARN
Issue Type: Bug
Components: documentation
Affects Versions: 2.0.4-alpha
Reporter: Wing Yew Poon
Fix For: 2.1.0-beta

The fair scheduler is still evolving, but the current documentation contains some inaccuracies.
[jira] [Commented] (YARN-1081) Minor improvement to output header for $ yarn node -list
[ https://issues.apache.org/jira/browse/YARN-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748803#comment-13748803 ]

Akira AJISAKA commented on YARN-1081:
-------------------------------------

I attached a patch for the proposed output.

> Minor improvement to output header for $ yarn node -list
> ---------------------------------------------------------
>
> Key: YARN-1081
> URL: https://issues.apache.org/jira/browse/YARN-1081
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: client
> Reporter: Tassapol Athiapinya
> Priority: Minor
> Labels: newbie
> Fix For: 2.1.0-beta
>
> Attachments: YARN-1081.patch
>
> The output of $ yarn node -list shows the number of running containers at each
> node. I found a case where a new user of YARN thought this was a container ID,
> used it later in other YARN commands, and got an error due to the
> misunderstanding.
> {code:title=current output}
> 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
> 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
> 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id Node-State Node-Http-Address Running-Containers
> 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING myhost:50060 2
> {code}
> {code:title=proposed output}
> 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
> 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
> 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id Node-State Node-Http-Address Number-of-Running-Containers
> 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING myhost:50060 2
> {code}
[jira] [Updated] (YARN-1081) Minor improvement to output header for $ yarn node -list
[ https://issues.apache.org/jira/browse/YARN-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira AJISAKA updated YARN-1081:
--------------------------------
Attachment: YARN-1081.patch

> Minor improvement to output header for $ yarn node -list
> ---------------------------------------------------------
>
> Key: YARN-1081
> URL: https://issues.apache.org/jira/browse/YARN-1081
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: client
> Reporter: Tassapol Athiapinya
> Priority: Minor
> Labels: newbie
> Fix For: 2.1.0-beta
>
> Attachments: YARN-1081.patch
>
> The output of $ yarn node -list shows the number of running containers at each
> node. I found a case where a new user of YARN thought this was a container ID,
> used it later in other YARN commands, and got an error due to the
> misunderstanding.
> {code:title=current output}
> 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
> 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
> 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id Node-State Node-Http-Address Running-Containers
> 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING myhost:50060 2
> {code}
> {code:title=proposed output}
> 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
> 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
> 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id Node-State Node-Http-Address Number-of-Running-Containers
> 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING myhost:50060 2
> {code}
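The proposed change above only renames the last column header from "Running-Containers" to "Number-of-Running-Containers". A minimal sketch of the kind of formatting involved (the class name and column widths are assumptions for illustration, not the exact ones used by the YARN NodeCLI):

```java
// Illustrative sketch of the $ yarn node -list table formatting with the
// proposed header. Widths here are arbitrary; the real CLI has its own.
public class NodeListHeader {
    public static String header() {
        return String.format("%16s\t%10s\t%17s\t%28s",
            "Node-Id", "Node-State", "Node-Http-Address",
            "Number-of-Running-Containers");
    }

    public static String row(String nodeId, String state, String httpAddr, int running) {
        // right-align the count under the renamed header so the column is
        // clearly a number, not an identifier
        return String.format("%16s\t%10s\t%17s\t%28d", nodeId, state, httpAddr, running);
    }
}
```

The longer, explicit header makes it harder to mistake the count for a container ID, which is the misunderstanding the issue reports.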
[jira] [Assigned] (YARN-49) Improve distributed shell application to work on a secure cluster
[ https://issues.apache.org/jira/browse/YARN-49?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod Kumar Vavilapalli reassigned YARN-49:
-------------------------------------------
Assignee: Vinod Kumar Vavilapalli (was: Omkar Vinit Joshi)

I started working on this, but realized it is a little involved, as no security work has been done in Dist-shell yet. Should have a patch in a day or two.

> Improve distributed shell application to work on a secure cluster
> ------------------------------------------------------------------
>
> Key: YARN-49
> URL: https://issues.apache.org/jira/browse/YARN-49
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: applications/distributed-shell
> Reporter: Hitesh Shah
> Assignee: Vinod Kumar Vavilapalli
[jira] [Commented] (YARN-49) Improve distributed shell application to work on a secure cluster
[ https://issues.apache.org/jira/browse/YARN-49?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748771#comment-13748771 ]

Omkar Vinit Joshi commented on YARN-49:
---------------------------------------

Hi [~kamrul], I was stuck with some other issues. I don't have any patch as of now.

> Improve distributed shell application to work on a secure cluster
> ------------------------------------------------------------------
>
> Key: YARN-49
> URL: https://issues.apache.org/jira/browse/YARN-49
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: applications/distributed-shell
> Reporter: Hitesh Shah
> Assignee: Omkar Vinit Joshi
[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken
[ https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748691#comment-13748691 ]

Jason Lowe commented on YARN-707:
---------------------------------

Tested this on a secure cluster along with the original MAPREDUCE-5475 patch, and it properly identifies the client user, allowing the ACL check to proceed normally. +1 from me, pending Jenkins.

> Add user info in the YARN ClientToken
> --------------------------------------
>
> Key: YARN-707
> URL: https://issues.apache.org/jira/browse/YARN-707
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: Bikas Saha
> Assignee: Vinod Kumar Vavilapalli
> Attachments: YARN-707-20130822.txt
>
> If user info is present in the client token then it can be used to do limited
> authz in the AM.
[jira] [Commented] (YARN-1061) NodeManager is indefinitely waiting for nodeHeartBeat() response from ResourceManager.
[ https://issues.apache.org/jira/browse/YARN-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748469#comment-13748469 ]

Rohith Sharma K S commented on YARN-1061:
-----------------------------------------

I added all the IPC configurations to the log4j.properties file; still the same issue recurred.

bq. How can NM wait infinitely? I mean what is your connection timeout set to?

When I debugged the issue, I found that it is an issue in the IPC layer. This problem occurs in DataNode-to-NameNode communication also. When the process is in the T state (for a running process the state is S1; this can be seen with "ps -p <pid> -o pid,stat"), i.e. the process has been stopped with "kill -stop <pid>", the IPC proxy does not throw any timeout exception. This is because, during proxy creation, the RPC timeout is hardcoded to zero in the RPC.waitForProtocolProxy method. Setting the RPC timeout to zero means the IPC call never throws a timeout exception; instead the IPC client keeps retrying sendPing() to the server (RM). This can be seen in the Client.handleTimeout method:

{noformat}
private void handleTimeout(SocketTimeoutException e) throws IOException {
  if (shouldCloseConnection.get() || !running.get() || rpcTimeout > 0) {
    throw e;
  } else {
    sendPing();
  }
}
{noformat}

I think the RPC timeout should be taken from configuration instead of being hardcoded to 0.

> NodeManager is indefinitely waiting for nodeHeartBeat() response from
> ResourceManager.
> --------------------------------------------------------------------
>
> Key: YARN-1061
> URL: https://issues.apache.org/jira/browse/YARN-1061
> Project: Hadoop YARN
> Issue Type: Bug
> Components: nodemanager
> Affects Versions: 2.0.5-alpha
> Reporter: Rohith Sharma K S
>
> It is observed that in one scenario the NodeManager waits indefinitely for a
> nodeHeartbeat response from the ResourceManager, where the ResourceManager is
> in a hung state.
> The NodeManager should get a timeout exception instead of waiting indefinitely.
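The handleTimeout behavior quoted above can be boiled down to: with rpcTimeout == 0 the socket timeout is swallowed and a ping is sent instead, so the caller waits forever; with rpcTimeout > 0 the exception propagates. A simplified, self-contained sketch of that logic (the class is illustrative and mirrors only the names from the comment, not the actual Hadoop implementation):

```java
import java.net.SocketTimeoutException;

// Simplified model of the IPC client's timeout handling described above.
// rpcTimeout, handleTimeout and sendPing mirror Hadoop's Client class names,
// but this sketch omits the connection-state checks of the real code.
public class TimeoutSketch {
    private final int rpcTimeout; // 0 = "wait forever", as hardcoded by RPC.waitForProtocolProxy
    private int pingsSent = 0;

    public TimeoutSketch(int rpcTimeout) {
        this.rpcTimeout = rpcTimeout;
    }

    // With rpcTimeout == 0 the socket timeout is swallowed and a ping is sent
    // instead (the caller keeps waiting); with rpcTimeout > 0 the exception
    // propagates and the caller can give up.
    public void handleTimeout(SocketTimeoutException e) throws SocketTimeoutException {
        if (rpcTimeout > 0) {
            throw e;
        }
        sendPing();
    }

    private void sendPing() { // stand-in for the real ping RPC to the server
        pingsSent++;
    }

    public int pingsSent() {
        return pingsSent;
    }
}
```

This is why a NodeManager talking to a stopped (kill -stop) ResourceManager never times out: every socket timeout just turns into another ping.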