[jira] [Commented] (YARN-3232) Some application states are not necessarily exposed to users
[ https://issues.apache.org/jira/browse/YARN-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414768#comment-15414768 ] Hadoop QA commented on YARN-3232: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} YARN-3232 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12775788/YARN-3232.002.patch | | JIRA Issue | YARN-3232 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12713/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Some application states are not necessarily exposed to users > > > Key: YARN-3232 > URL: https://issues.apache.org/jira/browse/YARN-3232 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.7.0 >Reporter: Jian He >Assignee: Varun Saxena > Attachments: YARN-3232.002.patch, YARN-3232.01.patch, > YARN-3232.02.patch > > > The application NEW_SAVING and SUBMITTED states are not necessarily exposed to > users, as they are mostly internal to the system, transient, and not user-facing. > We may deprecate these two states and remove them from the web UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5477) ApplicationId should not be visible to client before NEW_SAVING state
[ https://issues.apache.org/jira/browse/YARN-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414763#comment-15414763 ] Rohith Sharma K S commented on YARN-5477: - Closed as duplicate. > ApplicationId should not be visible to client before NEW_SAVING state > - > > Key: YARN-5477 > URL: https://issues.apache.org/jira/browse/YARN-5477 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Yesha Vora >Priority: Critical > > We should not return the applicationId to the client before entering the NEW_SAVING > state. > As per the design, RM restart/failover is not supported while an application is in > the NEW state. Thus, it makes sense to return the appId to the client only after entering the > NEW_SAVING state.
[jira] [Resolved] (YARN-5477) ApplicationId should not be visible to client before NEW_SAVING state
[ https://issues.apache.org/jira/browse/YARN-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S resolved YARN-5477. - Resolution: Duplicate
[jira] [Commented] (YARN-3232) Some application states are not necessarily exposed to users
[ https://issues.apache.org/jira/browse/YARN-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414757#comment-15414757 ] Rohith Sharma K S commented on YARN-3232: - [~varun_saxena] would you rebase the patch?
[jira] [Commented] (YARN-2098) App priority support in Fair Scheduler
[ https://issues.apache.org/jira/browse/YARN-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414756#comment-15414756 ] Wei Yan commented on YARN-2098: --- Sorry... missed this ticket... I'll grab some time to fix it. > App priority support in Fair Scheduler > -- > > Key: YARN-2098 > URL: https://issues.apache.org/jira/browse/YARN-2098 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.5.0 >Reporter: Ashwin Shankar >Assignee: Wei Yan > Attachments: YARN-2098.patch, YARN-2098.patch > > > This JIRA is created to support app priorities in the Fair Scheduler. > AppSchedulable hard-codes the priority of apps to 1; we should > change this to get the priority from ApplicationSubmissionContext.
[jira] [Commented] (YARN-5446) Cluster Resource Usage table is not displayed when user clicks on hadoop logo on top of web UI home page
[ https://issues.apache.org/jira/browse/YARN-5446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414755#comment-15414755 ] Akhil PB commented on YARN-5446: Hi [~sunilg], I have updated the patch with the required naming template. > Cluster Resource Usage table is not displayed when user clicks on hadoop > logo on top of web UI home page > - > > Key: YARN-5446 > URL: https://issues.apache.org/jira/browse/YARN-5446 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Chen Ge >Assignee: Akhil PB > Attachments: YARN-5446-YARN-3368.patch, screenshot-1.png > > > Under the latest YARN-3368 branch, when the user clicks the hadoop icon on the cluster overview page, > "Cluster Resource Usage By Applications" is not correctly displayed. > Following is the error found in the browser console: > {code} > donut-chart.js:110 Uncaught TypeError: Cannot read property 'value' of > undefined(anonymous function) @ > donut-chart.js:110arguments.length.each.value.function.value.textContent @ > {code}
[jira] [Updated] (YARN-5446) Cluster Resource Usage table is not displayed when user clicks on hadoop logo on top of web UI home page
[ https://issues.apache.org/jira/browse/YARN-5446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akhil PB updated YARN-5446: --- Attachment: YARN-5446-YARN-3368.patch
[jira] [Updated] (YARN-5446) Cluster Resource Usage table is not displayed when user clicks on hadoop logo on top of web UI home page
[ https://issues.apache.org/jira/browse/YARN-5446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akhil PB updated YARN-5446: --- Attachment: (was: YARN-5446.patch)
[jira] [Commented] (YARN-5501) Container Pooling in YARN
[ https://issues.apache.org/jira/browse/YARN-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414714#comment-15414714 ] Atri Sharma commented on YARN-5501: --- A few comments: 1) What would be the overhead of maintaining these containers when idle? 2) Would this make sense only for short-lived containers? If so, should we make this behavior configurable? 3) Would the Capacity Scheduler be able to change the resource allocation for these pre-allocated containers? If the containers are VMs, would that not require a VM restart? (I might be missing something here.) Regards, Atri > Container Pooling in YARN > - > > Key: YARN-5501 > URL: https://issues.apache.org/jira/browse/YARN-5501 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Arun Suresh > > This JIRA proposes a method for reducing the container launch latency in > YARN. It introduces a notion of pooling *Unattached Pre-Initialized > Containers*. > Proposal in brief: > * Have a *Pre-Initialized Container Factory* service within the NM to create > these unattached containers. > * The NM would then advertise these containers as special resource types > (this should be possible via YARN-3926). > * When a start container request is received by the node manager for > launching a container requesting this specific type of resource, it will take > one of these unattached pre-initialized containers from the pool, and use it > to service the container request. > * Once the request is complete, the pre-initialized container would be > released and ready to serve another request. > This capability would help reduce container launch latencies and thereby > allow for development of more interactive applications on YARN.
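The pooling flow the proposal describes (a factory pre-initializes unattached containers, a start-container request borrows one from the pool, and it is released once the request completes) can be sketched roughly as below. This is a hypothetical illustration only, assuming a simple bounded pool; none of these class or method names exist in the actual NodeManager API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ContainerPool {
    // Stand-in for an unattached pre-initialized container (hypothetical).
    public static class PreInitContainer {
        final String id;
        PreInitContainer(String id) { this.id = id; }
    }

    private final BlockingQueue<PreInitContainer> pool;

    public ContainerPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            // In the proposal this creation would be done by the
            // "Pre-Initialized Container Factory" service inside the NM.
            pool.offer(new PreInitContainer("preinit-" + i));
        }
    }

    // Borrow a pre-initialized container; returns null when the pool is
    // exhausted (a real scheduler would then fall back to a cold launch).
    public PreInitContainer acquire() { return pool.poll(); }

    // Return the container to the pool so it can serve another request.
    public void release(PreInitContainer c) { pool.offer(c); }
}
```

The bounded queue also makes Atri's first question concrete: idle pooled containers hold their resources for the whole lifetime of the pool, which is the overhead being traded for lower launch latency.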
[jira] [Updated] (YARN-5486) Update OpportunisticContainerAllocatorAMService::allocate method to handle OPPORTUNISTIC container requests
[ https://issues.apache.org/jira/browse/YARN-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-5486: -- Summary: Update OpportunisticContainerAllocatorAMService::allocate method to handle OPPORTUNISTIC container requests (was: Update OpportunisticConatinerAllocatioAMService allocate method to handle OPPORTUNISTIC container requests) > Update OpportunisticContainerAllocatorAMService::allocate method to handle > OPPORTUNISTIC container requests > --- > > Key: YARN-5486 > URL: https://issues.apache.org/jira/browse/YARN-5486 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Arun Suresh >Assignee: Konstantinos Karanasos > > YARN-5457 refactors the Distributed Scheduling framework to move the > container allocator to yarn-server-common. > This JIRA proposes to update the allocate method in the new AM service to use > the OpportunisticContainerAllocator to allocate opportunistic containers.
[jira] [Commented] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414637#comment-15414637 ] Hadoop QA commented on YARN-5453: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 30s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 9s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822933/YARN-5453.03.patch | | JIRA Issue | YARN-5453 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f500dadb0973 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d00d3ad | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12709/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12709/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > FairScheduler#update may skip update demand resource of child queue/app if > current demand reached maxResource > - > > Key: YARN-5453 > URL: https://issues.apache.org/jira/browse/YARN-5453 > Project: Hadoop YARN > Issue Type: Bug >
[jira] [Commented] (YARN-5477) ApplicationId should not be visible to client before NEW_SAVING state
[ https://issues.apache.org/jira/browse/YARN-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414636#comment-15414636 ] Naganarasimha G R commented on YARN-5477: - Yes [~rohithsharma], here too we were thinking the same. Shall we close this JIRA? Thoughts?
[jira] [Commented] (YARN-5488) Table in Application tab overflows beyond page boundary.
[ https://issues.apache.org/jira/browse/YARN-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414612#comment-15414612 ] Hadoop QA commented on YARN-5488: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 32s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m 6s {color} | {color:blue} ASF License check generated no output? {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 52s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:d13f52f | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822938/YARN-5488-YARN-3368.01.patch | | JIRA Issue | YARN-5488 | | Optional Tests | asflicense | | uname | Linux 354cd9991dc1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3368 / aba48e6 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12711/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Table in Application tab overflows beyond page boundary. 
> > > Key: YARN-5488 > URL: https://issues.apache.org/jira/browse/YARN-5488 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash > Attachments: YARN-5488-YARN-3368.01.patch, YARN-5488.01.patch > > > Table in Application tab overflows beyond page boundary and makes the UI look > broken.
[jira] [Updated] (YARN-5488) Table in Application tab overflows beyond page boundary.
[ https://issues.apache.org/jira/browse/YARN-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harish Jaiprakash updated YARN-5488: Attachment: YARN-5488-YARN-3368.01.patch
[jira] [Commented] (YARN-5488) Table in Application tab overflows beyond page boundary.
[ https://issues.apache.org/jira/browse/YARN-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414603#comment-15414603 ] Hadoop QA commented on YARN-5488: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} YARN-5488 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822934/YARN-5488.01.patch | | JIRA Issue | YARN-5488 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12710/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (YARN-5488) Table in Application tab overflows beyond page boundary.
[ https://issues.apache.org/jira/browse/YARN-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harish Jaiprakash updated YARN-5488: Attachment: YARN-5488.01.patch Surrounded the table with another div and added overflow-x: scroll to fix this issue.
[jira] [Updated] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sandflee updated YARN-5453: --- Attachment: YARN-5453.03.patch > FairScheduler#update may skip update demand resource of child queue/app if > current demand reached maxResource > - > > Key: YARN-5453 > URL: https://issues.apache.org/jira/browse/YARN-5453 > Project: Hadoop YARN > Issue Type: Bug >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5453.01.patch, YARN-5453.02.patch, > YARN-5453.03.patch > > > {code} > demand = Resources.createResource(0); > for (FSQueue childQueue : childQueues) { > childQueue.updateDemand(); > Resource toAdd = childQueue.getDemand(); > demand = Resources.add(demand, toAdd); > demand = Resources.componentwiseMin(demand, maxRes); > if (Resources.equals(demand, maxRes)) { > break; > } > } > {code} > If one single queue's demand resource exceeds maxRes, the other queues' demand > resources will not be updated.
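The problem in the quoted loop is that the early break exits before the remaining children ever receive their updateDemand() call, so their cached demand goes stale. A minimal, self-contained sketch of the bug and one possible fix, using plain ints in place of YARN's Resource arithmetic; these names are illustrative, not the actual FairScheduler code or patch:

```java
import java.util.List;

public class DemandSketch {
    // Stand-in for a child queue/app (hypothetical).
    static class Child {
        int demand;          // demand this child reports when updated
        boolean updated;     // whether updateDemand() was invoked
        Child(int d) { demand = d; }
        void updateDemand() { updated = true; }
    }

    // Buggy variant mirroring the quoted loop: once the accumulated demand
    // reaches maxRes, it breaks out, skipping the remaining children.
    static int updateBuggy(List<Child> children, int maxRes) {
        int demand = 0;
        for (Child c : children) {
            c.updateDemand();
            demand = Math.min(demand + c.demand, maxRes);
            if (demand == maxRes) {
                break;       // later children never get updateDemand()
            }
        }
        return demand;
    }

    // Fixed variant: every child is updated; only the parent's accumulated
    // demand is capped at maxRes.
    static int updateFixed(List<Child> children, int maxRes) {
        int demand = 0;
        for (Child c : children) {
            c.updateDemand();
            demand = Math.min(demand + c.demand, maxRes);
        }
        return demand;
    }
}
```

With two children of demand 10 and 5 under maxRes 10, the buggy loop never calls updateDemand() on the second child, while the fixed loop updates both and still returns the capped total of 10.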
[jira] [Commented] (YARN-4833) For Queue AccessControlException client retries multiple times on both RM
[ https://issues.apache.org/jira/browse/YARN-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414590#comment-15414590 ] Sunil G commented on YARN-4833: --- I think the current approach in the latest patch is better and looks fine. Also, since {{AccessControlException}} is removed from the throwable list, the client does not need to worry about such exceptions. [~bibinchundatt], one nit in the test case: could you please unwrap the exception and verify that it is an AccessControlException, and verify the message too? This will help confirm the scenario. > For Queue AccessControlException client retries multiple times on both RM > - > > Key: YARN-4833 > URL: https://issues.apache.org/jira/browse/YARN-4833 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Bibin A Chundatt >Assignee: Bibin A Chundatt > Attachments: 0001-YARN-4833.patch, YARN-4833.0001.patch, > YARN-4833.0002.patch, YARN-4833.0003.patch > > > Submit an application to a queue where ACLs are enabled and the submitting user does not > have access. The client retries until failMaxattempt, 10 times. > {noformat} > 16/03/18 10:01:06 INFO retry.RetryInvocationHandler: Exception while invoking > submitApplication of class ApplicationClientProtocolPBClientImpl over rm1. > Trying to fail over immediately. 
> org.apache.hadoop.security.AccessControlException: User hdfs does not have > permission to submit application_1458273884145_0001 to queue default > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:380) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:291) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:618) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:252) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:483) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2360) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2356) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2356) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) > at > org.apache.hadoop.yarn.ipc.RPCUtil.instantiateIOException(RPCUtil.java:80) > at > org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:119) > at > 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:272) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) > at com.sun.proxy.$Proxy23.submitApplication(Unknown Source) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:261) > at > org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:295) > at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:244) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338) > at java.security.AccessController.doPrivileged(Native Method) > at
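The behavior being fixed here, the client retrying a permanent ACL rejection up to failMaxattempts times across both RMs, comes down to the retry policy treating AccessControlException like a transient failure. A small hypothetical retry predicate illustrating the fail-fast idea; this is not the actual YARN patch, and for self-containment it uses java.security.AccessControlException rather than Hadoop's org.apache.hadoop.security.AccessControlException:

```java
import java.security.AccessControlException;

public class FailFastRetry {
    // Returns true if the failed call should be retried (possibly after a
    // failover to the other RM), false to surface the error immediately.
    static boolean shouldRetry(Exception e, int attempt, int maxAttempts) {
        if (e instanceof AccessControlException) {
            // An ACL rejection is permanent: retrying against either RM
            // will produce the same answer, so fail fast.
            return false;
        }
        // Other errors are treated as potentially transient.
        return attempt < maxAttempts;
    }
}
```

With a predicate like this, the queue-ACL rejection in the stack trace above would be returned to the submitter on the first attempt instead of after ten failovers.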
[jira] [Commented] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414588#comment-15414588 ] Hadoop QA commented on YARN-5483: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 48s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s {color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 21s {color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} branch-2.7 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 9s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in branch-2.7 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 396 unchanged - 1 fixed = 397 total (was 397) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 2393 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 58s {color} | {color:red} The patch 72 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 21s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_101. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 19s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense
[jira] [Commented] (YARN-5334) [YARN-3368] Introduce REFRESH button in various UI pages
[ https://issues.apache.org/jira/browse/YARN-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414564#comment-15414564 ] Sunil G commented on YARN-5334: --- +1. Thanks [~ssomarajapu...@hortonworks.com] for the contribution. Committed to the branch > [YARN-3368] Introduce REFRESH button in various UI pages > > > Key: YARN-5334 > URL: https://issues.apache.org/jira/browse/YARN-5334 > Project: Hadoop YARN > Issue Type: Sub-task > Components: webapp >Reporter: Sunil G >Assignee: Sreenath Somarajapuram > Fix For: YARN-3368 > > Attachments: YARN-5334-YARN-3368-0001.patch, > YARN-5334-YARN-3368-0002.patch > > > It will be better if we have a common Refresh button in all pages to get the > latest information in all tables such as apps/nodes/queue etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5334) [YARN-3368] Introduce REFRESH button in various UI pages
[ https://issues.apache.org/jira/browse/YARN-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414550#comment-15414550 ] Hadoop QA commented on YARN-5334: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m 6s {color} | {color:blue} ASF License check generated no output? {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 33s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:d13f52f | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822268/YARN-5334-YARN-3368-0002.patch | | JIRA Issue | YARN-5334 | | Optional Tests | asflicense | | uname | Linux d03875ada963 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3368 / 3c2c918 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12708/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. 
> [YARN-3368] Introduce REFRESH button in various UI pages > > > Key: YARN-5334 > URL: https://issues.apache.org/jira/browse/YARN-5334 > Project: Hadoop YARN > Issue Type: Sub-task > Components: webapp >Reporter: Sunil G >Assignee: Sreenath Somarajapuram > Attachments: YARN-5334-YARN-3368-0001.patch, > YARN-5334-YARN-3368-0002.patch > > > It will be better if we have a common Refresh button in all pages to get the > latest information in all tables such as apps/nodes/queue etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5496) Make Node Heatmap Chart categories clickable
[ https://issues.apache.org/jira/browse/YARN-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yesha Vora updated YARN-5496: - Description: Make Node Heatmap Chart categories clickable. This Heatmap chart has a few categories, like 10% used, 30% used, etc. These tags should be clickable. If a user clicks on the 10% used tag, it should show hosts with 10% usage. This can be a useful feature for clusters having 1000s of nodes. was: Make Node Heatmap Chart categories clickable. This Heatmap chart has few categories like 10% used, 30% used etc. This tags should be clickable. If user clicks on 10% used tag, it should shows hosts with 10% usage. This can be a useful feature for clusters having 1000s of nodes. > Make Node Heatmap Chart categories clickable > > > Key: YARN-5496 > URL: https://issues.apache.org/jira/browse/YARN-5496 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Yesha Vora > > Make Node Heatmap Chart categories clickable. > This Heatmap chart has a few categories, like 10% used, 30% used, etc. > These tags should be clickable. If a user clicks on the 10% used tag, it should show > hosts with 10% usage. This can be a useful feature for clusters having 1000s > of nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5343) TestContinuousScheduling#testSortedNodes fails intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414514#comment-15414514 ] Yufei Gu commented on YARN-5343: BTW, the reason for the test failure is that {{ContinuousSchedulingThread}} uses {{Thread.sleep}} to wait between two attempts, so advancing the clock in test cases has no effect on {{ContinuousSchedulingThread}}. > TestContinuousScheduling#testSortedNodes fails intermittently > - > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Fix For: 2.9.0 > > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
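The observation above can be sketched in miniature: a scheduling loop that waits on an injected clock, rather than {{Thread.sleep}}, can be driven deterministically from tests by advancing the clock by hand. All class names below are illustrative, not the actual YARN code:

```java
// Hypothetical sketch: an injectable clock makes a polling loop testable,
// which is exactly what Thread.sleep prevents.
interface Clock {
    long getTime();
}

class ManualClock implements Clock {
    private long time = 0;
    public long getTime() { return time; }
    // Tests advance time explicitly instead of sleeping.
    public void tick(long ms) { time += ms; }
}

class ContinuousLoop {
    private final Clock clock;
    private long lastRun = 0;
    private int runs = 0;

    ContinuousLoop(Clock clock) { this.clock = clock; }

    // One polling step: run the scheduling pass only once the interval elapsed
    // on the injected clock.
    void poll(long intervalMs) {
        if (clock.getTime() - lastRun >= intervalMs) {
            lastRun = clock.getTime();
            runs++;
        }
    }

    int getRuns() { return runs; }
}
```

With this shape, a test ticks the `ManualClock` forward and immediately observes another scheduling pass; with a real `Thread.sleep` in the loop, the tick would change nothing.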
[jira] [Commented] (YARN-5343) TestContinuousScheduling#testSortedNodes fails intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414504#comment-15414504 ] Yufei Gu commented on YARN-5343: Thanks for the review and commit, [~kasha]. Thanks [~sandflee] for reporting this bug and providing insights. > TestContinuousScheduling#testSortedNodes fails intermittently > - > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Fix For: 2.9.0 > > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5343) TestContinuousScheduling#testSortedNodes fails intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414492#comment-15414492 ] Hudson commented on YARN-5343: -- SUCCESS: Integrated in Hadoop-trunk-Commit #10250 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10250/]) YARN-5343. TestContinuousScheduling#testSortedNodes fails (kasha: rev 7992c0b42ceb10fd3ca6c4ced4f59b8e8998e046) * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java > TestContinuousScheduling#testSortedNodes fails intermittently > - > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Fix For: 2.9.0 > > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414479#comment-15414479 ] Hadoop QA commented on YARN-5453: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 27s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 34s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 34s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 3 new + 5 unchanged - 2 fixed = 8 total (was 7) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 20s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 19s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 46s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822913/YARN-5453.02.patch | | JIRA Issue | YARN-5453 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4c4e3243fe1a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9c6a438 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | mvninstall | https://builds.apache.org/job/PreCommit-YARN-Build/12707/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | compile | https://builds.apache.org/job/PreCommit-YARN-Build/12707/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | javac |
[jira] [Commented] (YARN-5495) Remove import wildcard in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414478#comment-15414478 ] Ray Chiang commented on YARN-5495: -- RE: unit test failure. The unit test passes in my tree. May or may not be related to YARN-5492. > Remove import wildcard in CapacityScheduler > --- > > Key: YARN-5495 > URL: https://issues.apache.org/jira/browse/YARN-5495 > Project: Hadoop YARN > Issue Type: Task > Components: capacityscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Ray Chiang >Assignee: Ray Chiang >Priority: Trivial > Attachments: YARN-5495.001.patch > > > YARN-4091 swapped a bunch of > org.apache.hadoop.yarn.server.resourcemanager.scheduler with the wildcard > version. Assuming things haven't changed in the Style Guide, we disallow > wildcards in the import. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5343) TestContinuousScheduling#testSortedNodes fails intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-5343: --- Summary: TestContinuousScheduling#testSortedNodes fails intermittently (was: TestContinuousScheduling#testSortedNodes fail intermittently) > TestContinuousScheduling#testSortedNodes fails intermittently > - > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5343) TestContinuousScheduling#testSortedNodes fail intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414465#comment-15414465 ] Karthik Kambatla commented on YARN-5343: Checking this in. > TestContinuousScheduling#testSortedNodes fail intermittently > > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5343) TestContinuousScheduling#testSortedNodes fail intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414458#comment-15414458 ] Hadoop QA commented on YARN-5343: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 33m 32s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 16s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822901/YARN-5343.001.patch | | JIRA Issue | YARN-5343 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux bd8c1292490b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cc48251 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12705/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12705/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > TestContinuousScheduling#testSortedNodes fail intermittently > > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at
[jira] [Created] (YARN-5501) Container Pooling in YARN
Arun Suresh created YARN-5501: - Summary: Container Pooling in YARN Key: YARN-5501 URL: https://issues.apache.org/jira/browse/YARN-5501 Project: Hadoop YARN Issue Type: Improvement Reporter: Arun Suresh This JIRA proposes a method for reducing the container launch latency in YARN. It introduces a notion of pooling *Unattached Pre-Initialized Containers*. Proposal in brief: * Have a *Pre-Initialized Container Factory* service within the NM to create these unattached containers. * The NM would then advertise these containers as special resource types (this should be possible via YARN-3926). * When a start container request is received by the node manager for launching a container requesting this specific type of resource, it will take one of these unattached pre-initialized containers from the pool, and use it to service the container request. * Once the request is complete, the pre-initialized container would be released and ready to serve another request. This capability would help reduce container launch latencies and thereby allow for development of more interactive applications on YARN. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
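The proposed lifecycle (pre-initialize, acquire on a start-container request, release for reuse) can be sketched in a few lines. Everything below is a hypothetical illustration of the pooling idea; none of these class or method names are actual YARN APIs:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of pooling unattached pre-initialized containers.
class PreInitializedContainer {
    final int id;
    boolean attached = false;
    PreInitializedContainer(int id) { this.id = id; }
}

class ContainerPool {
    private final Deque<PreInitializedContainer> pool = new ArrayDeque<>();
    private int nextId = 0;

    // The "Pre-Initialized Container Factory" role: keep the pool topped up.
    void preInitialize(int count) {
        for (int i = 0; i < count; i++) {
            pool.push(new PreInitializedContainer(nextId++));
        }
    }

    // Serve a start-container request from the pool when possible, skipping
    // the full cold-launch path.
    PreInitializedContainer acquire() {
        PreInitializedContainer c = pool.poll();
        if (c == null) {
            c = new PreInitializedContainer(nextId++); // fall back to a cold launch
        }
        c.attached = true;
        return c;
    }

    // Once the request completes, detach the container and return it for reuse.
    void release(PreInitializedContainer c) {
        c.attached = false;
        pool.push(c);
    }

    int available() { return pool.size(); }
}
```

The latency win in the proposal comes from `acquire()` handing back an already-initialized container instead of paying the launch cost on every request.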
[jira] [Commented] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414452#comment-15414452 ] sandflee commented on YARN-5453: Agreed; updated the patch as you suggested. > FairScheduler#update may skip update demand resource of child queue/app if > current demand reached maxResource > - > > Key: YARN-5453 > URL: https://issues.apache.org/jira/browse/YARN-5453 > Project: Hadoop YARN > Issue Type: Bug >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5453.01.patch, YARN-5453.02.patch > > > {code} > demand = Resources.createResource(0); > for (FSQueue childQueue : childQueues) { > childQueue.updateDemand(); > Resource toAdd = childQueue.getDemand(); > demand = Resources.add(demand, toAdd); > demand = Resources.componentwiseMin(demand, maxRes); > if (Resources.equals(demand, maxRes)) { > break; > } > } > {code} > if one single queue's demand resource exceeds maxRes, the other queues' demand > resources will not be updated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
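The shape of the fix discussed above can be sketched with resources modeled as plain ints (an illustrative simplification, not the actual patch): every child still refreshes its own demand; only the parent's aggregation is capped at maxRes, instead of breaking out of the loop early.

```java
import java.util.List;

// Simplified sketch of the FairScheduler#update fix. In the buggy version,
// the loop breaks once the aggregate hits maxRes, so later children never
// get updateDemand() called on them.
class ChildQueue {
    int demand;
    boolean updated = false;
    ChildQueue(int demand) { this.demand = demand; }
    void updateDemand() { updated = true; }  // in YARN this recurses into children
}

class ParentQueue {
    static int updateDemand(List<ChildQueue> children, int maxRes) {
        int demand = 0;
        for (ChildQueue child : children) {
            child.updateDemand();          // always refresh every child
            if (demand < maxRes) {         // only aggregate while below the cap
                demand = Math.min(demand + child.demand, maxRes);
            }
        }
        return demand;
    }
}
```

With demands 5, 10, and 3 under a cap of 8, the parent's demand is clamped to 8 but all three children are still refreshed, which is the behavior the original `break` prevented.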
[jira] [Commented] (YARN-5495) Remove import wildcard in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414451#comment-15414451 ] Hadoop QA commented on YARN-5495: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 0s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 49s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 41s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822896/YARN-5495.001.patch | | JIRA Issue | YARN-5495 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 89c3cae6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 85422bb | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12704/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/12704/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12704/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12704/console
[jira] [Updated] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sandflee updated YARN-5453: --- Attachment: YARN-5453.02.patch > FairScheduler#update may skip update demand resource of child queue/app if > current demand reached maxResource > - > > Key: YARN-5453 > URL: https://issues.apache.org/jira/browse/YARN-5453 > Project: Hadoop YARN > Issue Type: Bug >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5453.01.patch, YARN-5453.02.patch > > > {code} > demand = Resources.createResource(0); > for (FSQueue childQueue : childQueues) { > childQueue.updateDemand(); > Resource toAdd = childQueue.getDemand(); > demand = Resources.add(demand, toAdd); > demand = Resources.componentwiseMin(demand, maxRes); > if (Resources.equals(demand, maxRes)) { > break; > } > } > {code} > if one single queue's demand resource exceeds maxRes, the other queues' demand > resources will not be updated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
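The quoted loop makes the problem concrete: once the capped aggregate equals maxRes, the break fires and updateDemand() is never called on the remaining children. Below is a minimal, self-contained Java sketch of that behavior and of one possible fix. It uses hypothetical stand-in types (a toy ChildQueue with plain ints instead of the real FSQueue/Resource classes) and is not the actual patch, only an illustration of the shape of the bug.

```java
import java.util.Arrays;
import java.util.List;

public class DemandUpdateSketch {
    // Toy stand-in for FSQueue: tracks whether updateDemand() was invoked.
    static class ChildQueue {
        int demand;
        boolean updated;
        ChildQueue(int demand) { this.demand = demand; }
        void updateDemand() { updated = true; }
        int getDemand() { return demand; }
    }

    // Variant matching the quoted code: breaks once the capped aggregate
    // reaches maxRes, so later children never get updateDemand() called.
    static int updateBuggy(List<ChildQueue> children, int maxRes) {
        int demand = 0;
        for (ChildQueue c : children) {
            c.updateDemand();
            demand = Math.min(demand + c.getDemand(), maxRes); // componentwiseMin analogue
            if (demand == maxRes) {
                break; // <-- skips the remaining children
            }
        }
        return demand;
    }

    // One possible fix: keep the cap on the aggregate, drop the early break,
    // so every child still refreshes its own demand.
    static int updateFixed(List<ChildQueue> children, int maxRes) {
        int demand = 0;
        for (ChildQueue c : children) {
            c.updateDemand();
            demand = Math.min(demand + c.getDemand(), maxRes);
        }
        return demand;
    }

    public static void main(String[] args) {
        List<ChildQueue> a = Arrays.asList(new ChildQueue(100), new ChildQueue(5));
        updateBuggy(a, 50);
        System.out.println("buggy updated second child: " + a.get(1).updated);  // false

        List<ChildQueue> b = Arrays.asList(new ChildQueue(100), new ChildQueue(5));
        updateFixed(b, 50);
        System.out.println("fixed updated second child: " + b.get(1).updated);  // true
    }
}
```

Either way the parent's aggregate demand is the same (capped at maxRes); the difference is purely whether each child's own demand got refreshed for later scheduling decisions.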
[jira] [Commented] (YARN-5495) Remove import wildcard in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414449#comment-15414449 ] Ray Chiang commented on YARN-5495: -- Not a problem. I just wanted to check. Thanks for the quick reply. Will start the commit process soon. > Remove import wildcard in CapacityScheduler > --- > > Key: YARN-5495 > URL: https://issues.apache.org/jira/browse/YARN-5495 > Project: Hadoop YARN > Issue Type: Task > Components: capacityscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Ray Chiang >Assignee: Ray Chiang >Priority: Trivial > Attachments: YARN-5495.001.patch > > > YARN-4091 swapped a bunch of > org.apache.hadoop.yarn.server.resourcemanager.scheduler with the wildcard > version. Assuming things haven't changed in the Style Guide, we disallow > wildcards in the import. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
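For readers unfamiliar with the style rule being enforced here, the contrast is simply the following (ResourceScheduler and SchedulerUtils are used only as illustrative class names from that package):

```java
// Disallowed by the style guide: a wildcard import hides which classes
// the file actually depends on.
// import org.apache.hadoop.yarn.server.resourcemanager.scheduler.*;

// Preferred: one explicit import per class actually referenced.
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
```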
[jira] [Commented] (YARN-5343) TestContinuousScheduling#testSortedNodes fail intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414443#comment-15414443 ] Karthik Kambatla commented on YARN-5343: Nice catch, [~sandflee] and [~yufeigu]. The patch looks good to me. +1, pending Jenkins. Will commit it once Jenkins says okay. > TestContinuousScheduling#testSortedNodes fail intermittently > > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414431#comment-15414431 ] sandflee commented on YARN-5483: Updated the patch. > Optimize RMAppAttempt#pullJustFinishedContainers > > > Key: YARN-5483 > URL: https://issues.apache.org/jira/browse/YARN-5483 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5483-branch-2.6.patch, > YARN-5483-branch-2.6.patch.02, YARN-5483-branch-2.7.patch, > YARN-5483-branch-2.7.patch.02, YARN-5483.01.patch, YARN-5483.02.patch, > YARN-5483.03.patch, YARN-5483.04.patch, jprofiler-cpu.png > > > With about 1000 apps running on the cluster, JProfiler found that > pullJustFinishedContainers costs too much CPU. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5492) TestSubmitApplicationWithRMHA is failing sporadically during precommit builds
[ https://issues.apache.org/jira/browse/YARN-5492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414430#comment-15414430 ] Hadoop QA commented on YARN-5492: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 23s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 38s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822892/YARN-5492.001.patch | | JIRA Issue | YARN-5492 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 72cd3e5b67a9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 85422bb | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12703/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/12703/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12703/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12703/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. >
[jira] [Updated] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sandflee updated YARN-5483: --- Attachment: YARN-5483-branch-2.7.patch.02 YARN-5483-branch-2.6.patch.02 > Optimize RMAppAttempt#pullJustFinishedContainers > > > Key: YARN-5483 > URL: https://issues.apache.org/jira/browse/YARN-5483 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5483-branch-2.6.patch, > YARN-5483-branch-2.6.patch.02, YARN-5483-branch-2.7.patch, > YARN-5483-branch-2.7.patch.02, YARN-5483.01.patch, YARN-5483.02.patch, > YARN-5483.03.patch, YARN-5483.04.patch, jprofiler-cpu.png > > > With about 1000 apps running on the cluster, JProfiler found that > pullJustFinishedContainers costs too much CPU. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5498) [Usability] Make UI continue to work and render already loaded models even when there is no network connection
[ https://issues.apache.org/jira/browse/YARN-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5498: Attachment: YARN-5498-YARN-3368.001.patch > [Usability] Make UI continue to work and render already loaded models even > when there is no network connection > -- > > Key: YARN-5498 > URL: https://issues.apache.org/jira/browse/YARN-5498 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Attachments: No_Internet_Connection_Sample.png, > YARN-5498-YARN-3368.001.patch > > > I load the UI in my browser and traverse to all the tabs. Then I disconnect > the network. The tabs "Queues", "Applications" and "Nodes" continue to work > even when there is no network connection. However, the "Cluster Overview" tab > does not work and UI shows "Sorry, Error Occurred.". This tab should also > continue to show the already loaded models, for better usability. > We should also add a small message on the top of the UI when the network > connection is gone. It is very similar to what gmail or other modern > applications do today. An exception of type > {color:red}net::ERR_INTERNET_DISCONNECTED{color} is already thrown, which can > be caught and this small message can be marked visible. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5500) 'Master node' link under application tab is broken
Sumana Sathish created YARN-5500: Summary: 'Master node' link under application tab is broken Key: YARN-5500 URL: https://issues.apache.org/jira/browse/YARN-5500 Project: Hadoop YARN Issue Type: Bug Reporter: Sumana Sathish Priority: Critical Steps to reproduce: * Click on the running application portion on the donut under "Cluster resource usage by applications" * Under App Master Info, there is a link provided for "Master Node". The link is broken. It doesn't redirect to any page. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5498) [Usability] Make UI continue to work and render already loaded models even when there is no network connection
[ https://issues.apache.org/jira/browse/YARN-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5498: Description: I load the UI in my browser and traverse to all the tabs. Then I disconnect the network. The tabs "Queues", "Applications" and "Nodes" continue to work even when there is no network connection. However, the "Cluster Overview" tab does not work and UI shows "Sorry, Error Occurred.". This tab should also continue to show the already loaded models, for better usability. We should also add a small message on the top of the UI when the network connection is gone. It is very similar to what gmail or other modern applications do today. An exception of type {color:red}net::ERR_INTERNET_DISCONNECTED{color} is already thrown, which can be caught and this small message can be marked visible. was: I load the UI in my browser and traverse to all the tabs. Then I disconnect the network. The tabs "Queues", "Applications" and "Nodes" continue to work even when there is no network connection. However, the "Cluster Overview" tab does not work. This tab should also continue to show the already loaded models, for better usability. We should also add a small message on the top of the UI when the network connection is gone. It is very similar to what gmail or other modern applications do today. An exception of type {color:red}net::ERR_INTERNET_DISCONNECTED{color} is already thrown, which can be caught and this small message can be marked visible. > [Usability] Make UI continue to work and render already loaded models even > when there is no network connection > -- > > Key: YARN-5498 > URL: https://issues.apache.org/jira/browse/YARN-5498 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Attachments: No_Internet_Connection_Sample.png > > > I load the UI in my browser and traverse to all the tabs. Then I disconnect > the network. 
The tabs "Queues", "Applications" and "Nodes" continue to work > even when there is no network connection. However, the "Cluster Overview" tab > does not work and UI shows "Sorry, Error Occurred.". This tab should also > continue to show the already loaded models, for better usability. > We should also add a small message on the top of the UI when the network > connection is gone. It is very similar to what gmail or other modern > applications do today. An exception of type > {color:red}net::ERR_INTERNET_DISCONNECTED{color} is already thrown, which can > be caught and this small message can be marked visible. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5498) [Usability] Make UI continue to work and render already loaded models even when there is no network connection
[ https://issues.apache.org/jira/browse/YARN-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5498: Assignee: Gour Saha I have a patch for the first issue where the "Cluster Overview" tab works when the network connection is gone. > [Usability] Make UI continue to work and render already loaded models even > when there is no network connection > -- > > Key: YARN-5498 > URL: https://issues.apache.org/jira/browse/YARN-5498 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Attachments: No_Internet_Connection_Sample.png > > > I load the UI in my browser and traverse to all the tabs. Then I disconnect > the network. The tabs "Queues", "Applications" and "Nodes" continue to work > even when there is no network connection. However, the "Cluster Overview" tab > does not work. This tab should also continue to show the already loaded > models, for better usability. > We should also add a small message on the top of the UI when the network > connection is gone. It is very similar to what gmail or other modern > applications do today. An exception of type > {color:red}net::ERR_INTERNET_DISCONNECTED{color} is already thrown, which can > be caught and this small message can be marked visible. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5499) Logs of container loads first time but fails if you go back and click again
[ https://issues.apache.org/jira/browse/YARN-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sumana Sathish updated YARN-5499: - Description: Steps to reproduce: * Click on Nodes. This page will list nodes of the cluster * Select a node which has running container * Select 'list of containers' * Select any one of the logs link like stdout or stderr. Logs link appear properly * Click back on the browser to go to container page again * Select any other logs link The Url prompts "Sorry, Error Occurred." {code} jquery.js:8630 XMLHttpRequest cannot load http://cn044-10.l42scl.hortonworks.com:8042cn044-10.l42scl.hortonworks.com:…e/containerlogs/container_1469893274276_0261_01_09/launch_container.sh. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.send @ jquery.js:8630ajax @ jquery.js:8166(anonymous function) @ rest-adapter.js:764initializePromise @ ember.debug.js:52308Promise @ ember.debug.js:54158ajax @ rest-adapter.js:729ajax @ yarn-container-log.js:36superWrapper @ ember.debug.js:22060findRecord @ rest-adapter.js:333ember$data$lib$system$store$finders$$_find @ finders.js:18fetchRecord @ store.js:541_fetchRecord @ store.js:595_flushPendingFetchForType @ store.js:641cb @ ember.debug.js:17448forEach @ ember.debug.js:17251forEach @ ember.debug.js:17456flushAllPendingFetches @ store.js:584invoke @ ember.debug.js:320flush @ ember.debug.js:384flush @ ember.debug.js:185end @ ember.debug.js:563run @ ember.debug.js:685run @ ember.debug.js:20105(anonymous function) @ ember.debug.js:23761dispatch @ jquery.js:4435elemData.handle @ jquery.js:4121 ember.debug.js:30877 Error: Adapter operation failed at new Error (native) at Error.EmberError (http://localhost:4200/assets/vendor.js:25278:21) at Error.ember$data$lib$adapters$errors$$AdapterError (http://localhost:4200/assets/vendor.js:91198:50) at Class.handleResponse (http://localhost:4200/assets/vendor.js:92494:16) at 
Class.hash.error (http://localhost:4200/assets/vendor.js:92574:33) at fire (http://localhost:4200/assets/vendor.js:3306:30) at Object.fireWith [as rejectWith] (http://localhost:4200/assets/vendor.js:3418:7) at done (http://localhost:4200/assets/vendor.js:8473:14) at XMLHttpRequest. (http://localhost:4200/assets/vendor.js:8806:9) at Object.send (http://localhost:4200/assets/vendor.js:8837:10) {code} was: Steps to reproduce: *Click on Nodes. This page will list nodes of the cluster *Select a node which has running container *Select 'list of containers' *Select any one of the logs link like stdout or stderr. Logs link appear properly *Click back on the browser to go to container page again *Select any other logs link The Url prompts "Sorry, Error Occurred." {code} jquery.js:8630 XMLHttpRequest cannot load http://cn044-10.l42scl.hortonworks.com:8042cn044-10.l42scl.hortonworks.com:…e/containerlogs/container_1469893274276_0261_01_09/launch_container.sh. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.send @ jquery.js:8630ajax @ jquery.js:8166(anonymous function) @ rest-adapter.js:764initializePromise @ ember.debug.js:52308Promise @ ember.debug.js:54158ajax @ rest-adapter.js:729ajax @ yarn-container-log.js:36superWrapper @ ember.debug.js:22060findRecord @ rest-adapter.js:333ember$data$lib$system$store$finders$$_find @ finders.js:18fetchRecord @ store.js:541_fetchRecord @ store.js:595_flushPendingFetchForType @ store.js:641cb @ ember.debug.js:17448forEach @ ember.debug.js:17251forEach @ ember.debug.js:17456flushAllPendingFetches @ store.js:584invoke @ ember.debug.js:320flush @ ember.debug.js:384flush @ ember.debug.js:185end @ ember.debug.js:563run @ ember.debug.js:685run @ ember.debug.js:20105(anonymous function) @ ember.debug.js:23761dispatch @ jquery.js:4435elemData.handle @ jquery.js:4121 ember.debug.js:30877 Error: Adapter operation failed at new Error (native) at Error.EmberError 
(http://localhost:4200/assets/vendor.js:25278:21) at Error.ember$data$lib$adapters$errors$$AdapterError (http://localhost:4200/assets/vendor.js:91198:50) at Class.handleResponse (http://localhost:4200/assets/vendor.js:92494:16) at Class.hash.error (http://localhost:4200/assets/vendor.js:92574:33) at fire (http://localhost:4200/assets/vendor.js:3306:30) at Object.fireWith [as rejectWith] (http://localhost:4200/assets/vendor.js:3418:7) at done (http://localhost:4200/assets/vendor.js:8473:14) at XMLHttpRequest. (http://localhost:4200/assets/vendor.js:8806:9) at Object.send (http://localhost:4200/assets/vendor.js:8837:10) {code} > Logs of container loads first time but fails if you go back and click again > --- > > Key: YARN-5499 >
[jira] [Updated] (YARN-5498) [Usability] Make UI continue to work and render already loaded models even when there is no network connection
[ https://issues.apache.org/jira/browse/YARN-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5498: Attachment: No_Internet_Connection_Sample.png A sample UI from gmail is attached showing a small message shown when there is no network connection. > [Usability] Make UI continue to work and render already loaded models even > when there is no network connection > -- > > Key: YARN-5498 > URL: https://issues.apache.org/jira/browse/YARN-5498 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha > Attachments: No_Internet_Connection_Sample.png > > > I load the UI in my browser and traverse to all the tabs. Then I disconnect > the network. The tabs "Queues", "Applications" and "Nodes" continue to work > even when there is no network connection. However, the "Cluster Overview" tab > does not work. This tab should also continue to show the already loaded > models, for better usability. > We should also add a small message on the top of the UI when the network > connection is gone. It is very similar to what gmail or other modern > applications do today. An exception of type > {color:red}net::ERR_INTERNET_DISCONNECTED{color} is already thrown, which can > be caught and this small message can be marked visible. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5499) Logs of container loads first time but fails if you go back and click again
Sumana Sathish created YARN-5499: Summary: Logs of container loads first time but fails if you go back and click again Key: YARN-5499 URL: https://issues.apache.org/jira/browse/YARN-5499 Project: Hadoop YARN Issue Type: Bug Reporter: Sumana Sathish Priority: Critical Steps to reproduce: *Click on Nodes. This page will list nodes of the cluster *Select a node which has running container *Select 'list of containers' *Select any one of the logs link like stdout or stderr. Logs link appear properly *Click back on the browser to go to container page again *Select any other logs link The Url prompts "Sorry, Error Occurred." {code} jquery.js:8630 XMLHttpRequest cannot load http://cn044-10.l42scl.hortonworks.com:8042cn044-10.l42scl.hortonworks.com:…e/containerlogs/container_1469893274276_0261_01_09/launch_container.sh. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.send @ jquery.js:8630ajax @ jquery.js:8166(anonymous function) @ rest-adapter.js:764initializePromise @ ember.debug.js:52308Promise @ ember.debug.js:54158ajax @ rest-adapter.js:729ajax @ yarn-container-log.js:36superWrapper @ ember.debug.js:22060findRecord @ rest-adapter.js:333ember$data$lib$system$store$finders$$_find @ finders.js:18fetchRecord @ store.js:541_fetchRecord @ store.js:595_flushPendingFetchForType @ store.js:641cb @ ember.debug.js:17448forEach @ ember.debug.js:17251forEach @ ember.debug.js:17456flushAllPendingFetches @ store.js:584invoke @ ember.debug.js:320flush @ ember.debug.js:384flush @ ember.debug.js:185end @ ember.debug.js:563run @ ember.debug.js:685run @ ember.debug.js:20105(anonymous function) @ ember.debug.js:23761dispatch @ jquery.js:4435elemData.handle @ jquery.js:4121 ember.debug.js:30877 Error: Adapter operation failed at new Error (native) at Error.EmberError (http://localhost:4200/assets/vendor.js:25278:21) at Error.ember$data$lib$adapters$errors$$AdapterError 
(http://localhost:4200/assets/vendor.js:91198:50) at Class.handleResponse (http://localhost:4200/assets/vendor.js:92494:16) at Class.hash.error (http://localhost:4200/assets/vendor.js:92574:33) at fire (http://localhost:4200/assets/vendor.js:3306:30) at Object.fireWith [as rejectWith] (http://localhost:4200/assets/vendor.js:3418:7) at done (http://localhost:4200/assets/vendor.js:8473:14) at XMLHttpRequest. (http://localhost:4200/assets/vendor.js:8806:9) at Object.send (http://localhost:4200/assets/vendor.js:8837:10) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5498) [Usability] Make UI continue to work and render already loaded models even when there is no network connection
Gour Saha created YARN-5498: --- Summary: [Usability] Make UI continue to work and render already loaded models even when there is no network connection Key: YARN-5498 URL: https://issues.apache.org/jira/browse/YARN-5498 Project: Hadoop YARN Issue Type: Sub-task Reporter: Gour Saha I load the UI in my browser and traverse to all the tabs. Then I disconnect the network. The tabs "Queues", "Applications" and "Nodes" continue to work even when there is no network connection. However, the "Cluster Overview" tab does not work. This tab should also continue to show the already loaded models, for better usability. We should also add a small message on the top of the UI when the network connection is gone. It is very similar to what gmail or other modern applications do today. An exception of type {color:red}net::ERR_INTERNET_DISCONNECTED{color} is already thrown, which can be caught and this small message can be marked visible. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5343) TestContinuousScheduling#testSortedNodes fail intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-5343: --- Attachment: YARN-5343.001.patch > TestContinuousScheduling#testSortedNodes fail intermittently > > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5497) Use different color for Undefined and Succeeded final state in application page
Yesha Vora created YARN-5497: Summary: Use different color for Undefined and Succeeded final state in application page Key: YARN-5497 URL: https://issues.apache.org/jira/browse/YARN-5497 Project: Hadoop YARN Issue Type: Sub-task Reporter: Yesha Vora Assignee: Yesha Vora Priority: Trivial When an application is in the RUNNING state, the final status value is set to "Undefined". When an application has succeeded, the final status value is set to "SUCCEEDED". The YARN UI uses the same green color for both of the above final statuses. It would be good to have a different color for each final status value.
[jira] [Commented] (YARN-5495) Remove import wildcard in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414386#comment-15414386 ] Wangda Tan commented on YARN-5495: -- [~rchiang], thanks. Sorry for introducing this error, +1 to the patch, please go ahead and commit it. > Remove import wildcard in CapacityScheduler > --- > > Key: YARN-5495 > URL: https://issues.apache.org/jira/browse/YARN-5495 > Project: Hadoop YARN > Issue Type: Task > Components: capacityscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Ray Chiang >Assignee: Ray Chiang >Priority: Trivial > Attachments: YARN-5495.001.patch > > > YARN-4091 swapped a bunch of > org.apache.hadoop.yarn.server.resourcemanager.scheduler with the wildcard > version. Assuming things haven't changed in the Style Guide, we disallow > wildcards in the import. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5496) Make Node Heatmap Chart categories clickable
Yesha Vora created YARN-5496: Summary: Make Node Heatmap Chart categories clickable Key: YARN-5496 URL: https://issues.apache.org/jira/browse/YARN-5496 Project: Hadoop YARN Issue Type: Sub-task Reporter: Yesha Vora Make Node Heatmap Chart categories clickable. The heatmap chart has a few categories, such as "10% used", "30% used", etc. These tags should be clickable: if a user clicks on the "10% used" tag, the chart should show the hosts with 10% usage. This can be a useful feature for clusters with thousands of nodes.
[jira] [Updated] (YARN-5495) Remove import wildcard in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated YARN-5495: - Summary: Remove import wildcard in CapacityScheduler (was: Clean up imports in CapacityScheduler) > Remove import wildcard in CapacityScheduler > --- > > Key: YARN-5495 > URL: https://issues.apache.org/jira/browse/YARN-5495 > Project: Hadoop YARN > Issue Type: Task > Components: capacityscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Ray Chiang >Assignee: Ray Chiang >Priority: Trivial > Attachments: YARN-5495.001.patch > > > YARN-4091 swapped a bunch of > org.apache.hadoop.yarn.server.resourcemanager.scheduler with the wildcard > version. Assuming things haven't changed in the Style Guide, we disallow > wildcards in the import. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5495) Clean up imports in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414375#comment-15414375 ] Ray Chiang commented on YARN-5495: -- [~leftnoteasy], it looks like this happened (presumably due to an IDE setting) in YARN-4091. If that was unintentional, can we undo it before YARN-5047? > Clean up imports in CapacityScheduler > - > > Key: YARN-5495 > URL: https://issues.apache.org/jira/browse/YARN-5495 > Project: Hadoop YARN > Issue Type: Task > Components: capacityscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Ray Chiang >Assignee: Ray Chiang >Priority: Trivial > Attachments: YARN-5495.001.patch > > > YARN-4091 swapped a bunch of > org.apache.hadoop.yarn.server.resourcemanager.scheduler with the wildcard > version. Assuming things haven't changed in the Style Guide, we disallow > wildcards in the import. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5495) Clean up imports in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated YARN-5495: - Attachment: YARN-5495.001.patch Replace wildcard with individual imports again. > Clean up imports in CapacityScheduler > - > > Key: YARN-5495 > URL: https://issues.apache.org/jira/browse/YARN-5495 > Project: Hadoop YARN > Issue Type: Task > Components: capacityscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Ray Chiang >Assignee: Ray Chiang >Priority: Trivial > Attachments: YARN-5495.001.patch > > > YARN-4091 swapped a bunch of > org.apache.hadoop.yarn.server.resourcemanager.scheduler with the wildcard > version. Assuming things haven't changed in the Style Guide, we disallow > wildcards in the import. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5495) Clean up imports in CapacityScheduler
Ray Chiang created YARN-5495: Summary: Clean up imports in CapacityScheduler Key: YARN-5495 URL: https://issues.apache.org/jira/browse/YARN-5495 Project: Hadoop YARN Issue Type: Task Components: capacityscheduler Affects Versions: 3.0.0-alpha2 Reporter: Ray Chiang Assignee: Ray Chiang Priority: Trivial YARN-4091 swapped a bunch of org.apache.hadoop.yarn.server.resourcemanager.scheduler with the wildcard version. Assuming things haven't changed in the Style Guide, we disallow wildcards in the import. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5494) Nodes page throws "Sorry Error Occurred" message
Yesha Vora created YARN-5494: Summary: Nodes page throws "Sorry Error Occurred" message Key: YARN-5494 URL: https://issues.apache.org/jira/browse/YARN-5494 Project: Hadoop YARN Issue Type: Sub-task Reporter: Yesha Vora Priority: Critical Steps to reproduce: * Click on Nodes. This page will list the nodes of the cluster. * Click on one of the nodes, such as node1 (it will redirect to the http://:4200/#/yarn-node/:31924/:8042 url). This url shows a "Sorry Error Occurred" error. {code}jquery.js:8630 XMLHttpRequest cannot load http://xxx:xxx:8042/ws/v1/node. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.send @ jquery.js:8630 ember.debug.js:30877 Error: Adapter operation failed at new Error (native) at Error.EmberError (http://xxx:4200/assets/vendor.js:25278:21) at Error.ember$data$lib$adapters$errors$$AdapterError (http://xxx:4200/assets/vendor.js:91198:50) at Class.handleResponse (http://xxx:4200/assets/vendor.js:92494:16) at Class.hash.error (http://xxx:4200/assets/vendor.js:92574:33) at fire (http://xxx:4200/assets/vendor.js:3306:30) at Object.fireWith [as rejectWith] (http://xxx:4200/assets/vendor.js:3418:7) at done (http://xxx:4200/assets/vendor.js:8473:14) at XMLHttpRequest. (http://xxx:4200/assets/vendor.js:8806:9) at Object.send (http://xxx:4200/assets/vendor.js:8837:10)onerrorDefault @ ember.debug.js:30877{code}
[jira] [Commented] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414349#comment-15414349 ] Jason Lowe commented on YARN-5483: -- bq. seems we should merge YARN-5262 to 2.6/2.7 too. Good catch! I committed YARN-5262 to 2.8, 2.7, and 2.6. Could you rebase the 2.7 and 2.6 patches? Trunk patch lgtm. > Optimize RMAppAttempt#pullJustFinishedContainers > > > Key: YARN-5483 > URL: https://issues.apache.org/jira/browse/YARN-5483 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5483-branch-2.6.patch, YARN-5483-branch-2.7.patch, > YARN-5483.01.patch, YARN-5483.02.patch, YARN-5483.03.patch, > YARN-5483.04.patch, jprofiler-cpu.png > > > about 1000 app running on cluster, jprofiler found pullJustFinishedContainers > cost too much cpu. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5262) Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM heartbeat
[ https://issues.apache.org/jira/browse/YARN-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-5262: - Fix Version/s: (was: 2.9.0) 2.7.4 2.6.5 2.8.0 Thanks, [~rohithsharma]! I committed this to branch-2.8, branch-2.7, and branch-2.6 as well. > Optimize sending RMNodeFinishedContainersPulledByAMEvent for every AM > heartbeat > --- > > Key: YARN-5262 > URL: https://issues.apache.org/jira/browse/YARN-5262 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Fix For: 2.8.0, 2.6.5, 2.7.4 > > Attachments: 0001-YARN-5262.patch, 0002-YARN-5262.patch > > > It is observed that RM triggers an one event for every > ApplicationMaster#allocate request in the following trace. This is not > necessarily required and it can be optimized such that send only if any > containers are there to acknowledge to NodeManager. > {code} > RMAppAttemptImpl.sendFinishedContainersToNM() line: 1871 > RMAppAttemptImpl.pullJustFinishedContainers() line: 805 > ApplicationMasterService.allocate(AllocateRequest) line: 567 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
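[Editorial note] The optimization tracked in the quoted YARN-5262 trace amounts to guarding the event dispatch so it only fires when there are finished containers to acknowledge to the NodeManager. A minimal, self-contained sketch (hypothetical names; not the actual RMAppAttemptImpl code, and a plain counter stands in for the dispatcher):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the guard described above: only raise a "containers pulled"
// event when there is actually something to acknowledge, instead of one
// event per AM heartbeat.
public class FinishedContainersSketch {
    static int eventsSent = 0; // stand-in for the event dispatcher

    // Hypothetical stand-in for sendFinishedContainersToNM().
    static void sendFinishedContainersToNM(List<String> justFinished) {
        if (justFinished.isEmpty()) {
            return; // nothing to ack; skip the per-heartbeat event
        }
        eventsSent++; // would dispatch RMNodeFinishedContainersPulledByAMEvent
        justFinished.clear();
    }

    public static void main(String[] args) {
        List<String> finished = new ArrayList<>();
        sendFinishedContainersToNM(finished); // empty heartbeat: no event
        finished.add("container_1_0001_01_000002");
        sendFinishedContainersToNM(finished); // one event for the real ack
        System.out.println(eventsSent); // 1
    }
}
```

With ~1000 running apps heartbeating, skipping the empty-list case removes one needless event per allocate call, which matches the CPU hotspot seen in pullJustFinishedContainers on YARN-5483.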
[jira] [Updated] (YARN-5492) TestSubmitApplicationWithRMHA is failing sporadically during precommit builds
[ https://issues.apache.org/jira/browse/YARN-5492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C updated YARN-5492: - Attachment: YARN-5492.001.patch Uploading patch 001. Increasing timeout to 50 seconds. The test code in branch-2.7 seems to have the timeout as 50 seconds. https://github.com/apache/hadoop/blob/branch-2.7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestSubmitApplicationWithRMHA.java#L263 > TestSubmitApplicationWithRMHA is failing sporadically during precommit builds > - > > Key: YARN-5492 > URL: https://issues.apache.org/jira/browse/YARN-5492 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Reporter: Jason Lowe > Attachments: YARN-5492.001.patch > > > I've seen > TestSubmitApplicationWithRMHA#testHandleRMHADuringSubmitApplicationCallWIthoutSavedApplicationState > timeout on some recent YARN precommit builds. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore, FederationPolicyStateStore
[ https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414315#comment-15414315 ] Hadoop QA commented on YARN-5407: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 9s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 
0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 38s {color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 45s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822884/YARN-5407-YARN-2915.v2.patch | | JIRA Issue | YARN-5407 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 07cdcfbc67a7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / dbaebf8 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12702/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12702/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12702/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > In-memory based implementation of the
[jira] [Commented] (YARN-4974) Random test failure:TestRMApplicationHistoryWriter#testRMWritingMassiveHistoryForCapacitySche
[ https://issues.apache.org/jira/browse/YARN-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414291#comment-15414291 ] Eric Badger commented on YARN-4974: --- I think that the problem here is that this test assumes that there will be fairly consistent CPU load on the machine running the test for the duration of the test. In practice, this can be very far from true. Especially since this test takes around 60s to complete, the load could be massively different in each half of the test. I can easily and consistently make this test fail by varying the load on my cpu during the test. For example, I started the test, and after 30 seconds, I ran a stress script which stresses the CPU on my machine. The test finished with (elapsedTime1, elapsedTime2) = (37233, 52861), which very clearly does not fall within the 10% threshold and caused the test to fail. Asking [~zjshen] (creator of initial test) and [~vinodkv] (committer of initial test): Is this test appropriate as a unit test? It relies on the assumption that performance will be consistent over the duration of the run, which is not something that I believe we can assume. To me, this is a performance test, not a unit test. > Random test > failure:TestRMApplicationHistoryWriter#testRMWritingMassiveHistoryForCapacitySche > - > > Key: YARN-4974 > URL: https://issues.apache.org/jira/browse/YARN-4974 > Project: Hadoop YARN > Issue Type: Test > Components: test, yarn >Reporter: Bibin A Chundatt > > https://builds.apache.org/job/PreCommit-YARN-Build/11128/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_77.txt > {noformat} > Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 196.959 sec > <<< FAILURE! 
- in > org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter > testRMWritingMassiveHistoryForCapacitySche(org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter) > Time elapsed: 125.174 sec <<< FAILURE! > java.lang.AssertionError: null > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter.testRMWritingMassiveHistory(TestRMApplicationHistoryWriter.java:441) > at > org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter.testRMWritingMassiveHistoryForCapacitySche(TestRMApplicationHistoryWriter.java:383) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
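[Editorial note] The flakiness argument above can be made concrete: the test compares two wall-clock elapsed times with a relative threshold, which only holds if CPU load stays steady across both halves of the run. A hypothetical sketch of that style of check (the numbers come from the comment; the exact threshold logic in TestRMApplicationHistoryWriter may differ):

```java
// Illustration of an elapsed-time assertion that assumes steady load:
// the two measurements must agree within 10% of the first one.
public class ElapsedTimeCheckSketch {
    // Mirrors the kind of threshold described in the comment above.
    static boolean withinTenPercent(long elapsedTime1, long elapsedTime2) {
        return Math.abs(elapsedTime1 - elapsedTime2) <= elapsedTime1 / 10;
    }

    public static void main(String[] args) {
        // Quiet machine: both halves take about the same time.
        System.out.println(withinTenPercent(37000, 39000)); // true
        // Load spike in the second half: the (37233, 52861) case above.
        System.out.println(withinTenPercent(37233, 52861)); // false
    }
}
```

Any external load change during the ~60s run shifts one measurement but not the other, so the assertion fails through no fault of the code under test — which is why this reads as a performance test rather than a unit test.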
[jira] [Updated] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore, FederationPolicyStateStore
[ https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ellen Hui updated YARN-5407: Attachment: YARN-5407-YARN-2915.v2.patch Wrong diff uploaded previously > In-memory based implementation of the FederationApplicationStateStore, > FederationPolicyStateStore > - > > Key: YARN-5407 > URL: https://issues.apache.org/jira/browse/YARN-5407 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Ellen Hui > Attachments: YARN-5407-YARN-2915.v0.patch, > YARN-5407-YARN-2915.v1.patch, YARN-5407-YARN-2915.v2.patch > > > YARN-5307 defines the FederationApplicationStateStore API. YARN-3664 defines > the FederationPolicyStateStore API. This JIRA tracks an in-memory based > implementation which is useful for both single-box testing and for future > unit tests that depend on the state store. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore, FederationPolicyStateStore
[ https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ellen Hui updated YARN-5407: Attachment: (was: YARN-5408-YARN-2915.v2.patch) > In-memory based implementation of the FederationApplicationStateStore, > FederationPolicyStateStore > - > > Key: YARN-5407 > URL: https://issues.apache.org/jira/browse/YARN-5407 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Ellen Hui > Attachments: YARN-5407-YARN-2915.v0.patch, > YARN-5407-YARN-2915.v1.patch, YARN-5407-YARN-2915.v2.patch > > > YARN-5307 defines the FederationApplicationStateStore API. YARN-3664 defines > the FederationPolicyStateStore API. This JIRA tracks an in-memory based > implementation which is useful for both single-box testing and for future > unit tests that depend on the state store. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore, FederationPolicyStateStore
[ https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414277#comment-15414277 ] Hadoop QA commented on YARN-5407: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} | {color:red} YARN-5407 does not apply to YARN-2915. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822882/YARN-5408-YARN-2915.v2.patch | | JIRA Issue | YARN-5407 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12701/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > In-memory based implementation of the FederationApplicationStateStore, > FederationPolicyStateStore > - > > Key: YARN-5407 > URL: https://issues.apache.org/jira/browse/YARN-5407 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Ellen Hui > Attachments: YARN-5407-YARN-2915.v0.patch, > YARN-5407-YARN-2915.v1.patch, YARN-5408-YARN-2915.v2.patch > > > YARN-5307 defines the FederationApplicationStateStore API. YARN-3664 defines > the FederationPolicyStateStore API. This JIRA tracks an in-memory based > implementation which is useful for both single-box testing and for future > unit tests that depend on the state store. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore, FederationPolicyStateStore
[ https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ellen Hui updated YARN-5407: Attachment: YARN-5408-YARN-2915.v2.patch Address feedback from [~subru] * Move queue from {{SetSubClusterPolicyConfigurationRequest}} to {{SubClusterPolicyConfiguration}} * Add helper methods for registerSubCluster/addApplication/setPolicy in {{FederationStateStoreBaseTest}} > In-memory based implementation of the FederationApplicationStateStore, > FederationPolicyStateStore > - > > Key: YARN-5407 > URL: https://issues.apache.org/jira/browse/YARN-5407 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Ellen Hui > Attachments: YARN-5407-YARN-2915.v0.patch, > YARN-5407-YARN-2915.v1.patch, YARN-5408-YARN-2915.v2.patch > > > YARN-5307 defines the FederationApplicationStateStore API. YARN-3664 defines > the FederationPolicyStateStore API. This JIRA tracks an in-memory based > implementation which is useful for both single-box testing and for future > unit tests that depend on the state store. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers
[ https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414266#comment-15414266 ] Daniel Templeton commented on YARN-5047: Looks like the current patch needs to be rebased. I see merge errors on {{AbstractYarnScheduler}}, {{CapacityScheduler}}, and the test class. > Refactor nodeUpdate across schedulers > - > > Key: YARN-5047 > URL: https://issues.apache.org/jira/browse/YARN-5047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, scheduler >Affects Versions: 3.0.0-alpha1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: YARN-5047.001.patch, YARN-5047.002.patch, > YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch, > YARN-5047.006.patch, YARN-5047.007.patch, YARN-5047.008.patch > > > FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of > commonality in their code. See about refactoring the common parts into > AbstractYARNScheduler. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5493) In leaf queue page, list applications should only show applications from that leaf queues
[ https://issues.apache.org/jira/browse/YARN-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yesha Vora reassigned YARN-5493: Assignee: Yesha Vora > In leaf queue page, list applications should only show applications from that > leaf queues > - > > Key: YARN-5493 > URL: https://issues.apache.org/jira/browse/YARN-5493 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Yesha Vora >Assignee: Yesha Vora > > Steps to reproduce: > * Create 2 queues > * Go to the leaf queue page at the http://:/#/yarn-queue-apps/ > url > * Click on the application list. > Here, it lists all the applications. Instead, it should list only the > applications from that particular leaf queue.
[jira] [Created] (YARN-5493) In leaf queue page, list applications should only show applications from that leaf queues
Yesha Vora created YARN-5493: Summary: In leaf queue page, list applications should only show applications from that leaf queues Key: YARN-5493 URL: https://issues.apache.org/jira/browse/YARN-5493 Project: Hadoop YARN Issue Type: Sub-task Reporter: Yesha Vora Steps to reproduce: * Create 2 queues * Go to the leaf queue page at the http://:/#/yarn-queue-apps/ url * Click on the application list. Here, it lists all the applications. Instead, it should list only the applications from that particular leaf queue.
[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications
[ https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414251#comment-15414251 ] Jason Lowe commented on YARN-5382: -- +1 for the latest trunk and branch-2.7 patches. I'll commit this tomorrow if there are no objections. > RM does not audit log kill request for active applications > -- > > Key: YARN-5382 > URL: https://issues.apache.org/jira/browse/YARN-5382 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.2 >Reporter: Jason Lowe >Assignee: Vrushali C > Attachments: YARN-5382-branch-2.7.01.patch, > YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, > YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, > YARN-5382-branch-2.7.09.patch, YARN-5382-branch-2.7.10.patch, > YARN-5382-branch-2.7.11.patch, YARN-5382-branch-2.7.12.patch, > YARN-5382-branch-2.7.15.patch, YARN-5382.06.patch, YARN-5382.07.patch, > YARN-5382.08.patch, YARN-5382.09.patch, YARN-5382.10.patch, > YARN-5382.11.patch, YARN-5382.12.patch, YARN-5382.13.patch, > YARN-5382.14.patch, YARN-5382.15.patch > > > ClientRMService will audit a kill request but only if it either fails to > issue the kill or if the kill is sent to an already finished application. It > does not create a log entry when the application is active which is arguably > the most important case to audit. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5492) TestSubmitApplicationWithRMHA is failing sporadically during precommit builds
[ https://issues.apache.org/jira/browse/YARN-5492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414225#comment-15414225 ] Jason Lowe commented on YARN-5492: -- {noformat} Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.69 sec <<< FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA testHandleRMHADuringSubmitApplicationCallWithoutSavedApplicationState(org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA) Time elapsed: 5.052 sec <<< ERROR! java.lang.Exception: test timed out after 5000 milliseconds at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1245) at java.lang.Thread.join(Thread.java:1319) at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.stopThreads(AbstractDelegationTokenSecretManager.java:627) at org.apache.hadoop.yarn.server.resourcemanager.RMSecretManagerService.serviceStop(RMSecretManagerService.java:93) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52) at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80) at org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157) at org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:728) at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:1057) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1112) at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToStandby(AdminService.java:364) at 
org.apache.hadoop.yarn.server.resourcemanager.RMHATestBase.explicitFailover(RMHATestBase.java:183) at org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA.testHandleRMHADuringSubmitApplicationCallWithoutSavedApplicationState(TestSubmitApplicationWithRMHA.java:284) {noformat} I'm guessing the 5 second timeout is too low. > TestSubmitApplicationWithRMHA is failing sporadically during precommit builds > - > > Key: YARN-5492 > URL: https://issues.apache.org/jira/browse/YARN-5492 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Reporter: Jason Lowe > > I've seen > TestSubmitApplicationWithRMHA#testHandleRMHADuringSubmitApplicationCallWIthoutSavedApplicationState > timeout on some recent YARN precommit builds. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
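The stack trace shows the test stuck in Thread.join() while stopping the delegation-token secret manager. A minimal, self-contained sketch (plain Java, not the YARN test) of why a tight timeout turns a slow-but-correct shutdown into a spurious failure — the watchdog gives up before the worker finishes. The millisecond values here are illustrative stand-ins for the test's 5-second limit and a slow service stop:

```java
public class TimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        // Worker stands in for a service thread that takes a while to stop.
        Thread worker = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        });
        worker.start();

        long timeoutMs = 50;     // too tight, like a 5 s test timeout on a loaded build machine
        worker.join(timeoutMs);  // returns after timeoutMs even if the worker is still running

        // The worker would have finished fine; the timeout just didn't wait long enough.
        System.out.println(worker.isAlive() ? "timed-out" : "finished");
    }
}
```

Raising the test's timeout annotation (or removing the per-test limit in favor of the surefire-level timeout) is the usual remedy for this class of flakiness.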
[jira] [Created] (YARN-5492) TestSubmitApplicationWithRMHA is failing sporadically during precommit builds
Jason Lowe created YARN-5492: Summary: TestSubmitApplicationWithRMHA is failing sporadically during precommit builds Key: YARN-5492 URL: https://issues.apache.org/jira/browse/YARN-5492 Project: Hadoop YARN Issue Type: Bug Components: test Reporter: Jason Lowe I've seen TestSubmitApplicationWithRMHA#testHandleRMHADuringSubmitApplicationCallWIthoutSavedApplicationState timeout on some recent YARN precommit builds. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414146#comment-15414146 ] Karthik Kambatla commented on YARN-5453: If we can limit the scheduling to under maxResources, I don't see a reason to limit the demand. Limiting to maxResources could be an optimization, but I don't think that really buys us much. > FairScheduler#update may skip update demand resource of child queue/app if > current demand reached maxResource > - > > Key: YARN-5453 > URL: https://issues.apache.org/jira/browse/YARN-5453 > Project: Hadoop YARN > Issue Type: Bug >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5453.01.patch > > > {code} > demand = Resources.createResource(0); > for (FSQueue childQueue : childQueues) { > childQueue.updateDemand(); > Resource toAdd = childQueue.getDemand(); > demand = Resources.add(demand, toAdd); > demand = Resources.componentwiseMin(demand, maxRes); > if (Resources.equals(demand, maxRes)) { > break; > } > } > {code} > if a single queue's demand resource exceeds maxRes, the other queues' demand > resources will not be updated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
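The quoted loop can be modeled with plain ints in place of YARN's Resource objects. This is an illustrative sketch of the reported bug, not FairScheduler code: once the capped running sum reaches maxRes the loop breaks, so updateDemand() is never called on the remaining children.

```java
import java.util.Arrays;
import java.util.List;

public class DemandLoopDemo {
    static class Queue {
        final int demand;
        boolean updated = false;
        Queue(int demand) { this.demand = demand; }
        int updateDemand() { updated = true; return demand; }
    }

    // Mirrors the shape of the quoted FairScheduler loop, with ints for Resources.
    static int aggregateDemand(List<Queue> children, int maxRes) {
        int demand = 0;
        for (Queue q : children) {
            demand = Math.min(demand + q.updateDemand(), maxRes);  // componentwiseMin
            if (demand == maxRes) {
                break;  // the bug: remaining children are skipped entirely
            }
        }
        return demand;
    }

    public static void main(String[] args) {
        // First child alone exceeds maxRes, so the loop breaks before the second child.
        List<Queue> children = Arrays.asList(new Queue(100), new Queue(5));
        int total = aggregateDemand(children, 50);
        System.out.println(total);                    // 50 (capped)
        System.out.println(children.get(1).updated);  // false: demand never refreshed
    }
}
```

Karthik's suggestion amounts to dropping the cap (and the break) from the aggregation, since scheduling is already limited to maxResources elsewhere.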
[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications
[ https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414129#comment-15414129 ] Hadoop QA commented on YARN-5382: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 214 unchanged - 1 fixed = 215 total (was 215) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 40s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 25s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822861/YARN-5382.15.patch | | JIRA Issue | YARN-5382 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0f8e4996b6dd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c4b77ae | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12700/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12700/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/12700/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12700/testReport/ | | modules | C:
[jira] [Comment Edited] (YARN-5382) RM does not audit log kill request for active applications
[ https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414039#comment-15414039 ] Vrushali C edited comment on YARN-5382 at 8/9/16 6:57 PM: -- Uploading patches for trunk as well as branch 2.7 after rebasing to latest. Appreciate everyone's time, effort and patience on reviewing these patches. was (Author: vrushalic): Uploading patches for trunk as well as branch 2.7 after rebasing to latest. > RM does not audit log kill request for active applications > -- > > Key: YARN-5382 > URL: https://issues.apache.org/jira/browse/YARN-5382 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.2 >Reporter: Jason Lowe >Assignee: Vrushali C > Attachments: YARN-5382-branch-2.7.01.patch, > YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, > YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, > YARN-5382-branch-2.7.09.patch, YARN-5382-branch-2.7.10.patch, > YARN-5382-branch-2.7.11.patch, YARN-5382-branch-2.7.12.patch, > YARN-5382-branch-2.7.15.patch, YARN-5382.06.patch, YARN-5382.07.patch, > YARN-5382.08.patch, YARN-5382.09.patch, YARN-5382.10.patch, > YARN-5382.11.patch, YARN-5382.12.patch, YARN-5382.13.patch, > YARN-5382.14.patch, YARN-5382.15.patch > > > ClientRMService will audit a kill request but only if it either fails to > issue the kill or if the kill is sent to an already finished application. It > does not create a log entry when the application is active which is arguably > the most important case to audit. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5476) Not existed application reported as ACCEPTED state by YarnClientImpl
[ https://issues.apache.org/jira/browse/YARN-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414043#comment-15414043 ] Junping Du commented on YARN-5476: -- After discussing with Yesha, we found the root cause: 1. The yarn client loops in application submission until it gets ACCEPTED status from getApplicationReport(). If getApplicationReport() returns an ApplicationNotFound exception, it goes ahead and resubmits the application. 2. The call to getApplicationReport() first checks the RM; if the RM returns ApplicationNotFound, the RM has no info about this application. There are basically two possibilities: a. the app finished and the RM no longer tracks it; b. the app info was not persisted to the RMStateStore before RM failover/restart. The case here is case b. 3. Although the app info has not been persisted to the RMStateStore yet, the app event has already been sent to ATS, so ATS records this app and its initial state - ACCEPTED. getApplicationReport() therefore returns ACCEPTED and the yarn client exits the submission loop, but the app has actually already been forgotten by the RM. As a quick solution, we should defer notifying ATS until the app reaches at least the NEW_SAVING state, so the RM state store has already persisted the application. > Not existed application reported as ACCEPTED state by YarnClientImpl > > > Key: YARN-5476 > URL: https://issues.apache.org/jira/browse/YARN-5476 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Yesha Vora >Assignee: Junping Du >Priority: Critical > > Steps To reproduce: > * Create a cluster with RM HA enabled > * Start a yarn application > * When the yarn application is in NEW state, do an RM failover. > In this case, the application gets an "ApplicationNotFound" exception from YARN, > goes to the ACCEPTED state, and gets stuck. > At this point, if yarn application -status is run, it says that the > application is in ACCEPTED state. > This state is misleading. 
> {code} > hrt_qa@xxx:/root> yarn application -status application_1470379565464_0001 > 16/08/05 17:24:29 INFO impl.TimelineClientImpl: Timeline service address: > https://xxx:8190/ws/v1/timeline/ > 16/08/05 17:24:30 INFO client.AHSProxy: Connecting to Application History > server at xxx/xxx:10200 > 16/08/05 17:24:31 WARN retry.RetryInvocationHandler: Exception while invoking > ApplicationClientProtocolPBClientImpl.getApplicationReport over rm1. Not > retrying because try once and fail. > org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application > with id 'application_1470379565464_0001' doesn't exist in RM. > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:331) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > 
org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) > at > org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101) > at > org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:194) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at >
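The quick solution Junping describes — publishing to ATS only after the app has been persisted in NEW_SAVING — can be sketched with a toy model. The class and method names below are illustrative, not YARN's; the point is the ordering invariant: if persisting happens before publishing, ATS can never report an app that a failed-over RM has forgotten.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SubmitOrdering {
    // Stand-ins for the RMStateStore and the timeline service (ATS).
    final Map<String, String> stateStore = new LinkedHashMap<>();
    final Map<String, String> timeline = new LinkedHashMap<>();

    // Fixed ordering: make the app durable FIRST, then make it visible.
    // A failover between the two steps loses only the ATS record, which
    // is safe; the reverse order loses only the durable record, which is
    // exactly the misleading-ACCEPTED bug described above.
    void submit(String appId) {
        stateStore.put(appId, "NEW_SAVING"); // persist before publish
        timeline.put(appId, "ACCEPTED");
    }

    // The invariant a recovering RM can rely on.
    boolean atsOnlyKnowsPersistedApps() {
        return stateStore.keySet().containsAll(timeline.keySet());
    }

    public static void main(String[] args) {
        SubmitOrdering rm = new SubmitOrdering();
        rm.submit("application_1470379565464_0001");
        System.out.println(rm.atsOnlyKnowsPersistedApps()); // true
    }
}
```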
[jira] [Updated] (YARN-5382) RM does not audit log kill request for active applications
[ https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C updated YARN-5382: - Attachment: YARN-5382.15.patch YARN-5382-branch-2.7.15.patch Uploading patches for trunk as well as branch 2.7 after rebasing to latest. > RM does not audit log kill request for active applications > -- > > Key: YARN-5382 > URL: https://issues.apache.org/jira/browse/YARN-5382 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.2 >Reporter: Jason Lowe >Assignee: Vrushali C > Attachments: YARN-5382-branch-2.7.01.patch, > YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, > YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, > YARN-5382-branch-2.7.09.patch, YARN-5382-branch-2.7.10.patch, > YARN-5382-branch-2.7.11.patch, YARN-5382-branch-2.7.12.patch, > YARN-5382-branch-2.7.15.patch, YARN-5382.06.patch, YARN-5382.07.patch, > YARN-5382.08.patch, YARN-5382.09.patch, YARN-5382.10.patch, > YARN-5382.11.patch, YARN-5382.12.patch, YARN-5382.13.patch, > YARN-5382.14.patch, YARN-5382.15.patch > > > ClientRMService will audit a kill request but only if it either fails to > issue the kill or if the kill is sent to an already finished application. It > does not create a log entry when the application is active which is arguably > the most important case to audit. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler
[ https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naganarasimha G R updated YARN-4329: Assignee: (was: Naganarasimha G R) > Allow fetching exact reason as to why a submitted app is in ACCEPTED state in > Fair Scheduler > > > Key: YARN-4329 > URL: https://issues.apache.org/jira/browse/YARN-4329 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler, resourcemanager >Reporter: Naganarasimha G R > > Similar to YARN-3946, it would be useful to capture possible reason why the > Application is in accepted state in FairScheduler -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4767) Network issues can cause persistent RM UI outage
[ https://issues.apache.org/jira/browse/YARN-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414032#comment-15414032 ] Yishan Yang commented on YARN-4767: --- Is it possible to provide a patch for Hadoop 2.7.2? It's kind of hard to back-port this patch to 2.7.2, since 2.8.0 won't be released soon. Thanks > Network issues can cause persistent RM UI outage > > > Key: YARN-4767 > URL: https://issues.apache.org/jira/browse/YARN-4767 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Affects Versions: 2.7.2 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: YARN-4767.001.patch, YARN-4767.002.patch, > YARN-4767.003.patch, YARN-4767.004.patch, YARN-4767.005.patch, > YARN-4767.006.patch, YARN-4767.007.patch > > > If a network issue causes an AM web app to resolve the RM proxy's address to > something other than what's listed in the allowed proxies list, the > AmIpFilter will 302 redirect the RM proxy's request back to the RM proxy. > The RM proxy will then consume all available handler threads connecting to > itself over and over, resulting in an outage of the web UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5474) Typo mistake in AMRMClient#getRegisteredTimeineClient API
[ https://issues.apache.org/jira/browse/YARN-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414031#comment-15414031 ] Naganarasimha G R commented on YARN-5474: - Thanks for the review and commit, [~rohithsharma] & [~sjlee0] > Typo mistake in AMRMClient#getRegisteredTimeineClient API > - > > Key: YARN-5474 > URL: https://issues.apache.org/jira/browse/YARN-5474 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Naganarasimha G R >Priority: Trivial > Labels: newbie > Fix For: 3.0.0-alpha1 > > Attachments: YARN-5474.v1.001.patch, YARN-5474.v1.002.patch > > > Just found this typo in the API. It can be fixed since ATS has not been > released in any version. > {code} > /** >* Get registered timeline client. >* @return the registered timeline client >*/ > public TimelineClient getRegisteredTimeineClient() { > return this.timelineClient; > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4833) For Queue AccessControlException client retries multiple times on both RM
[ https://issues.apache.org/jira/browse/YARN-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413997#comment-15413997 ] Bibin A Chundatt commented on YARN-4833: The test case failure is not related to the attached patch. I tried running the failed test case locally after applying the patch, and it passes fine. YARN-5491 has been raised to handle the random failure. > For Queue AccessControlException client retries multiple times on both RM > - > > Key: YARN-4833 > URL: https://issues.apache.org/jira/browse/YARN-4833 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Bibin A Chundatt >Assignee: Bibin A Chundatt > Attachments: 0001-YARN-4833.patch, YARN-4833.0001.patch, > YARN-4833.0002.patch, YARN-4833.0003.patch > > > Submit an application to a queue where ACLs are enabled and the submitting > user does not have access. The client retries until failMaxattempt, 10 times. > {noformat} > 16/03/18 10:01:06 INFO retry.RetryInvocationHandler: Exception while invoking > submitApplication of class ApplicationClientProtocolPBClientImpl over rm1. > Trying to fail over immediately. 
> org.apache.hadoop.security.AccessControlException: User hdfs does not have > permission to submit application_1458273884145_0001 to queue default > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:380) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:291) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:618) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:252) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:483) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2360) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2356) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2356) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) > at > org.apache.hadoop.yarn.ipc.RPCUtil.instantiateIOException(RPCUtil.java:80) > at > org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:119) > at > 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:272) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) > at com.sun.proxy.$Proxy23.submitApplication(Unknown Source) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:261) > at > org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:295) > at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:244) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742) > at
[jira] [Commented] (YARN-5137) Make DiskChecker pluggable in NodeManager
[ https://issues.apache.org/jira/browse/YARN-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413985#comment-15413985 ] Hadoop QA commented on YARN-5137: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | 
{color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 384 unchanged - 2 fixed = 384 total (was 386) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 14s {color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 24s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 39s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.logaggregation.TestAggregatedLogFormat | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822670/YARN-5137.006.patch | | JIRA Issue | YARN-5137 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux bfcc4693d2bf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3cd386b | |
[jira] [Created] (YARN-5491) Random Failure TestCapacityScheduler#testCSQueueBlocked
Bibin A Chundatt created YARN-5491: -- Summary: Random Failure TestCapacityScheduler#testCSQueueBlocked Key: YARN-5491 URL: https://issues.apache.org/jira/browse/YARN-5491 Project: Hadoop YARN Issue Type: Bug Components: test Reporter: Bibin A Chundatt Random testcase failure in trunk for org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testCSQueueBlocked https://builds.apache.org/job/PreCommit-YARN-Build/12694/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity/TestCapacityScheduler/testCSQueueBlocked/ {noformat} java.lang.AssertionError: B Used Resource should be 12 GB expected:<12288> but was:<11264> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testCSQueueBlocked(TestCapacityScheduler.java:3667) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5477) ApplicationId should not be visible to client before NEW_SAVING state
[ https://issues.apache.org/jira/browse/YARN-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413931#comment-15413931 ] Rohith Sharma K S commented on YARN-5477: - Basically the issue is with user-facing app states. There is an improvement task, YARN-3232 (patch available), for removing a few states that are not necessarily exposed to users. I think it would be better to continue the discussion over there. > ApplicationId should not be visible to client before NEW_SAVING state > - > > Key: YARN-5477 > URL: https://issues.apache.org/jira/browse/YARN-5477 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Yesha Vora >Priority: Critical > > We should not return the applicationId to the client before entering the > NEW_SAVING state. > As per the design, RM restart/failover is not supported when an application > is in the NEW state. Thus, it makes sense to return the appId to the client > only after entering NEW_SAVING. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5373) NPE listing wildcard directory in containerLaunch
[ https://issues.apache.org/jira/browse/YARN-5373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413916#comment-15413916 ] Karthik Kambatla commented on YARN-5373: Thanks for picking this up, [~templedf]. Quick high-level question. After chowning to the user who would run the container, can we setfacl to give access to user "yarn" as well? Comments on the patch itself: # container-executor.c ## The log messages for failure to open/read directory are missing the word NOT? ## After readdir, I see the patch resets errno. What happens if the first call to readdir fails? Don't we lose the errno and fail to log and return -1? Maybe reset before the readdir call? Or skip resetting altogether? ## For the (dir == NULL), can we invert the operands to (NULL == dir)? # test-container-executor.c - typo: s/existant/existent On the tests, do we need tests with {{linux-container-executor.nonsecure-mode.limit-users}} turned on/off? > NPE listing wildcard directory in containerLaunch > - > > Key: YARN-5373 > URL: https://issues.apache.org/jira/browse/YARN-5373 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.9.0 >Reporter: Haibo Chen >Assignee: Daniel Templeton >Priority: Blocker > Attachments: YARN-5373.001.patch, YARN-5373.002.patch > > > YARN-4958 added support for wildcards in file localization. It introduces an > NPE at > {code:java} > for (File wildLink : directory.listFiles()) { > sb.symlink(new Path(wildLink.toString()), new Path(wildLink.getName())); > } > {code} > When directory.listFiles returns null (only happens in a secure cluster), the > NPE causes the container to fail to launch. > Hive and Oozie jobs fail as a result. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
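For reference, java.io.File.listFiles() returns null — not an empty array — when the path is not a directory or an I/O/permission error occurs, which is what the quoted loop dereferences. A minimal null-safe sketch of the generic pattern (not the actual ContainerLaunch fix; names here are illustrative):

```java
import java.io.File;

public class SafeListing {
    // Lists entry names, failing fast with a clear error instead of an NPE
    // when listFiles() returns null (non-directory, I/O or permission error).
    static String[] listOrFail(File directory) {
        File[] entries = directory.listFiles();
        if (entries == null) {
            throw new IllegalStateException("Could not list " + directory);
        }
        String[] names = new String[entries.length];
        for (int i = 0; i < entries.length; i++) {
            names[i] = entries[i].getName();
        }
        return names;
    }

    public static void main(String[] args) {
        try {
            // A path that is not a listable directory triggers the null branch.
            listOrFail(new File("/definitely/not/a/dir"));
            System.out.println("listed");
        } catch (IllegalStateException e) {
            System.out.println("failed-fast");
        }
    }
}
```

java.nio.file.Files.list() makes the same failure explicit by throwing IOException instead of returning null, which is another way to avoid this class of NPE.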
[jira] [Commented] (YARN-5455) LinuxContainerExecutor needs Javadocs
[ https://issues.apache.org/jira/browse/YARN-5455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413917#comment-15413917 ] Daniel Templeton commented on YARN-5455: The checkstyle issues are all bogus. The unused imports are for the javadocs. The long line can't be made shorter. > LinuxContainerExecutor needs Javadocs > - > > Key: YARN-5455 > URL: https://issues.apache.org/jira/browse/YARN-5455 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-5455.001.patch, YARN-5455.002.patch, > YARN-5455.003.patch, YARN-5455.004.patch > > > 'Nuff said. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5455) LinuxContainerExecutor needs Javadocs
[ https://issues.apache.org/jira/browse/YARN-5455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413913#comment-15413913 ] Hadoop QA commented on YARN-5455: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 6 new + 9 unchanged - 7 fixed = 15 total (was 16) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 240 unchanged - 8 fixed = 240 total (was 248) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 58s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 21s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822837/YARN-5455.004.patch | | JIRA Issue | YARN-5455 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 764ad5f5fccf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3cd386b | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/12697/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12697/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12697/console | | Powered by | Apache Yetus 0.3.0
[jira] [Assigned] (YARN-5452) [YARN-3368] Support scheduler activities in new YARN UI
[ https://issues.apache.org/jira/browse/YARN-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan reassigned YARN-5452: Assignee: Wangda Tan > [YARN-3368] Support scheduler activities in new YARN UI > --- > > Key: YARN-5452 > URL: https://issues.apache.org/jira/browse/YARN-5452 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-5452.1.patch > > > YARN-4091 added a scheduler activities REST API; we can support this in the new > YARN UI as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider
[ https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413907#comment-15413907 ] Daniel Templeton commented on YARN-4997: Thanks for the patch, [~Tao Jie]. Overall looks good to me. A couple of minor comments: {code} LOG.info(authorizer.getClass().getName() + " is destoryed."); {code} Probably best to make the log message at DEBUG level and only do the string concat if debug is on. {code} if (queueInfo == null) { authorizer.setPermission(allocsLoader.getDefaultPermissions(), UserGroupInformation.getCurrentUser()); return; } {code} Since that's in a small method, can we please use an _else_ instead of returning from the _if_? {code} AccessControlList operationAcl = acls.get( SchedulerUtils.toAccessType(operation)); {code} Tiny quibble... Can we split the line on the equals instead of on the paren? {code} authorizer .setPermission(permissions, UserGroupInformation.getCurrentUser()); {code} Similarly, can we break on the comma here instead of the dot? Finally, for the public and protected methods you added, please add javadocs. > Update fair scheduler to use pluggable auth provider > > > Key: YARN-4997 > URL: https://issues.apache.org/jira/browse/YARN-4997 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Tao Jie > Attachments: YARN-4997-001.patch, YARN-4997-002.patch > > > Now that YARN-3100 has made the authorization pluggable, it should be > supported by the fair scheduler. YARN-3100 only updated the capacity > scheduler. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
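The DEBUG-guard suggestion in the review above can be sketched as follows. This is not the fair scheduler code: it uses `java.util.logging` as a stand-in for the commons-logging `Log` Hadoop actually uses, the names are hypothetical, and the quoted message's spelling ("destoryed") is corrected here.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative guarded-debug pattern: build the log message only when
// debug-level output is enabled, so the string concatenation cost is
// skipped on the common (non-debug) path.
public class GuardedLogging {
  private static final Logger LOG =
      Logger.getLogger(GuardedLogging.class.getName());

  // Message construction is factored out so the guard decides whether
  // the concatenation happens at all.
  static String destroyedMessage(Object authorizer) {
    return authorizer.getClass().getName() + " is destroyed.";
  }

  static void logDestroyed(Object authorizer) {
    // The isLoggable() check is the java.util.logging analogue of
    // commons-logging's isDebugEnabled() guard.
    if (LOG.isLoggable(Level.FINE)) {
      LOG.fine(destroyedMessage(authorizer));
    }
  }

  public static void main(String[] args) {
    logDestroyed("example"); // silent unless FINE-level logging is enabled
    System.out.println(destroyedMessage("example"));
  }
}
```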
[jira] [Commented] (YARN-4833) For Queue AccessControlException client retries multiple times on both RM
[ https://issues.apache.org/jira/browse/YARN-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413901#comment-15413901 ] Hadoop QA commented on YARN-4833: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 59s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822829/YARN-4833.0003.patch | | JIRA Issue | YARN-4833 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 49aa228ddbc7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4aba858 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/12694/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/12694/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/12694/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/12694/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > For
[jira] [Commented] (YARN-5334) [YARN-3368] Introduce REFRESH button in various UI pages
[ https://issues.apache.org/jira/browse/YARN-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413878#comment-15413878 ] Sunil G commented on YARN-5334: --- It seems Jenkins has not yet run. Kicking it again. > [YARN-3368] Introduce REFRESH button in various UI pages > > > Key: YARN-5334 > URL: https://issues.apache.org/jira/browse/YARN-5334 > Project: Hadoop YARN > Issue Type: Sub-task > Components: webapp >Reporter: Sunil G >Assignee: Sreenath Somarajapuram > Attachments: YARN-5334-YARN-3368-0001.patch, > YARN-5334-YARN-3368-0002.patch > > > It would be better if we had a common Refresh button on all pages to get the > latest information in all tables such as apps/nodes/queues etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5455) LinuxContainerExecutor needs Javadocs
[ https://issues.apache.org/jira/browse/YARN-5455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-5455: --- Attachment: YARN-5455.004.patch Whoops. Cleaned up errors. > LinuxContainerExecutor needs Javadocs > - > > Key: YARN-5455 > URL: https://issues.apache.org/jira/browse/YARN-5455 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-5455.001.patch, YARN-5455.002.patch, > YARN-5455.003.patch, YARN-5455.004.patch > > > 'Nuff said. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5474) Typo mistake in AMRMClient#getRegisteredTimeineClient API
[ https://issues.apache.org/jira/browse/YARN-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413873#comment-15413873 ] Hudson commented on YARN-5474: -- SUCCESS: Integrated in Hadoop-trunk-Commit #10244 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10244/]) YARN-5474. Typo mistake in AMRMClient#getRegisteredTimeineClient API. (rohithsharmaks: rev 3cd386bd97c05f2bc5d95014f9cf34d0dc4588ee) * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java > Typo mistake in AMRMClient#getRegisteredTimeineClient API > - > > Key: YARN-5474 > URL: https://issues.apache.org/jira/browse/YARN-5474 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Naganarasimha G R >Priority: Trivial > Labels: newbie > Fix For: 3.0.0-alpha1 > > Attachments: YARN-5474.v1.001.patch, YARN-5474.v1.002.patch > > > Just found this typo in the API. It can be fixed since ATS has not been > released in any version. > {code} > /** >* Get registered timeline client. >* @return the registered timeline client >*/ > public TimelineClient getRegisteredTimeineClient() { > return this.timelineClient; > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sandflee updated YARN-5483: --- Attachment: YARN-5483-branch-2.6.patch YARN-5483-branch-2.7.patch > Optimize RMAppAttempt#pullJustFinishedContainers > > > Key: YARN-5483 > URL: https://issues.apache.org/jira/browse/YARN-5483 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5483-branch-2.6.patch, YARN-5483-branch-2.7.patch, > YARN-5483.01.patch, YARN-5483.02.patch, YARN-5483.03.patch, > YARN-5483.04.patch, jprofiler-cpu.png > > > about 1000 app running on cluster, jprofiler found pullJustFinishedContainers > cost too much cpu. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413844#comment-15413844 ] sandflee edited comment on YARN-5483 at 8/9/16 5:08 PM: 1, updated the patch to fix some other diamond operator issues 2, in branch-2.7.patch {code} finishedContainersSentToAM.putIfAbsent(nodeId, new ArrayList<>()); {code} this leads to compile errors about "java.util.ArrayList can't be cast to java.util.List", I don't know why, so I removed the related fix. was (Author: sandflee): 1, updated the patch with minor fixes for diamond operator issues 2, in branch-2.7.patch {code} finishedContainersSentToAM.putIfAbsent(nodeId, new ArrayList<>()); {code} this leads to compile errors about "java.util.ArrayList can't be cast to java.util.List", I don't know why, so I removed the related fix. > Optimize RMAppAttempt#pullJustFinishedContainers > > > Key: YARN-5483 > URL: https://issues.apache.org/jira/browse/YARN-5483 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5483-branch-2.6.patch, YARN-5483-branch-2.7.patch, > YARN-5483.01.patch, YARN-5483.02.patch, YARN-5483.03.patch, > YARN-5483.04.patch, jprofiler-cpu.png > > > about 1000 apps running on the cluster, jprofiler found pullJustFinishedContainers > cost too much cpu. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413857#comment-15413857 ] sandflee commented on YARN-5483: Without YARN-5262, there will be too many FINISHED_CONTAINERS_PULLED_BY_AM events; it seems we should merge YARN-5262 to 2.6/2.7 too. > Optimize RMAppAttempt#pullJustFinishedContainers > > > Key: YARN-5483 > URL: https://issues.apache.org/jira/browse/YARN-5483 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5483-branch-2.6.patch, YARN-5483-branch-2.7.patch, > YARN-5483.01.patch, YARN-5483.02.patch, YARN-5483.03.patch, > YARN-5483.04.patch, jprofiler-cpu.png > > > about 1000 apps running on the cluster, jprofiler found pullJustFinishedContainers > cost too much cpu. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5455) LinuxContainerExecutor needs Javadocs
[ https://issues.apache.org/jira/browse/YARN-5455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413853#comment-15413853 ] Hadoop QA commented on YARN-5455: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 27s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 27s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 4 new + 9 unchanged - 7 fixed = 13 total (was 16) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 29s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 54s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12822828/YARN-5455.003.patch | | JIRA Issue | YARN-5455 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux bc7f75579370 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4aba858 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | mvninstall | https://builds.apache.org/job/PreCommit-YARN-Build/12695/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | compile | https://builds.apache.org/job/PreCommit-YARN-Build/12695/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/12695/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | |
[jira] [Commented] (YARN-4767) Network issues can cause persistent RM UI outage
[ https://issues.apache.org/jira/browse/YARN-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413848#comment-15413848 ] Karthik Kambatla commented on YARN-4767: [~templedf] - I am comfortable with the approach here. I have some nits to point out, but will post the comments along with review of tests. Can we add tests and make progress towards getting this in? [~xgong] and [~vinodkv] - chime in if you can. I will interpret your silence as a go-ahead :) > Network issues can cause persistent RM UI outage > > > Key: YARN-4767 > URL: https://issues.apache.org/jira/browse/YARN-4767 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Affects Versions: 2.7.2 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: YARN-4767.001.patch, YARN-4767.002.patch, > YARN-4767.003.patch, YARN-4767.004.patch, YARN-4767.005.patch, > YARN-4767.006.patch, YARN-4767.007.patch > > > If a network issue causes an AM web app to resolve the RM proxy's address to > something other than what's listed in the allowed proxies list, the > AmIpFilter will 302 redirect the RM proxy's request back to the RM proxy. > The RM proxy will then consume all available handler threads connecting to > itself over and over, resulting in an outage of the web UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413844#comment-15413844 ] sandflee commented on YARN-5483: 1, updated the patch with minor fixes for diamond operator issues 2, in branch-2.7.patch {code} finishedContainersSentToAM.putIfAbsent(nodeId, new ArrayList<>()); {code} this leads to compile errors about "java.util.ArrayList can't be cast to java.util.List", I don't know why, so I removed the related fix. > Optimize RMAppAttempt#pullJustFinishedContainers > > > Key: YARN-5483 > URL: https://issues.apache.org/jira/browse/YARN-5483 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5483-branch-2.6.patch, YARN-5483-branch-2.7.patch, > YARN-5483.01.patch, YARN-5483.02.patch, YARN-5483.03.patch, > YARN-5483.04.patch, jprofiler-cpu.png > > > about 1000 apps running on the cluster, jprofiler found pullJustFinishedContainers > cost too much cpu. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
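The explicit-type workaround sandflee describes for branch-2.7 might look like the sketch below: spell out the element type instead of relying on diamond inference inside the `putIfAbsent` call. The map, key, and value types are hypothetical stand-ins for `RMAppAttempt`'s `finishedContainersSentToAM` bookkeeping, not the actual patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative use of putIfAbsent with an explicit type argument, the form
// that avoids the diamond-inference compile error reported on branch-2.7.
public class PutIfAbsentSketch {
  // Ensures a list exists for the node, then records the finished container.
  static int addFinished(
      ConcurrentMap<String, List<String>> finishedContainersSentToAM,
      String nodeId, String containerId) {
    // Older javac releases can mis-infer "new ArrayList<>()" in this
    // argument position, so the explicit <String> form is used instead.
    finishedContainersSentToAM.putIfAbsent(nodeId, new ArrayList<String>());
    List<String> forNode = finishedContainersSentToAM.get(nodeId);
    forNode.add(containerId);
    return forNode.size();
  }

  public static void main(String[] args) {
    ConcurrentMap<String, List<String>> map =
        new ConcurrentHashMap<String, List<String>>();
    System.out.println(addFinished(map, "node-1", "container_1"));
  }
}
```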
[jira] [Commented] (YARN-5483) Optimize RMAppAttempt#pullJustFinishedContainers
[ https://issues.apache.org/jira/browse/YARN-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413841#comment-15413841 ] Daniel Templeton commented on YARN-5483: LGTM. +1 (non-binding) > Optimize RMAppAttempt#pullJustFinishedContainers > > > Key: YARN-5483 > URL: https://issues.apache.org/jira/browse/YARN-5483 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5483-branch-2.6.patch, YARN-5483-branch-2.7.patch, > YARN-5483.01.patch, YARN-5483.02.patch, YARN-5483.03.patch, > YARN-5483.04.patch, jprofiler-cpu.png > > > about 1000 app running on cluster, jprofiler found pullJustFinishedContainers > cost too much cpu. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5343) TestContinuousScheduling#testSortedNodes fail intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu reassigned YARN-5343: -- Assignee: Yufei Gu > TestContinuousScheduling#testSortedNodes fail intermittently > > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org