[jira] [Updated] (YARN-6369) [YARN-3368] Refactor of yarn-node pages in YARN-UI
[ https://issues.apache.org/jira/browse/YARN-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akhil PB updated YARN-6369:
---------------------------
    Summary: [YARN-3368] Refactor of yarn-node pages in YARN-UI  (was: [YARN-3368] Refactor of yarn-app and yarn-app-attempt pages in YARN-UI)

> [YARN-3368] Refactor of yarn-node pages in YARN-UI
> --
>
>                 Key: YARN-6369
>                 URL: https://issues.apache.org/jira/browse/YARN-6369
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: yarn-ui-v2
>            Reporter: Akhil PB
>            Assignee: Akhil PB
>

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6369) [YARN-3368] Refactor of yarn-node pages in YARN-UI
[ https://issues.apache.org/jira/browse/YARN-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akhil PB updated YARN-6369:
---------------------------
    Description: All yarn-node related pages are separate routes now. node related pages should be in a hierarchical style.

> [YARN-3368] Refactor of yarn-node pages in YARN-UI
> --
>
>                 Key: YARN-6369
>                 URL: https://issues.apache.org/jira/browse/YARN-6369
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: yarn-ui-v2
>            Reporter: Akhil PB
>            Assignee: Akhil PB
>
> All yarn-node related pages are separate routes now. node related pages
> should be in a hierarchical style.
[jira] [Updated] (YARN-6369) [YARN-3368] Refactor of yarn-node related pages in YARN-UI
[ https://issues.apache.org/jira/browse/YARN-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akhil PB updated YARN-6369:
---------------------------
    Summary: [YARN-3368] Refactor of yarn-node related pages in YARN-UI  (was: [YARN-3368] Refactor of yarn-node pages in YARN-UI)

> [YARN-3368] Refactor of yarn-node related pages in YARN-UI
> --
>
>                 Key: YARN-6369
>                 URL: https://issues.apache.org/jira/browse/YARN-6369
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: yarn-ui-v2
>            Reporter: Akhil PB
>            Assignee: Akhil PB
>
> All yarn-node related pages are separate routes now, yarn-node related pages
> should be in a hierarchical style.
[jira] [Updated] (YARN-6369) [YARN-3368] Refactor of yarn-node pages in YARN-UI
[ https://issues.apache.org/jira/browse/YARN-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akhil PB updated YARN-6369:
---------------------------
    Description: All yarn-node related pages are separate routes now, yarn-node related pages should be in a hierarchical style.  (was: All yarn-node related pages are separate routes now. node related pages should be in a hierarchical style.)

> [YARN-3368] Refactor of yarn-node pages in YARN-UI
> --
>
>                 Key: YARN-6369
>                 URL: https://issues.apache.org/jira/browse/YARN-6369
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: yarn-ui-v2
>            Reporter: Akhil PB
>            Assignee: Akhil PB
>
> All yarn-node related pages are separate routes now, yarn-node related pages
> should be in a hierarchical style.
[jira] [Commented] (YARN-7058) Add null check in AMRMClientImpl#getMatchingRequest
[ https://issues.apache.org/jira/browse/YARN-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164265#comment-16164265 ]

Akira Ajisaka commented on YARN-7058:
-------------------------------------
Thank you for the information. I verified the commit of YARN-5753 on trunk can be cherry-picked to branch-2 cleanly, so I'll cherry-pick this.

> Add null check in AMRMClientImpl#getMatchingRequest
> ---
>
>                 Key: YARN-7058
>                 URL: https://issues.apache.org/jira/browse/YARN-7058
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 2.9.0
>            Reporter: Kousuke Saruta
>            Assignee: Kousuke Saruta
>         Attachments: YARN-7058-branch-2.001.patch
>
> As of YARN-4889, the behavior of AMRMClientImpl#getMatchingRequests has
> slightly changed.
> After YARN-4889, the method will throw NPE if the method is called before
> calling addContainerRequest.
[jira] [Updated] (YARN-5753) fix NPE in AMRMClientImpl.getMatchingRequests()
[ https://issues.apache.org/jira/browse/YARN-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated YARN-5753:
--------------------------------
    Fix Version/s: 2.9.0

> fix NPE in AMRMClientImpl.getMatchingRequests()
> ---
>
>                 Key: YARN-5753
>                 URL: https://issues.apache.org/jira/browse/YARN-5753
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Haibo Chen
>            Assignee: Haibo Chen
>             Fix For: 2.9.0, 3.0.0-alpha2
>
>         Attachments: yarn5753.001.patch, yarn5753.002.patch
>
> {code:java}
> RemoteRequestsTable remoteRequestsTable = getTable(0);
> List> matchingRequests =
>     remoteRequestsTable.getMatchingRequests(priority, resourceName,
>         executionType, capability);
> {code}
> remoteRequestsTable can be null, causing NPE.
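The shape of the fix is a plain null guard on the table lookup before it is dereferenced. Below is a minimal sketch of that pattern; the class and field names are hypothetical stand-ins for AMRMClientImpl's internals, not the actual Hadoop code.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for AMRMClientImpl's request bookkeeping.
public class MatchingRequestsSketch {
    // Hypothetical: maps an allocation request id to its outstanding requests.
    private final Map<Long, List<String>> remoteRequestsTables = new HashMap<>();

    public List<String> getMatchingRequests(long allocationRequestId) {
        List<String> table = remoteRequestsTables.get(allocationRequestId);
        if (table == null) {
            // Without this guard, calling getMatchingRequests() before any
            // addContainerRequest() would dereference null and throw NPE.
            return Collections.emptyList();
        }
        return table;
    }
}
```

Returning an empty list keeps the pre-YARN-4889 contract for callers that probe for matches before adding any container request.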
[jira] [Commented] (YARN-5753) fix NPE in AMRMClientImpl.getMatchingRequests()
[ https://issues.apache.org/jira/browse/YARN-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164272#comment-16164272 ]

Akira Ajisaka commented on YARN-5753:
-------------------------------------
Cherry-picked to branch-2.

> fix NPE in AMRMClientImpl.getMatchingRequests()
> ---
>
>                 Key: YARN-5753
>                 URL: https://issues.apache.org/jira/browse/YARN-5753
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Haibo Chen
>            Assignee: Haibo Chen
>             Fix For: 2.9.0, 3.0.0-alpha2
>
>         Attachments: yarn5753.001.patch, yarn5753.002.patch
>
> {code:java}
> RemoteRequestsTable remoteRequestsTable = getTable(0);
> List> matchingRequests =
>     remoteRequestsTable.getMatchingRequests(priority, resourceName,
>         executionType, capability);
> {code}
> remoteRequestsTable can be null, causing NPE.
[jira] [Commented] (YARN-7058) Add null check in AMRMClientImpl#getMatchingRequest
[ https://issues.apache.org/jira/browse/YARN-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164273#comment-16164273 ]

Akira Ajisaka commented on YARN-7058:
-------------------------------------
Backported to branch-2. Closing this as duplicate. Thanks [~sarutak] for the report!

> Add null check in AMRMClientImpl#getMatchingRequest
> ---
>
>                 Key: YARN-7058
>                 URL: https://issues.apache.org/jira/browse/YARN-7058
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 2.9.0
>            Reporter: Kousuke Saruta
>            Assignee: Kousuke Saruta
>         Attachments: YARN-7058-branch-2.001.patch
>
> As of YARN-4889, the behavior of AMRMClientImpl#getMatchingRequests has
> slightly changed.
> After YARN-4889, the method will throw NPE if the method is called before
> calling addContainerRequest.
[jira] [Updated] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated YARN-7157:
------------------------------------
    Attachment: YARN-7157.004.patch

While committing the patch I found a few conflicts, so I rebased the patch and am reattaching it to run Jenkins.

> Support displaying per-user's apps in RM UI page and in secure cluster
> --
>
>                 Key: YARN-7157
>                 URL: https://issues.apache.org/jira/browse/YARN-7157
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: webapp
>            Reporter: Sunil G
>            Assignee: Sunil G
>         Attachments: YARN-7157.001.patch, YARN-7157.002.patch,
> YARN-7157.003.patch, YARN-7157.004.patch
>
> A user who is accessing a secure cluster via a secure UI should be able to
> see only his/her own apps.
> This feature will not break any compatibility as it will be turned off by default
[jira] [Created] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
Rohith Sharma K S created YARN-7188:
---------------------------------------
             Summary: TimelineSchemaCreator fails to create flowrun table
                 Key: YARN-7188
                 URL: https://issues.apache.org/jira/browse/YARN-7188
             Project: Hadoop YARN
          Issue Type: Bug
            Reporter: Rohith Sharma K S

With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create the flow run table.

{noformat}
2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in creating hbase tables:
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
	at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
	at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at
{noformat}
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164427#comment-16164427 ]

Hadoop QA commented on YARN-7157:
---------------------------------
| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 16m 0s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
|  0 | mvndep | 0m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 48s | trunk passed |
| +1 | compile | 11m 41s | trunk passed |
| +1 | checkstyle | 1m 12s | trunk passed |
| +1 | mvnsite | 1m 35s | trunk passed |
| +1 | findbugs | 2m 55s | trunk passed |
| +1 | javadoc | 0m 59s | trunk passed |
|| || || || Patch Compile Tests ||
|  0 | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 20s | the patch passed |
| +1 | compile | 7m 36s | the patch passed |
| +1 | javac | 7m 36s | the patch passed |
| +1 | checkstyle | 1m 9s | the patch passed |
| +1 | mvnsite | 1m 31s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 10s | the patch passed |
| +1 | javadoc | 0m 58s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 0m 40s | hadoop-yarn-api in the patch failed. |
| -1 | unit | 46m 41s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 121m 25s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| | hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7157 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886823/YARN-7157.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux c93274623bfe 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fa6cc43 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/17434/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/17434/artifact/patchprocess/patch-unit-hadoop-yarn
[jira] [Updated] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
[ https://issues.apache.org/jira/browse/YARN-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated YARN-7188:
------------------------------------
    Description:
With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create the flow run table.

{noformat}
2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in creating hbase tables:
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
	at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
	at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at
{noformat}

This is because coprocessor jar expected to be there always in hdfs location by default it is configured by

  was:
With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create the flow run table.

{noformat}
2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in creating hbase tables:
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
	at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
	at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at
{noformat}

> TimelineSchemaCreator fails to create flowrun table
> ---
>
>                 Key: YARN-7188
>                 URL: https://issues.apache.org/jira/browse/YARN-7188
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Rohith Sharma K S
>
> With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create
> the flow run table.
> {noformat}
> 2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in
> creating hbase tables:
> org.apache.hadoop.hbase.DoNotRetryIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist:
> /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want to
> bypass sanity checks
> at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
> at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist:
> /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set
> hbase.table.sanity.checks to false at
> {noformat}
> This is because coprocessor jar expected to be there always in hdfs location
> by default it is configured by
[jira] [Updated] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
[ https://issues.apache.org/jira/browse/YARN-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated YARN-7188:
------------------------------------
    Description:
With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create the flow run table.

{noformat}
2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in creating hbase tables:
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
	at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
	at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at
{noformat}

This is because the coprocessor jar is always expected to be present at the hdfs location. By default it is configured to

{code}
public static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION =
    TIMELINE_SERVICE_PREFIX + "hbase.coprocessor.jar.hdfs.location";

/** default hdfs location for flowrun coprocessor jar. */
public static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR =
    "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar";
{code}

  was:
With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create the flow run table.

{noformat}
2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in creating hbase tables:
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
	at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
	at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at
{noformat}

This is because coprocessor jar expected to be there always in hdfs location by default it is configured by

> TimelineSchemaCreator fails to create flowrun table
> ---
>
>                 Key: YARN-7188
>                 URL: https://issues.apache.org/jira/browse/YARN-7188
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Rohith Sharma K S
>
> With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create
> the flow run table.
> {noformat}
> 2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in
> creating hbase tables:
> org.apache.hadoop.hbase.DoNotRetryIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist:
> /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want to
> bypass sanity checks
> at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
> at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist:
> /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set
> hbase.table.sanity.checks to false at
> {noformat}
> This is because the coprocessor jar is always expected to be present at the
> hdfs location. By default it is configured to
> {code}
> public static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION =
>     TIMELINE_SERVICE_PREFIX + "hbase.coprocessor.jar.hdfs.location";
>
> /** default hdfs location for flowrun coprocessor jar. */
> public static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR =
>     "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar";
> {code}
[jira] [Commented] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
[ https://issues.apache.org/jira/browse/YARN-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164439#comment-16164439 ]

Rohith Sharma K S commented on YARN-7188:
-----------------------------------------
It fails because, while creating the table, it expects the file to be present at the hdfs location.

{noformat}
2017-09-13 15:21:39,051 INFO [main] flow.FlowRunTable: CoprocessorJarPath=/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar
2017-09-13 15:21:39,083 WARN [main] storage.TimelineSchemaCreator: Skip and continue on: org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
{noformat}

I remember it used to work in lower versions of hbase without failing table creation. Maybe some modification in HBase is causing this issue! cc:/ [~vrushalic] [~varun_saxena]

> TimelineSchemaCreator fails to create flowrun table
> ---
>
>                 Key: YARN-7188
>                 URL: https://issues.apache.org/jira/browse/YARN-7188
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Rohith Sharma K S
>
> With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create
> the flow run table.
> {noformat}
> 2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in
> creating hbase tables:
> org.apache.hadoop.hbase.DoNotRetryIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist:
> /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want to
> bypass sanity checks
> at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
> at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist:
> /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set
> hbase.table.sanity.checks to false at
> {noformat}
> This is because the coprocessor jar is always expected to be present at the
> hdfs location. By default it is configured to
> {code}
> public static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION =
>     TIMELINE_SERVICE_PREFIX + "hbase.coprocessor.jar.hdfs.location";
>
> /** default hdfs location for flowrun coprocessor jar. */
> public static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR =
>     "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar";
> {code}
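The failure above comes down to a lookup-with-default: if the coprocessor-jar property is unset, the hard-coded HDFS path is used, and that file must actually exist before table creation succeeds. A minimal sketch of that resolution logic, using a plain Map in place of Hadoop's Configuration class; the fully spelled-out property key assumes TIMELINE_SERVICE_PREFIX resolves to "yarn.timeline-service.", which is an assumption here, not taken from this thread.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the coprocessor-jar path resolution described in the issue;
// a Map stands in for org.apache.hadoop.conf.Configuration.
public class CoprocessorJarPath {
    // Assumed expansion of TIMELINE_SERVICE_PREFIX + suffix.
    static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION =
        "yarn.timeline-service.hbase.coprocessor.jar.hdfs.location";
    // Default path copied from the issue description.
    static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR =
        "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar";

    static String resolve(Map<String, String> conf) {
        // When the property is unset, the default path is used; schema
        // creation then fails unless the jar was copied there beforehand.
        return conf.getOrDefault(FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION,
            DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR);
    }
}
```

So the workaround until the schema creator tolerates a missing jar is either to copy the jar to the resolved path or to point the property at a location that exists.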
[jira] [Updated] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil G updated YARN-7157:
--------------------------
    Attachment: YARN-7157.005.patch

Thanks [~rohithsharma]. Attaching a new patch after fixing the test case failure.

> Support displaying per-user's apps in RM UI page and in secure cluster
> --
>
>                 Key: YARN-7157
>                 URL: https://issues.apache.org/jira/browse/YARN-7157
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: webapp
>            Reporter: Sunil G
>            Assignee: Sunil G
>         Attachments: YARN-7157.001.patch, YARN-7157.002.patch,
> YARN-7157.003.patch, YARN-7157.004.patch, YARN-7157.005.patch
>
> A user who is accessing a secure cluster via a secure UI should be able to
> see only his/her own apps.
> This feature will not break any compatibility as it will be turned off by default
[jira] [Updated] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
[ https://issues.apache.org/jira/browse/YARN-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S updated YARN-7188:
------------------------------------
    Target Version/s: 3.0.0-beta1

> TimelineSchemaCreator fails to create flowrun table
> ---
>
>                 Key: YARN-7188
>                 URL: https://issues.apache.org/jira/browse/YARN-7188
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Rohith Sharma K S
>
> With hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create
> the flow run table.
> {noformat}
> 2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in
> creating hbase tables:
> org.apache.hadoop.hbase.DoNotRetryIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist:
> /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want to
> bypass sanity checks
> at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
> at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist:
> /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set
> hbase.table.sanity.checks to false at
> {noformat}
> This is because the coprocessor jar is always expected to be present at the
> hdfs location. By default it is configured to
> {code}
> public static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION =
>     TIMELINE_SERVICE_PREFIX + "hbase.coprocessor.jar.hdfs.location";
>
> /** default hdfs location for flowrun coprocessor jar. */
> public static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR =
>     "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar";
> {code}
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164516#comment-16164516 ]

Bibin A Chundatt commented on YARN-7157:
----------------------------------------
[~sunilg] Thank you for providing the patch.
# Can we think of grouping all the user-level checks? This would help in avoiding the additional filters.
# Found one whitespace error too while applying; please handle it in the next patch.

> Support displaying per-user's apps in RM UI page and in secure cluster
> --
>
>                 Key: YARN-7157
>                 URL: https://issues.apache.org/jira/browse/YARN-7157
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: webapp
>            Reporter: Sunil G
>            Assignee: Sunil G
>         Attachments: YARN-7157.001.patch, YARN-7157.002.patch,
> YARN-7157.003.patch, YARN-7157.004.patch, YARN-7157.005.patch
>
> A user who is accessing a secure cluster via a secure UI should be able to
> see only his/her own apps.
> This feature will not break any compatibility as it will be turned off by default
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164530#comment-16164530 ] Sunil G commented on YARN-7157: --- Thanks [~bibinchundatt]. There are two aspects: one is the user filter submitted from the user side; this one is an admin filter. The user filter does not need to check for access, as it is just a contains check.
[jira] [Updated] (YARN-7187) branch-2 native compilation broken in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-7187: --- Attachment: YARN-7187-branch-2.01.patch > branch-2 native compilation broken in hadoop-yarn-server-nodemanager > > > Key: YARN-7187 > URL: https://issues.apache.org/jira/browse/YARN-7187 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Varun Saxena >Priority: Blocker > Attachments: YARN-7187-branch-2.01.patch, YARN-7187-branch-2.patch > > > {noformat} > [WARNING] make[2]: Leaving directory > `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native' > [WARNING] make[1]: Leaving directory > `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native' > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c: > In function ‘all_numbers’: > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: > error: ‘for’ loop initial declarations are only allowed in C99 mode > [WARNING]for (int i = 0; i < strlen(input); i++) { > [WARNING]^ > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: > note: use option -std=c99 or -std=gnu99 to compile your code > [WARNING] make[2]: *** > [CMakeFiles/container.dir/main/native/container-executor/impl/utils/string-utils.c.o] > Error 1 > [WARNING] make[2]: *** Waiting for unfinished jobs > [WARNING] > 
/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c: > In function ‘tokenize_docker_command’: > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1193:7: > warning: unused variable ‘c’ [-Wunused-variable] > [WARNING]int c = 0; > [WARNING]^ > [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2 > [WARNING] make: *** [all] Error 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164546#comment-16164546 ] Bibin A Chundatt commented on YARN-7157: [~sunilg] Looks like I didn't explain it clearly. {code} // checkAccess can grab the scheduler lock so call it last boolean allowAccess = checkAccess(callerUGI, application.getUser(), ApplicationAccessType.VIEW_APP, application); if (scope == ApplicationsRequestScope.VIEWABLE && !allowAccess) { continue; } // Given RM is configured to display apps per user, skip apps to which // this caller doesn't have access to view. if (displayPerUserApps && !allowAccess) { continue; } {code} Move this part of the code higher in the loop so that normal web UI access can avoid a lot of checks when {{displayPerUserApps}} is enabled.
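The suggested reordering — run the cheap per-user filter before any costly per-app work — can be sketched in isolation. Everything below ({{AppFilterOrdering}}, the {{String}}-typed owners, the stub {{checkAccess}}) is hypothetical scaffolding showing only the control flow; the real filter consults ACLs rather than plain owner equality:

```java
import java.util.ArrayList;
import java.util.List;

class AppFilterOrdering {
    // Stand-in for the comparatively costly ACL check.
    static boolean checkAccess(String caller, String owner) {
        return caller.equals(owner) || caller.equals("admin");
    }

    static List<String> visibleApps(List<String> owners, String caller,
            boolean displayPerUserApps) {
        List<String> visible = new ArrayList<>();
        for (String owner : owners) {
            // Cheap per-user filter first: when enabled, a simple owner
            // comparison skips most foreign apps before heavier work runs.
            if (displayPerUserApps && !owner.equals(caller)) {
                continue;
            }
            // Costlier check only for apps that survived the cheap filter.
            if (!checkAccess(caller, owner)) {
                continue;
            }
            visible.add(owner);
        }
        return visible;
    }
}
```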
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164569#comment-16164569 ] Hadoop QA commented on YARN-7157: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 42s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}107m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | | hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler | | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | YARN-7157 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886839/YARN-7157.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 637eb997f087 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | |
[jira] [Commented] (YARN-7187) branch-2 native compilation broken in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164582#comment-16164582 ] Hadoop QA commented on YARN-7187: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 25s{color} | {color:green} branch-2 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 18s{color} | {color:red} hadoop-yarn-server-nodemanager in branch-2 failed. 
{color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 24s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 1 unchanged - 2 fixed = 2 total (was 3) {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 11 new + 14 unchanged - 0 fixed = 25 total (was 14) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 52s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:eaf5c66 | | JIRA Issue | YARN-7187 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886855/YARN-7187-branch-2.01.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux c6075052599e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 055bde9 | | Default Java | 1.7.0_151 | | compile | https://builds.apache.org/job/PreCommit-YARN-Build/17436/artifact/patchprocess/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | cc | https://builds.apache.org/job/PreCommit-YARN-Build/17436/artifact/patchprocess/diff-compile-cc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/17436/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17436/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17436/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17436/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | 
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164584#comment-16164584 ] Sunil G commented on YARN-7157: --- [~bibinchundatt], I have an additional thought here: {{checkAccess}} is slightly costlier. So is it better to do all the basic checks first and skip as many apps as possible before checkAccess?
[jira] [Commented] (YARN-7187) branch-2 native compilation broken in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164586#comment-16164586 ] Hadoop QA commented on YARN-7187: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 44s{color} | {color:green} branch-2 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 18s{color} | {color:red} hadoop-yarn-server-nodemanager in branch-2 failed. 
{color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 25s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 1 unchanged - 2 fixed = 2 total (was 3) {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 25s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 11 new + 14 unchanged - 0 fixed = 25 total (was 14) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 5s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:eaf5c66 | | JIRA Issue | YARN-7187 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886855/YARN-7187-branch-2.01.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 76477ae25d3f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 055bde9 | | Default Java | 1.7.0_151 | | compile | https://builds.apache.org/job/PreCommit-YARN-Build/17437/artifact/patchprocess/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | cc | https://builds.apache.org/job/PreCommit-YARN-Build/17437/artifact/patchprocess/diff-compile-cc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/17437/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17437/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17437/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17437/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | 
[jira] [Updated] (YARN-7084) TestSchedulingMonitor#testRMStarts fails sporadically
[ https://issues.apache.org/jira/browse/YARN-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-7084: - Attachment: YARN-7084.001.patch Saw this fail again, and I had a bit of time to take a deeper look. The test is starting the monitor and then _immediately_ checking if the policy was edited: {code} try { monitor.serviceInit(conf); monitor.serviceStart(); } catch (Exception e) { fail("SchedulingMonitor failes to start."); } verify(mPolicy, times(1)).editSchedule(); {code} However, looking at how the monitor actually starts, an asynchronous thread pool does the real work: {code} public void serviceStart() throws Exception { assert !stopped : "starting when already stopped"; ses = Executors.newSingleThreadScheduledExecutor(new ThreadFactory() { public Thread newThread(Runnable r) { Thread t = new Thread(r); t.setName(getName()); return t; } }); handler = ses.scheduleAtFixedRate(new PreemptionChecker(), 0, monitorInterval, TimeUnit.MILLISECONDS); super.serviceStart(); } {code} Therefore there's no guarantee that, when the start method returns, the thread pool has had time to pick up the scheduled task and execute it before the verify check. On the flip side, there's also no guarantee that the thread pool couldn't have edited the schedule multiple times before the verify check if the startup processing was particularly slow or the main thread was somehow stalled for a while. If the intent of the unit test is simply to verify that schedule editing commences when the monitor is started, then I think it's better to use verification with timeout here. However, I'm a little unclear on exactly what semantics the test is really trying to verify. Pinging [~mshen]. I'm attaching a patch that implements the verification-with-timeout approach. The patch also simplifies the unit test by letting exceptions bubble up and fail the test directly rather than catching and failing with an assert. This has the benefit of being able to see the exception that caused the test failure directly rather than a generic failure message that some exception was thrown during the test. > TestSchedulingMonitor#testRMStarts fails sporadically > - > > Key: YARN-7084 > URL: https://issues.apache.org/jira/browse/YARN-7084 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Jason Lowe > Attachments: YARN-7084.001.patch > > > TestSchedulingMonitor has been failing sporadically in precommit builds. > Failures look like this: > {noformat} > Running > org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.802 sec <<< > FAILURE! - in > org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor > testRMStarts(org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor) > Time elapsed: 1.728 sec <<< FAILURE! > org.mockito.exceptions.verification.WantedButNotInvoked: > Wanted but not invoked: > schedulingEditPolicy.editSchedule(); > -> at > org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58) > However, there were other interactions with this mock: > -> at > org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.(SchedulingMonitor.java:50) > -> at > org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61) > -> at > org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:62) > at > org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58) > {noformat}
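In Mockito terms the fix presumably looks like {{verify(mPolicy, timeout(10000).atLeast(1)).editSchedule();}}. The underlying idea — poll until the asynchronous invocation is observed or a deadline passes — can be sketched without Mockito (all names below are illustrative, not the actual patch):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class VerifyWithTimeoutSketch {
    // Poll until count >= 1 or the deadline passes, analogous to
    // Mockito's verify(mock, timeout(ms)) for an asynchronous call.
    static boolean verifyAtLeastOnce(AtomicInteger count, long timeoutMs)
            throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            if (count.get() >= 1) {
                return true;
            }
            Thread.sleep(5);
        }
        return count.get() >= 1;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger edits = new AtomicInteger();
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        // Stand-in for the monitor's PreemptionChecker task.
        ses.scheduleAtFixedRate(edits::incrementAndGet, 0, 100, TimeUnit.MILLISECONDS);
        boolean invoked = verifyAtLeastOnce(edits, 5000);
        ses.shutdownNow();
        System.out.println(invoked ? "editSchedule observed" : "timed out");
    }
}
```

This passes whether the executor fires once or many times before the check, which is exactly the tolerance the flaky {{times(1)}} verification lacked.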
[jira] [Assigned] (YARN-7084) TestSchedulingMonitor#testRMStarts fails sporadically
[ https://issues.apache.org/jira/browse/YARN-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe reassigned YARN-7084: Assignee: Jason Lowe Affects Version/s: 2.8.2 2.9.0 2.7.4 3.0.0-alpha4 Target Version/s: 2.9.0, 3.0.0-beta1, 2.8.3, 2.7.5 > TestSchedulingMonitor#testRMStarts fails sporadically > - > > Key: YARN-7084 > URL: https://issues.apache.org/jira/browse/YARN-7084 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0-alpha4, 2.7.4, 2.9.0, 2.8.2 >Reporter: Jason Lowe >Assignee: Jason Lowe > Attachments: YARN-7084.001.patch > > > TestSchedulingMonitor has been failing sporadically in precommit builds. > Failures look like this: > {noformat} > Running > org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.802 sec <<< > FAILURE! - in > org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor > testRMStarts(org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor) > Time elapsed: 1.728 sec <<< FAILURE! 
> org.mockito.exceptions.verification.WantedButNotInvoked: > Wanted but not invoked: > schedulingEditPolicy.editSchedule(); > -> at > org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58) > However, there were other interactions with this mock: > -> at > org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.(SchedulingMonitor.java:50) > -> at > org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61) > -> at > org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:62) > at > org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7189) Docker doesn't remove containers that error out early
Eric Badger created YARN-7189: - Summary: Docker doesn't remove containers that error out early Key: YARN-7189 URL: https://issues.apache.org/jira/browse/YARN-7189 Project: Hadoop YARN Issue Type: Sub-task Reporter: Eric Badger Assignee: Eric Badger Once the docker run command is executed, the docker container is created unless the return code is 125, meaning that the run command itself failed (https://docs.docker.com/engine/reference/run/#exit-status). Any error path hit after the docker run must remove the container during cleanup. {noformat:title=container-executor.c:launch_docker_container_as_user} {noformat}
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164845#comment-16164845 ] Bibin A Chundatt commented on YARN-7157: [~sunilg] Agree with the fact that {{checkAccess}} is costlier for the first call for a {{user}}. IIUC, the user's groups get cached after the first call; correct me if I am wrong. Also, for completeness of the strict-mode feature, we should handle this in ATS and the {{Jobhistory}} server too, right?
[jira] [Updated] (YARN-7189) Docker doesn't remove containers that error out early
[ https://issues.apache.org/jira/browse/YARN-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated YARN-7189: -- Description: Once the docker run command is executed, the docker container is created unless the return code is 125 meaning that the run command itself failed (https://docs.docker.com/engine/reference/run/#exit-status). Any error that happens after the docker run needs to remove the container during cleanup. {noformat:title=container-executor.c:launch_docker_container_as_user} snprintf(docker_command_with_binary, command_size, "%s %s", docker_binary, docker_command); fprintf(LOGFILE, "Launching docker container...\n"); FILE* start_docker = popen(docker_command_with_binary, "r"); {noformat} was: Once the docker run command is executed, the docker container is created unless the return code is 125 meaning that the run command itself failed (https://docs.docker.com/engine/reference/run/#exit-status). Any error that happens after the docker run needs to remove the container during cleanup. {noformat:title=container-executor.c:launch_docker_container_as_user} {noformat} > Docker doesn't remove containers that error out early > - > > Key: YARN-7189 > URL: https://issues.apache.org/jira/browse/YARN-7189 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Eric Badger >Assignee: Eric Badger > > Once the docker run command is executed, the docker container is created > unless the return code is 125 meaning that the run command itself failed > (https://docs.docker.com/engine/reference/run/#exit-status). Any error that > happens after the docker run needs to remove the container during cleanup. 
> {noformat:title=container-executor.c:launch_docker_container_as_user} > snprintf(docker_command_with_binary, command_size, "%s %s", docker_binary, > docker_command); > fprintf(LOGFILE, "Launching docker container...\n"); > FILE* start_docker = popen(docker_command_with_binary, "r"); > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164845#comment-16164845 ] Bibin A Chundatt edited comment on YARN-7157 at 9/13/17 3:53 PM: - [~sunilg] Agree with the fact that {{checkAccess}} is costlier for the first call for {{user}}. IIUC, usergroups get cached after the first call; correct me if I am wrong. Also, for completeness of the per-user feature, shouldn't we handle this in ATS and the {{Jobhistory}} server too? was (Author: bibinchundatt): [~sunilg] Aggree with the fact that {{checkAccess}} is costlier for the first all for {{user}}. IIUC the usergroups gets cached after the first call for user, correct me if i am wrong. Also to add completeness to strict mode feature . We should handle in ATS and {{Jobhistory}} server too rt ? > Support displaying per-user's apps in RM UI page and in secure cluster > -- > > Key: YARN-7157 > URL: https://issues.apache.org/jira/browse/YARN-7157 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7157.001.patch, YARN-7157.002.patch, > YARN-7157.003.patch, YARN-7157.004.patch, YARN-7157.005.patch > > > A user who is accessing a secure cluster via a secure UI should be able to > see only his/her own apps. > This feature will not break any compatibility as it will be turned off by default -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164869#comment-16164869 ] Sunil G commented on YARN-7157: --- Yes. For different calls, the check will be done again. I will track them in a separate jira. If this is fine, [~rohithsharma]/[~bibinchundatt] please help to commit this. > Support displaying per-user's apps in RM UI page and in secure cluster > -- > > Key: YARN-7157 > URL: https://issues.apache.org/jira/browse/YARN-7157 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7157.001.patch, YARN-7157.002.patch, > YARN-7157.003.patch, YARN-7157.004.patch, YARN-7157.005.patch > > > A user who is accessing a secure cluster via a secure UI should be able to > see only his/her own apps. > This feature will not break any compatibility as it will be turned off by default -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
[ https://issues.apache.org/jira/browse/YARN-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164872#comment-16164872 ] Varun Saxena commented on YARN-7188: Is it a problem? We mention in the documentation that you have to place the jar in the said location. And flow run coprocessor is required to retrieve values correctly from flow run table. > TimelineSchemaCreator fails to create flowrun table > --- > > Key: YARN-7188 > URL: https://issues.apache.org/jira/browse/YARN-7188 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S > > In hbase-1.2.6 which is by default, TimelineSchemaCreator fails to create > flow run table. > {noformat} > 2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in > creating hbase tables: > org.apache.hadoop.hbase.DoNotRetryIOException: > org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: > /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set > hbase.table.sanity.checks to false at conf or table descriptor if you want to > bypass sanity checks > at > org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754) > at > org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615) > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541) > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): > org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: > /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set > hbase.table.sanity.checks to false at > {noformat} > This is because coprocessor jar is always expected to be there always in hdfs > location. By default it is configured to > {code} > public static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION = > TIMELINE_SERVICE_PREFIX > + "hbase.coprocessor.jar.hdfs.location"; > /** default hdfs location for flowrun coprocessor jar. 
*/ > public static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR = > "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar"; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
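Based on the constants in the snippet above, and assuming TIMELINE_SERVICE_PREFIX expands to {{yarn.timeline-service.}}, overriding the default coprocessor jar location would take a yarn-site.xml property of roughly this shape (property name inferred from the code, not quoted from documentation):

```xml
<!-- Assumed name: TIMELINE_SERVICE_PREFIX + "hbase.coprocessor.jar.hdfs.location" -->
<property>
  <name>yarn.timeline-service.hbase.coprocessor.jar.hdfs.location</name>
  <value>/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar</value>
</property>
```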
[jira] [Assigned] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
[ https://issues.apache.org/jira/browse/YARN-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C reassigned YARN-7188: Assignee: Vrushali C > TimelineSchemaCreator fails to create flowrun table > --- > > Key: YARN-7188 > URL: https://issues.apache.org/jira/browse/YARN-7188 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Vrushali C > > In hbase-1.2.6 which is by default, TimelineSchemaCreator fails to create > flow run table. > {noformat} > 2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in > creating hbase tables: > org.apache.hadoop.hbase.DoNotRetryIOException: > org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: > /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set > hbase.table.sanity.checks to false at conf or table descriptor if you want to > bypass sanity checks > at > org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754) > at > org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615) > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541) > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): > org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: > /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set > hbase.table.sanity.checks to false at > {noformat} > This is because coprocessor jar is always expected to be there always in hdfs > location. By default it is configured to > {code} > public static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION = > TIMELINE_SERVICE_PREFIX > + "hbase.coprocessor.jar.hdfs.location"; > /** default hdfs location for flowrun coprocessor jar. 
*/ > public static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR = > "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar"; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
[ https://issues.apache.org/jira/browse/YARN-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164880#comment-16164880 ] Vrushali C commented on YARN-7188: -- The coprocessor jar was earlier picked up from the classpath since it was a statically loaded coprocessor which was enabled for all tables. Hence we had special defensive checks in the coprocessor to ensure it works on flow run table only. Had the jar not been in the classpath, we would have likely had the same error during schema creation as well as during region server restart. As part of YARN-6094, we have updated the coprocessor to be a dynamically loaded, table level one. The documentation changes have also been made. > TimelineSchemaCreator fails to create flowrun table > --- > > Key: YARN-7188 > URL: https://issues.apache.org/jira/browse/YARN-7188 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S > > In hbase-1.2.6 which is by default, TimelineSchemaCreator fails to create > flow run table. 
> {noformat} > 2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in > creating hbase tables: > org.apache.hadoop.hbase.DoNotRetryIOException: > org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: > /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set > hbase.table.sanity.checks to false at conf or table descriptor if you want to > bypass sanity checks > at > org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754) > at > org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615) > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541) > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): > org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: > /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set > hbase.table.sanity.checks to false at > {noformat} > This is because coprocessor jar is always expected to be there always in hdfs > location. By default it is configured to > {code} > public static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION = > TIMELINE_SERVICE_PREFIX > + "hbase.coprocessor.jar.hdfs.location"; > /** default hdfs location for flowrun coprocessor jar. */ > public static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR = > "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar"; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7084) TestSchedulingMonitor#testRMStarts fails sporadically
[ https://issues.apache.org/jira/browse/YARN-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164882#comment-16164882 ] Hadoop QA commented on YARN-7084: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 57s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | | hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | YARN-7084 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886883/YARN-7084.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e8e35c21c2fd 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / fa6cc43 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17438/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17438/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17438/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestSchedulingMonitor#testRMStarts fails sporadically >
[jira] [Commented] (YARN-7187) branch-2 native compilation broken in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164901#comment-16164901 ] Varun Saxena commented on YARN-7187: [~leftnoteasy], cherry-picked YARN-5719 into branch-2. One more issue in the code seems to be that we calculate strlen on every iteration of the for loop, which would be suboptimal. > branch-2 native compilation broken in hadoop-yarn-server-nodemanager > > > Key: YARN-7187 > URL: https://issues.apache.org/jira/browse/YARN-7187 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Varun Saxena >Priority: Blocker > Attachments: YARN-7187-branch-2.01.patch, YARN-7187-branch-2.patch > > > {noformat} > [WARNING] make[2]: Leaving directory > `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native' > [WARNING] make[1]: Leaving directory > `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native' > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c: > In function ‘all_numbers’: > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: > error: ‘for’ loop initial declarations are only allowed in C99 mode > [WARNING]for (int i = 0; i < strlen(input); i++) { > [WARNING]^ > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: > note: use option -std=c99 or -std=gnu99 to compile your code > [WARNING] make[2]: *** > [CMakeFiles/container.dir/main/native/container-executor/impl/utils/string-utils.c.o] > Error 1 > [WARNING] make[2]: *** Waiting for unfinished 
jobs > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c: > In function ‘tokenize_docker_command’: > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1193:7: > warning: unused variable ‘c’ [-Wunused-variable] > [WARNING]int c = 0; > [WARNING]^ > [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2 > [WARNING] make: *** [all] Error 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
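A sketch of the two fixes implied by the log above — moving the loop-variable declaration out of the for header so the file compiles without -std=c99, and computing strlen once instead of in the loop condition. The body is illustrative only; the real function lives in string-utils.c and may differ in detail.

```c
#include <string.h>
#include <ctype.h>

/* Illustrative rewrite of all_numbers: C89-compatible declarations and the
 * string length hoisted out of the loop condition so strlen runs once. */
static int all_numbers(const char *input) {
    size_t i;
    size_t len = strlen(input);   /* computed once, not per iteration */
    for (i = 0; i < len; i++) {
        if (!isdigit((unsigned char) input[i])) {
            return 0;
        }
    }
    return 1;
}
```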
[jira] [Created] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath
Vrushali C created YARN-7190: Summary: Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath Key: YARN-7190 URL: https://issues.apache.org/jira/browse/YARN-7190 Project: Hadoop YARN Issue Type: Sub-task Reporter: Vrushali C [~jlowe] had a good observation about the user classpath getting extra jars in hadoop 2.x brought in with TSv2. If users start picking up Hadoop 2.x's version of HBase jars instead of the ones they shipped with their job, it could be a problem. So when TSv2 is to be used in 2.x, the hbase related jars should come into only the NM classpath, not the user classpath. Here is a list of some jars {code} commons-csv-1.0.jar commons-el-1.0.jar commons-httpclient-3.1.jar disruptor-3.3.0.jar findbugs-annotations-1.3.9-1.jar hbase-annotations-1.2.6.jar hbase-client-1.2.6.jar hbase-common-1.2.6.jar hbase-hadoop2-compat-1.2.6.jar hbase-hadoop-compat-1.2.6.jar hbase-prefix-tree-1.2.6.jar hbase-procedure-1.2.6.jar hbase-protocol-1.2.6.jar hbase-server-1.2.6.jar htrace-core-3.1.0-incubating.jar jamon-runtime-2.4.1.jar jasper-compiler-5.5.23.jar jasper-runtime-5.5.23.jar jcodings-1.0.8.jar joni-2.1.2.jar jsp-2.1-6.1.14.jar jsp-api-2.1-6.1.14.jar jsr311-api-1.1.1.jar metrics-core-2.2.0.jar servlet-api-2.5-6.1.14.jar {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7185) ContainerScheduler should only look at availableResource for GUARANTEED containers when OPPORTUNISTIC container queuing is enabled.
[ https://issues.apache.org/jira/browse/YARN-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164918#comment-16164918 ] Wangda Tan commented on YARN-7185: -- Thanks [~asuresh] for review and commit! > ContainerScheduler should only look at availableResource for GUARANTEED > containers when OPPORTUNISTIC container queuing is enabled. > --- > > Key: YARN-7185 > URL: https://issues.apache.org/jira/browse/YARN-7185 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Sumana Sathish >Assignee: Tan, Wangda >Priority: Blocker > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-7185.001.patch, YARN-7185.002.patch, > YARN-7185.003.patch > > > Found an issue: > When DefaultContainerCalculator is enabled and opportunistic container > allocation is disabled. It is possible that for a NM: > {code} > Σ(allocated-container.vcores) > nm.configured-vores. > {code} > When this happens, ContainerScheduler will report errors like: > bq. ContainerScheduler > (ContainerScheduler.java:pickOpportunisticContainersToKill(458)) - There are > no sufficient resources to start guaranteed. > This will be an incompatible change after 2.8 because before YARN-6706, we > can start containers when DefaultContainerCalculator is configured and vcores > is overallocated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7187) branch-2 native compilation broken in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16164953#comment-16164953 ] Wangda Tan commented on YARN-7187: -- [~varun_saxena], thanks for cherry-picking YARN-5719, as mentioned by Jason, I just pushed YARN-7014 to branch-2 as well. The strlen issue is fixed by YARN-6852 in trunk, probably we can add a separate fix as well. > branch-2 native compilation broken in hadoop-yarn-server-nodemanager > > > Key: YARN-7187 > URL: https://issues.apache.org/jira/browse/YARN-7187 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Varun Saxena >Priority: Blocker > Attachments: YARN-7187-branch-2.01.patch, YARN-7187-branch-2.patch > > > {noformat} > [WARNING] make[2]: Leaving directory > `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native' > [WARNING] make[1]: Leaving directory > `/home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native' > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c: > In function ‘all_numbers’: > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: > error: ‘for’ loop initial declarations are only allowed in C99 mode > [WARNING]for (int i = 0; i < strlen(input); i++) { > [WARNING]^ > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c:33:3: > note: use option -std=c99 or -std=gnu99 to compile your code > [WARNING] make[2]: *** > [CMakeFiles/container.dir/main/native/container-executor/impl/utils/string-utils.c.o] > Error 1 > 
[WARNING] make[2]: *** Waiting for unfinished jobs > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c: > In function ‘tokenize_docker_command’: > [WARNING] > /home/root1/Projects/Branch2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1193:7: > warning: unused variable ‘c’ [-Wunused-variable] > [WARNING]int c = 0; > [WARNING]^ > [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2 > [WARNING] make: *** [all] Error 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Resolved] (YARN-6067) Applications API Service HA
[ https://issues.apache.org/jira/browse/YARN-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He resolved YARN-6067. --- Resolution: Fixed Because the api-server is stateless, users can achieve HA by having a load balancer fronting multiple instances of api-server. If we merge the api-server into RM to achieve HA, then this is a dup of YARN-6626. > Applications API Service HA > --- > > Key: YARN-6067 > URL: https://issues.apache.org/jira/browse/YARN-6067 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha > > We need to start thinking about HA for the Applications API Service. How do > we achieve it? Should API Service become part of the RM process to get a lot > of things for free? Should there be some other strategy? We need to start the > discussion. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-6393) Create a API class for yarn-native-service user-facing constants
[ https://issues.apache.org/jira/browse/YARN-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He reassigned YARN-6393: - Assignee: Jian He > Create a API class for yarn-native-service user-facing constants > > > Key: YARN-6393 > URL: https://issues.apache.org/jira/browse/YARN-6393 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Reporter: Jian He >Assignee: Jian He > > User can use some constants in the json input spec file for later > substitution. > e.g. if user specifies $HOSTNAME in the env section of the input file, it'll > be substituted by AM with the actual host name. We'll need to create an API > class and clearly documents it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Resolved] (YARN-6393) Create a API class for yarn-native-service user-facing constants
[ https://issues.apache.org/jira/browse/YARN-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He resolved YARN-6393. --- Resolution: Fixed This is done as part of YARN-6405. > Create a API class for yarn-native-service user-facing constants > > > Key: YARN-6393 > URL: https://issues.apache.org/jira/browse/YARN-6393 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Reporter: Jian He >Assignee: Jian He > > User can use some constants in the json input spec file for later > substitution. > e.g. if user specifies $HOSTNAME in the env section of the input file, it'll > be substituted by AM with the actual host name. We'll need to create an API > class and clearly documents it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
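For illustration only, an env entry using such a constant might look like the fragment below; the surrounding field names are assumptions for the sketch, not the documented yarn-native-services JSON schema. The AM would replace $HOSTNAME with the actual host name at substitution time.

```json
{
  "env": {
    "WORKER_ADDRESS": "$HOSTNAME:8041"
  }
}
```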
[jira] [Commented] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design
[ https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165016#comment-16165016 ] Grant Sohn commented on YARN-7009: -- +1 (non-binding). > TestNMClient.testNMClientNoCleanupOnStop is flaky by design > --- > > Key: YARN-7009 > URL: https://issues.apache.org/jira/browse/YARN-7009 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi > Attachments: YARN-7009.000.patch, YARN-7009.001.patch, > YARN-7009.002.patch > > > The sleeps to wait for a transition to reinit and than back to running is not > long enough, it can miss the reinit event. > {code} > java.lang.AssertionError: Exception is not expected: > org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform RE_INIT on > [container_1502735389852_0001_01_01]. Current state is [REINITIALIZING, > isReInitializing=true]. > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668) > at > org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214) > at > org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) > at 
java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) > at > org.apache.hadoop.yarn.client.api.impl.TestNMClient.testReInitializeContainer(TestNMClient.java:567) > at > org.apache.hadoop.yarn.client.api.impl.TestNMClient.testContainerManagement(TestNMClient.java:405) > at > org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:214) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Cannot perform > RE_INIT on [container_1502735389852_0001_01_01]. Current state is > [REINITIALIZING, isReInitializing=true]. 
> at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.preReInitializeOrLocalizeCheck(ContainerManagerImpl.java:1772) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1697) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.reInitializeContainer(ContainerManagerImpl.java:1668) > at > org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.reInitializeContainer(ContainerManagementProtocolPBServiceImpl.java:214) > at > org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:237) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) > at java.security.AccessCo
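The flakiness pattern described above — a fixed-length sleep racing a state transition — is usually addressed by polling with a deadline instead. A language-neutral sketch follows; wait_for() and the predicate are hypothetical helpers, not YARN test code.

```c
#include <time.h>

/* Sketch of the wait pattern that replaces a fixed sleep: re-check a
 * condition until it holds or a deadline passes, so a slow transition
 * (e.g. REINITIALIZING back to RUNNING) is waited out rather than missed. */
typedef int (*predicate_fn)(void *arg);

int wait_for(predicate_fn ready, void *arg, int timeout_ms, int poll_ms) {
    struct timespec interval = { poll_ms / 1000, (poll_ms % 1000) * 1000000L };
    int waited_ms = 0;
    while (waited_ms <= timeout_ms) {
        if (ready(arg)) {
            return 1;                  /* condition observed in time */
        }
        nanosleep(&interval, NULL);
        waited_ms += poll_ms;
    }
    return 0;                          /* deadline passed: report timeout */
}

/* Example predicate for demonstration: "ready" once the counter hits zero. */
int countdown_done(void *arg) {
    int *remaining = (int *) arg;
    return (*remaining)-- <= 0;
}
```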
[jira] [Commented] (YARN-7157) Support displaying per-user's apps in RM UI page and in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165032#comment-16165032 ] Rohith Sharma K S commented on YARN-7157: - Assuming no more comments, I am heading towards committing the patch shortly! > Support displaying per-user's apps in RM UI page and in secure cluster > -- > > Key: YARN-7157 > URL: https://issues.apache.org/jira/browse/YARN-7157 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7157.001.patch, YARN-7157.002.patch, > YARN-7157.003.patch, YARN-7157.004.patch, YARN-7157.005.patch > > > A user who is accessing a secure cluster via a secure UI should be able to > see only his/her own apps. > This feature will not break any compatibility as it will be turned off by default -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Resolved] (YARN-7188) TimelineSchemaCreator fails to create flowrun table
[ https://issues.apache.org/jira/browse/YARN-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S resolved YARN-7188. - Resolution: Not A Problem Thanks [~vrushalic] for pointing out YARN-6094. I also see that the doc section _Enable the coprocessor_ clearly describes flow run co-processor table creation. Closing as Not a problem! Apologies for missing this point earlier and spamming!! > TimelineSchemaCreator fails to create flowrun table > --- > > Key: YARN-7188 > URL: https://issues.apache.org/jira/browse/YARN-7188 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Vrushali C > > In hbase-1.2.6, which is the default, TimelineSchemaCreator fails to create > flow run table. > {noformat} > 2017-09-13 15:15:54,934 ERROR storage.TimelineSchemaCreator: Error in > creating hbase tables: > org.apache.hadoop.hbase.DoNotRetryIOException: > org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: > /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set > hbase.table.sanity.checks to false at conf or table descriptor if you want to > bypass sanity checks > at > org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754) > at > org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615) > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541) > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): > org.apache.hadoop.hbase.DoNotRetryIOException: File does not exist: > /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar Set > hbase.table.sanity.checks to false at > {noformat} > This is because the coprocessor jar is always expected to be present in the hdfs > location. 
By default it is configured to > {code} > public static final String FLOW_RUN_COPROCESSOR_JAR_HDFS_LOCATION = > TIMELINE_SERVICE_PREFIX > + "hbase.coprocessor.jar.hdfs.location"; > /** default hdfs location for flowrun coprocessor jar. */ > public static final String DEFAULT_HDFS_LOCATION_FLOW_RUN_COPROCESSOR_JAR = > "/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar"; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
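The resolution above points to the documented setup step: the flow run coprocessor jar must actually exist at the configured HDFS path before table creation. A hedged sketch of overriding the default location in yarn-site.xml — the property name is assembled from the constants quoted above, and the path shown is only illustrative:

```xml
<!-- yarn-site.xml: where the flow run coprocessor jar is expected in HDFS.
     Default (per the constants above):
     /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar -->
<property>
  <name>yarn.timeline-service.hbase.coprocessor.jar.hdfs.location</name>
  <value>/custom/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar</value>
</property>
```

Alternatively, copy the jar to the default HDFS path before running the schema creator, as the _Enable the coprocessor_ documentation section describes.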
[jira] [Commented] (YARN-7146) Many RM unit tests failing with FairScheduler
[ https://issues.apache.org/jira/browse/YARN-7146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165044#comment-16165044 ] Daniel Templeton commented on YARN-7146: LGTM +1, feel free to commit. I won't be able to get to it until Thursday, probably. > Many RM unit tests failing with FairScheduler > - > > Key: YARN-7146 > URL: https://issues.apache.org/jira/browse/YARN-7146 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-beta1 >Reporter: Robert Kanter >Assignee: Robert Kanter > Attachments: YARN-7146.001.patch, YARN-7146.002.patch, > YARN-7146.003.patch, YARN-7146.004.patch > > > Many of the RM unit tests are failing when using the FairScheduler. > Here is a list of affected test classes: > {noformat} > TestYarnClient > TestApplicationCleanup > TestApplicationMasterLauncher > TestDecommissioningNodesWatcher > TestKillApplicationWithRMHA > TestNodeBlacklistingOnAMFailures > TestRM > TestRMAdminService > TestRMRestart > TestResourceTrackerService > TestWorkPreservingRMRestart > TestAMRMRPCNodeUpdates > TestAMRMRPCResponseId > TestAMRestart > TestApplicationLifetimeMonitor > TestNodesListManager > TestRMContainerImpl > TestAbstractYarnScheduler > TestSchedulerUtils > TestFairOrderingPolicy > TestAMRMTokens > TestDelegationTokenRenewer > {noformat} > Most of the test methods in these classes are failing, though some do succeed. > There are two main categories of issues: > # The test submits an application to the {{MockRM}} and waits for it to enter > a specific state, which it never does, and the test times out. We need to > call {{update()}} on the scheduler. > # The test throws a {{ClassCastException}} casting {{FSQueueMetrics}} to > {{CSQueueMetrics}}. This is because {{QueueMetrics}} metrics are static, and > a previous test using FairScheduler initialized it, and the current test is > using CapacityScheduler. We need to reset the metrics. 
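The second failure category above (static {{QueueMetrics}} state leaking between tests) can be illustrated with a minimal, self-contained Java sketch. The class and method names here are hypothetical stand-ins, not Hadoop's actual metrics classes:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a static metrics registry like QueueMetrics.
class MetricsRegistry {
    private static final Map<String, Object> METRICS = new HashMap<>();

    static Object forQueue(String queue, Object metrics) {
        // Returns the previously registered instance if one exists -- this is
        // how a FairScheduler-created FSQueueMetrics can leak into a later
        // CapacityScheduler test, which then fails casting it to CSQueueMetrics.
        return METRICS.computeIfAbsent(queue, q -> metrics);
    }

    // The remedy described above: reset the static state between tests.
    static void clearMetrics() {
        METRICS.clear();
    }
}

public class StaticMetricsDemo {
    public static void main(String[] args) {
        String fair = "FSQueueMetrics";
        String cs = "CSQueueMetrics";

        // A "FairScheduler test" registers its metrics first.
        Object first = MetricsRegistry.forQueue("root", fair);
        // A later "CapacityScheduler test" gets the stale instance back.
        Object second = MetricsRegistry.forQueue("root", cs);
        System.out.println(first == second);   // true: stale instance reused

        // After clearing, the new test sees its own instance.
        MetricsRegistry.clearMetrics();
        Object third = MetricsRegistry.forQueue("root", cs);
        System.out.println(third.equals(cs));  // true
    }
}
```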
[jira] [Updated] (YARN-7157) Add admin configuration to filter per-user's apps in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-7157: Summary: Add admin configuration to filter per-user's apps in secure cluster (was: Support displaying per-user's apps in RM UI page and in secure cluster) > Add admin configuration to filter per-user's apps in secure cluster > --- > > Key: YARN-7157 > URL: https://issues.apache.org/jira/browse/YARN-7157 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7157.001.patch, YARN-7157.002.patch, > YARN-7157.003.patch, YARN-7157.004.patch, YARN-7157.005.patch > > > A user who is accessing a secure cluster via a secure UI should be able to > see only his/her own apps. > This feature will not break any compatibility as it will turned off by default -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7157) Add admin configuration to filter per-user's apps in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165062#comment-16165062 ] Rohith Sharma K S commented on YARN-7157: - Committed to trunk alone! Cherry-picking to branch-3 and branch-2 is failing. [~sunilg] Can you provide a branch-3/branch-2 patch for the same? > Add admin configuration to filter per-user's apps in secure cluster > --- > > Key: YARN-7157 > URL: https://issues.apache.org/jira/browse/YARN-7157 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7157.001.patch, YARN-7157.002.patch, > YARN-7157.003.patch, YARN-7157.004.patch, YARN-7157.005.patch > > > A user who is accessing a secure cluster via a secure UI should be able to > see only his/her own apps. > This feature will not break any compatibility as it will be turned off by default -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups
[ https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165065#comment-16165065 ] Wangda Tan commented on YARN-6852: -- [~tangzhankun], Thanks for adding refs to the ongoing K8S proposals. I just quickly read both proposals; to me, the hw-accelerator looks like a long-term goal that can be done 1-2 years later. IMHO, the usage of hw-accelerators on such platforms (K8S/YARN) is still in an early phase; people are trying to move some workloads from bare-metal or HPC to these platforms. It becomes an important requirement once more workloads that need GPU/FPGA land. We can either do some non-intrusive changes like adding node attributes for device types / versions, or more comprehensive changes to support topology, etc. To me the first option is straightforward; the 2nd option is not only a challenge for device isolation, it also changes how applications ask for resources and how the scheduler deals with asks. The K8S proposal to solve the scheduling problem looks too simple to me; it won't meet YARN's scheduling performance requirements. The device manager will be a nice-to-have feature; I will think more about it while working on YARN-6620. The K8S proposal is very flexible for adding new resource types, but it is also very heavyweight. For example, different resource plugins need to implement their own logic to store state, etc. And managing plugins might be a challenge for today's YARN. 
> [YARN-6223] Native code changes to support isolate GPU devices by using > CGroups > --- > > Key: YARN-6852 > URL: https://issues.apache.org/jira/browse/YARN-6852 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: 3.0.0-beta1 > > Attachments: YARN-6852.001.patch, YARN-6852.002.patch, > YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch, > YARN-6852.006.patch, YARN-6852.007.patch, YARN-6852.008.patch, > YARN-6852.009.patch > > > This JIRA plan to add support of: > 1) Isolation in CGroups. (native side). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7157) Add admin configuration to filter per-user's apps in secure cluster
[ https://issues.apache.org/jira/browse/YARN-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165093#comment-16165093 ] Hudson commented on YARN-7157: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12861 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12861/]) YARN-7157. Add admin configuration to filter per-user's apps in secure (rohithsharmaks: rev 5324388cf2357b1f80efd0c34392f577bf417455) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml > Add admin configuration to filter per-user's apps in secure cluster > --- > > Key: YARN-7157 > URL: https://issues.apache.org/jira/browse/YARN-7157 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7157.001.patch, YARN-7157.002.patch, > YARN-7157.003.patch, YARN-7157.004.patch, YARN-7157.005.patch > > > A user who is accessing a secure cluster via a secure UI should be able to > see only his/her own apps. > This feature will not break any compatibility as it will turned off by default -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups
[ https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165119#comment-16165119 ] Devaraj K commented on YARN-6620: - Thanks [~leftnoteasy] for the responses. bq. My understanding of JAXBContext is mostly used when we need to convert between object and XML/JSON. Since output of nvidia-smi is a customized XML format, which doesn't follow JAXB standard. Is it still best practice to use JAXBContext under such use case? For example, FairScheduler parses XML file directly: AllocationFileLoaderService#reloadAllocations. JAXBContext can be used for any XML format; it doesn't have to be in any specific format. I could see that the sample format in the patch can be converted to a Java object, so that we can eliminate the traversing and parsing logic in GpuDeviceInformationParser.java. bq. I considered this option before, unless there's strong need for this to run different command or call Nvidia native APIs directly, I would prefer to hard code to use nvidia-smi instead of introducing another abstraction layer. I'm open to do refactoring to support this case once we have such requirements. I think it would be useful if users have symlinks created with names different from the hard-coded name. I feel we don't have to add a new configuration for the executable; instead, we can have the binary name also as part of DEFAULT_NM_GPU_PATH_TO_EXEC, and users can provide the path with the executable name for the configuration 'yarn.nodemanager.resource.gpu.path-to-executables'. 
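The second suggestion above can be sketched as a configuration change. This is a hypothetical illustration of the proposal still under discussion — the property name is quoted from the comment, while the path and symlink name are made up:

```xml
<!-- Hypothetical sketch of the suggestion above: the value carries the full
     path *including* the binary name, so a differently named symlink works
     without adding a separate executable-name property. -->
<property>
  <name>yarn.nodemanager.resource.gpu.path-to-executables</name>
  <value>/usr/local/bin/nvidia-smi-375</value>
</property>
```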
> [YARN-6223] NM Java side code changes to support isolate GPU devices by using > CGroups > - > > Key: YARN-6620 > URL: https://issues.apache.org/jira/browse/YARN-6620 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-6620.001.patch, YARN-6620.002.patch, > YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch > > > This JIRA plan to add support of: > 1) GPU configuration for NodeManagers > 2) Isolation in CGroups. (Java side). > 3) NM restart and recovery allocated GPU devices -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups
[ https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165134#comment-16165134 ] Wangda Tan commented on YARN-6620: -- Thanks [~devaraj.k] for the additional explanations; makes sense to me. I will update the patch to address the above two comments. > [YARN-6223] NM Java side code changes to support isolate GPU devices by using > CGroups > - > > Key: YARN-6620 > URL: https://issues.apache.org/jira/browse/YARN-6620 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-6620.001.patch, YARN-6620.002.patch, > YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch > > > This JIRA plan to add support of: > 1) GPU configuration for NodeManagers > 2) Isolation in CGroups. (Java side). > 3) NM restart and recovery allocated GPU devices -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4727) Unable to override the $HADOOP_CONF_DIR env variable for container
[ https://issues.apache.org/jira/browse/YARN-4727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165184#comment-16165184 ] Hudson commented on YARN-4727: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12862 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12862/]) YARN-4727. Unable to override the /home/ericp/run/conf/ env variable for (epayne: rev 729d05f5293acf63e7e4aa3bfbf29b999c9a2906) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java > Unable to override the $HADOOP_CONF_DIR env variable for container > -- > > Key: YARN-4727 > URL: https://issues.apache.org/jira/browse/YARN-4727 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.4.1, 2.5.2, 2.7.2, 2.6.4, 2.8.1 >Reporter: Terence Yim >Assignee: Jason Lowe > Attachments: YARN-4727.001.patch, YARN-4727.002.patch > > > Given the default config of "yarn.nodemanager.env-whitelist", application > should be able to set the env variable $HADOOP_CONF_DIR to value other than > the one in the NodeManager system environment. However, I believe due to a > bug in the > {{org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch}} > class, it is not possible so. 
> From the {{sanitizeEnv()}} method in the ContainerLaunch class > (https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L977) > {noformat} > putEnvIfNotNull(environment, > Environment.HADOOP_CONF_DIR.name(), > System.getenv(Environment.HADOOP_CONF_DIR.name()) > ); > if (!Shell.WINDOWS) { > environment.put("JVM_PID", "$$"); > } > String[] whitelist = conf.get(YarnConfiguration.NM_ENV_WHITELIST, > YarnConfiguration.DEFAULT_NM_ENV_WHITELIST).split(","); > > for(String whitelistEnvVariable : whitelist) { > putEnvIfAbsent(environment, whitelistEnvVariable.trim()); > } > ... > private static void putEnvIfAbsent( > Map environment, String variable) { > if (environment.get(variable) == null) { > putEnvIfNotNull(environment, variable, System.getenv(variable)); > } > } > {noformat} > So there two issues here. > 1. the environment is already set with the system environment of the NM in > the {{putEnvIfNotNull}} call, hence the {{putEnvIfAbsent}} call will never > set it to some new value > 2. Inside the {{putEnvIfAbsent}} call, it uses the system environment of the > NM, which it should be using the one from the {{launchContext}} instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
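The two issues quoted above can be reproduced in isolation. Below is a minimal, self-contained Java sketch in which plain maps stand in for the NM's system environment and the container's {{launchContext}} environment; it is illustrative of the reported behavior and one possible fix, not Hadoop's actual ContainerLaunch code:

```java
import java.util.HashMap;
import java.util.Map;

public class EnvWhitelistDemo {

    // Stand-in for System.getenv() as seen by the NodeManager process.
    static Map<String, String> nmSystemEnv = new HashMap<>();

    static void putEnvIfNotNull(Map<String, String> env, String var, String value) {
        if (value != null) {
            env.put(var, value);
        }
    }

    // Issue 2 as reported: the helper consults only the NM's environment
    // and never looks at what the application supplied.
    static void putEnvIfAbsentBuggy(Map<String, String> env, String var) {
        if (env.get(var) == null) {
            putEnvIfNotNull(env, var, nmSystemEnv.get(var));
        }
    }

    // One possible fix: prefer the launch-context value, fall back to the NM's.
    static void putEnvIfAbsentFixed(Map<String, String> env, String var,
                                    Map<String, String> launchContextEnv) {
        if (env.get(var) == null) {
            String value = launchContextEnv.get(var);
            if (value == null) {
                value = nmSystemEnv.get(var);
            }
            putEnvIfNotNull(env, var, value);
        }
    }

    public static void main(String[] args) {
        nmSystemEnv.put("HADOOP_CONF_DIR", "/etc/hadoop/conf");
        Map<String, String> launchContextEnv = new HashMap<>();
        launchContextEnv.put("HADOOP_CONF_DIR", "/app/conf"); // app's override

        // Issue 1 as reported: sanitizeEnv() pre-populates the variable from
        // the NM environment, so the whitelist pass sees it as already set.
        Map<String, String> env = new HashMap<>();
        putEnvIfNotNull(env, "HADOOP_CONF_DIR", nmSystemEnv.get("HADOOP_CONF_DIR"));
        putEnvIfAbsentBuggy(env, "HADOOP_CONF_DIR");
        System.out.println(env.get("HADOOP_CONF_DIR")); // /etc/hadoop/conf -- override lost

        // Without the pre-populate and with the fixed helper, the app's value wins.
        Map<String, String> env2 = new HashMap<>();
        putEnvIfAbsentFixed(env2, "HADOOP_CONF_DIR", launchContextEnv);
        System.out.println(env2.get("HADOOP_CONF_DIR")); // /app/conf
    }
}
```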
[jira] [Commented] (YARN-4727) Unable to override the $HADOOP_CONF_DIR env variable for container
[ https://issues.apache.org/jira/browse/YARN-4727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165194#comment-16165194 ] Eric Payne commented on YARN-4727: -- +1 Thanks [~jlowe] > Unable to override the $HADOOP_CONF_DIR env variable for container > -- > > Key: YARN-4727 > URL: https://issues.apache.org/jira/browse/YARN-4727 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.4.1, 2.5.2, 2.7.2, 2.6.4, 2.8.1 >Reporter: Terence Yim >Assignee: Jason Lowe > Attachments: YARN-4727.001.patch, YARN-4727.002.patch > > > Given the default config of "yarn.nodemanager.env-whitelist", application > should be able to set the env variable $HADOOP_CONF_DIR to value other than > the one in the NodeManager system environment. However, I believe due to a > bug in the > {{org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch}} > class, it is not possible so. > From the {{sanitizeEnv()}} method in the ContainerLaunch class > (https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L977) > {noformat} > putEnvIfNotNull(environment, > Environment.HADOOP_CONF_DIR.name(), > System.getenv(Environment.HADOOP_CONF_DIR.name()) > ); > if (!Shell.WINDOWS) { > environment.put("JVM_PID", "$$"); > } > String[] whitelist = conf.get(YarnConfiguration.NM_ENV_WHITELIST, > YarnConfiguration.DEFAULT_NM_ENV_WHITELIST).split(","); > > for(String whitelistEnvVariable : whitelist) { > putEnvIfAbsent(environment, whitelistEnvVariable.trim()); > } > ... > private static void putEnvIfAbsent( > Map environment, String variable) { > if (environment.get(variable) == null) { > putEnvIfNotNull(environment, variable, System.getenv(variable)); > } > } > {noformat} > So there two issues here. > 1. 
the environment is already set with the system environment of the NM in > the {{putEnvIfNotNull}} call, hence the {{putEnvIfAbsent}} call will never > set it to some new value > 2. Inside the {{putEnvIfAbsent}} call, it uses the system environment of the > NM, which it should be using the one from the {{launchContext}} instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4727) Unable to override the $HADOOP_CONF_DIR env variable for container
[ https://issues.apache.org/jira/browse/YARN-4727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165211#comment-16165211 ] Hudson commented on YARN-4727: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12863 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12863/]) Revert 'YARN-4727. Unable to override the $HADOOP_CONF_DIR env variable (epayne: rev a3c44195bed724c02bb76859fe2690d6a9e8f2e9) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java YARN-4727. Unable to override the $HADOOP_CONF_DIR env variable for (epayne: rev 3860be7961580ac20dd505d665b580f0a04ac4f8) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java > Unable to override the $HADOOP_CONF_DIR env variable for container > -- > > Key: YARN-4727 > URL: https://issues.apache.org/jira/browse/YARN-4727 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.4.1, 2.5.2, 2.7.2, 2.6.4, 2.8.1 >Reporter: Terence Yim >Assignee: Jason Lowe > Attachments: YARN-4727.001.patch, YARN-4727.002.patch > > > Given the default config of "yarn.nodemanager.env-whitelist", application > should be able to set the env variable $HADOOP_CONF_DIR to value other than > the one in the NodeManager system environment. 
However, I believe due to a > bug in the > {{org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch}} > class, it is not possible so. > From the {{sanitizeEnv()}} method in the ContainerLaunch class > (https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L977) > {noformat} > putEnvIfNotNull(environment, > Environment.HADOOP_CONF_DIR.name(), > System.getenv(Environment.HADOOP_CONF_DIR.name()) > ); > if (!Shell.WINDOWS) { > environment.put("JVM_PID", "$$"); > } > String[] whitelist = conf.get(YarnConfiguration.NM_ENV_WHITELIST, > YarnConfiguration.DEFAULT_NM_ENV_WHITELIST).split(","); > > for(String whitelistEnvVariable : whitelist) { > putEnvIfAbsent(environment, whitelistEnvVariable.trim()); > } > ... > private static void putEnvIfAbsent( > Map environment, String variable) { > if (environment.get(variable) == null) { > putEnvIfNotNull(environment, variable, System.getenv(variable)); > } > } > {noformat} > So there two issues here. > 1. the environment is already set with the system environment of the NM in > the {{putEnvIfNotNull}} call, hence the {{putEnvIfAbsent}} call will never > set it to some new value > 2. Inside the {{putEnvIfAbsent}} call, it uses the system environment of the > NM, which it should be using the one from the {{launchContext}} instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7191) Improve yarn-service documentation
Jian He created YARN-7191: - Summary: Improve yarn-service documentation Key: YARN-7191 URL: https://issues.apache.org/jira/browse/YARN-7191 Project: Hadoop YARN Issue Type: Sub-task Reporter: Jian He Assignee: Jian He -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7191) Improve yarn-service documentation
[ https://issues.apache.org/jira/browse/YARN-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165232#comment-16165232 ] Jian He commented on YARN-7191: --- Below comments are from [~aw], thank you for giving suggestions. I'll address them in this jira. {code} Somewhat. Greatly improved, but there’s still way too much “we’re working on this” and “here’s a link to a JIRA” and just general brokenness going on. Here’s some examples from concepts. Concepts! The document I’d expect to give me very basic “when we talk about X, we mean Y” definitions: "A host of scheduling features are being developed to support long running services.” Yeah, ok? How is this a concept? or "[YARN-3998](https://issues.apache.org/jira/browse/YARN-3998) implements a retry-policy to let NM re-launch a service container when it fails.” The patch itself went through nine revisions and a long discussion. Would an end user care about the details in that JIRA? If the answer to the last question is YES, then the documentation has failed. The whole point of documentation is so they don’t have to go digging into the details of the implementation, the decision process that got us there, etc. If they care enough about the details, they’ll run through the changelog and click on the JIRA link there. If the summary line of the changelog isn’t obvious, well… then we need better summaries. etc, etc. ... The sleep example is nice. Now, let’s see a non-toy example: multiple instances of Apache httpd or MariaDB or something real and not from the Hadoop echo chamber (e.g., non-JVM-based). If this is for “native” services, this shouldn’t be a problem, right? Give a real example and users will buy what you’re selling. I also think writing the docs and providing an example of doing something big and outside the team’s comfort zone will clarify where end users are going to need more help than what’s being provided. 
Getting a MariaDB instance or three up will help tremendously here. Which reminds me: something the documentation doesn’t cover is storage. What happens to it, where does it come from, etc, etc. That’s an important detail that I didn’t see covered. (I may have missed it.) … Why are there directions to enable other, partially unrelated services in here? Shouldn’t there be pointers to their specific documentation? Is the expectation that if the requirements for those other services change that contributors will need to update multiple documents? "Start the DNS server” Just… yikes. a) yarn classname … This is not how we do user-facing things. The fact it’s not really possible for a *daemon* to be put in the YarnCommands.md doc should be a giant red flag that something isn’t going correctly here. b) no jsvc support for something that it’s strongly hinted at wanting to run privileged = an instant -1 for failing basic security practices. There’s zero reason for it to be running continually as root. c) If this would have been hooked into the shell scripts appropriately, logs, user switching, etc would have been had for free. d) Where’s stop? Right. Since it’s outside the scripts, there is no pid support so one has to do all of that manually…. Given: "3. Supports reverse lookups (name based on IP). Note, this works only for Docker containers.” then: "It should not be used as a fully-functional corporate DNS.” Scratch corporate. It’s not a fully functional DNS server if it can’t do reverse lookups. (Which, ironically, means it’s not suitable for use with Apache Hadoop, given it requires both fwd and rev DNS ...) 
{code} > Improve yarn-service documentation > -- > > Key: YARN-7191 > URL: https://issues.apache.org/jira/browse/YARN-7191 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6384) Add configuration to set max cpu usage when strict-resource-usage is false with cgroups
[ https://issues.apache.org/jira/browse/YARN-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165235#comment-16165235 ] Miklos Szegedi commented on YARN-6384: -- [~lolee_k], CGROUP_CPU_PERIOD_US should never be set in non-strict mode. In strict mode, you could restrict to more than the vcores defaulting to 100%. > Add configuration to set max cpu usage when strict-resource-usage is false > with cgroups > -- > > Key: YARN-6384 > URL: https://issues.apache.org/jira/browse/YARN-6384 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: dengkai > Attachments: YARN-6384-0.patch, YARN-6384-1.patch, YARN-6384-2.patch > > > When using cgroups on yarn, if > yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage is > false, a user may get much more cpu time than expected based on the vcores. > There should be an upper limit even when resource-usage is not strict, such as > a percentage by which a user can exceed what is promised by the vcores. I think it's > important in a shared cluster. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
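For context, the toggle the discussion revolves around is set in yarn-site.xml. Under CGroups, strict mode caps each container's CPU at its vcore share (via the CFS quota/period), while non-strict mode only guarantees a minimum share and lets containers use idle CPU beyond their vcores — the behavior this JIRA wants to bound with a configurable percentage. A sketch of the existing property (the proposed percentage cap is still only in the attached patches):

```xml
<!-- When true, CGroups caps each container's CPU at its vcore share.
     When false (the default), containers may consume idle CPU beyond
     their vcores, with no upper bound today. -->
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
  <value>true</value>
</property>
```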
[jira] [Updated] (YARN-7054) Yarn Service Phase 2
[ https://issues.apache.org/jira/browse/YARN-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-7054: -- Summary: Yarn Service Phase 2 (was: Yarn Native Service Phase 2) > Yarn Service Phase 2 > > > Key: YARN-7054 > URL: https://issues.apache.org/jira/browse/YARN-7054 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Jian He > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6570) No logs were found for running application, running container
[ https://issues.apache.org/jira/browse/YARN-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated YARN-6570: - Attachment: YARN-6570-v2.patch > No logs were found for running application, running container > - > > Key: YARN-6570 > URL: https://issues.apache.org/jira/browse/YARN-6570 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Sumana Sathish >Assignee: Junping Du >Priority: Critical > Attachments: YARN-6570.poc.patch, YARN-6570-v2.patch > > > 1. Obtain running containers from the following CLI for a running application: > yarn container -list appattempt > 2. Could not fetch logs > {code} > Can not find any log file matching the pattern: ALL for the container > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6570) No logs were found for running application, running container
[ https://issues.apache.org/jira/browse/YARN-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165266#comment-16165266 ] Junping Du commented on YARN-6570: -- Fix with a simple unit test. > No logs were found for running application, running container > - > > Key: YARN-6570 > URL: https://issues.apache.org/jira/browse/YARN-6570 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Sumana Sathish >Assignee: Junping Du >Priority: Critical > Attachments: YARN-6570.poc.patch, YARN-6570-v2.patch > > > 1. Obtain running containers from the following CLI for a running application: > yarn container -list appattempt > 2. Could not fetch logs > {code} > Can not find any log file matching the pattern: ALL for the container > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7149) Cross-queue preemption sometimes starves an underserved queue
[ https://issues.apache.org/jira/browse/YARN-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-7149: - Attachment: YARN-7149.001.patch Rather than use this JIRA to revert the {{computeUserLimit}} behavior to pre-YARN-5889, patch {{YARN-7149.001.patch}} just adds {{minimumAllocation (min container size)}} to {{resourceUsed}}. I see this as a compromise between the old and the new behavior. Please let me know your thoughts. > Cross-queue preemption sometimes starves an underserved queue > - > > Key: YARN-7149 > URL: https://issues.apache.org/jira/browse/YARN-7149 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.9.0, 3.0.0-alpha3 >Reporter: Eric Payne >Assignee: Eric Payne > Attachments: YARN-7149.001.patch, YARN-7149.demo.unit-test.patch > > > In branch 2 and trunk, I am consistently seeing some use cases where > cross-queue preemption does not happen when it should. I do not see this in > branch-2.8. > Use Case: > | | *Size* | *Minimum Container Size* | > |MyCluster | 20 GB | 0.5 GB | > | *Queue Name* | *Capacity* | *Absolute Capacity* | *Minimum User Limit > Percent (MULP)* | *User Limit Factor (ULF)* | > |Q1 | 50% = 10 GB | 100% = 20 GB | 10% = 1 GB | 2.0 | > |Q2 | 50% = 10 GB | 100% = 20 GB | 10% = 1 GB | 2.0 | > - {{User1}} launches {{App1}} in {{Q1}} and consumes all resources (20 GB) > - {{User2}} launches {{App2}} in {{Q2}} and requests 10 GB > - _Note: containers are 0.5 GB._ > - Preemption monitor kills 2 containers (equals 1 GB) from {{App1}} in {{Q1}}. > - Capacity Scheduler assigns 2 containers (equals 1 GB) to {{App2}} in {{Q2}}. > - _No more containers are ever preempted, even though {{Q2}} is far > underserved_ -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
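The compromise described above (adding the minimum allocation to the used resource when computing the user limit) can be sketched roughly as follows. This is a simplified model of the user-limit computation with hypothetical names, not the actual CapacityScheduler code; the real computeUserLimit considers required resources, weights, and resource calculators as well.

```java
// Hedged sketch of the idea behind YARN-7149.001.patch: adding one
// minimum allocation to the consumed resource keeps the computed user
// limit a container ahead of actual usage, so preemption is not starved
// by rounding at the boundary. Mirrors the shape, not the exact code,
// of CapacityScheduler's computeUserLimit.
public class UserLimitSketch {
    static long userLimitMb(long usedMb, long minAllocMb,
                            long queueCapacityMb, int minUserLimitPct,
                            int activeUsers) {
        // Pre-patch behaviour would use usedMb directly; the patch adds
        // one minimum allocation (min container size) of headroom.
        long consumed = usedMb + minAllocMb;
        long currentCapacity = Math.max(consumed, queueCapacityMb);
        long perUser = currentCapacity / activeUsers;
        long mulpFloor = queueCapacityMb * minUserLimitPct / 100;
        return Math.max(perUser, mulpFloor);
    }
}
```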
[jira] [Updated] (YARN-4933) Evaluate parent-slave DNS options to assess deployment options for DNS service
[ https://issues.apache.org/jira/browse/YARN-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-4933: -- Component/s: yarn-native-services > Evaluate parent-slave DNS options to assess deployment options for DNS service > -- > > Key: YARN-4933 > URL: https://issues.apache.org/jira/browse/YARN-4933 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Reporter: Jonathan Maron >Assignee: Jonathan Maron > > Comments on YARN-4757 indicate that, in addition to the primary server to > YARN DNS service zone request forwarding implementation currently suggested, > it may be appropriate to also offer the ability to configure the DNS service > as a master server that can support zone transfers to slaves. Some other > features that are related and should be examined are: > - DNS NOTIFY > - AXFR > - IXFR -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6269) Evaluate SLIDER-1185 (container/application diagnostics) wrt ATSv2 and Yarn Logging enhancements
[ https://issues.apache.org/jira/browse/YARN-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-6269: Summary: Evaluate SLIDER-1185 (container/application diagnostics) wrt ATSv2 and Yarn Logging enhancements (was: Pull into native services SLIDER-1185 - container/application diagnostics for enhanced debugging) > Evaluate SLIDER-1185 (container/application diagnostics) wrt ATSv2 and Yarn > Logging enhancements > > > Key: YARN-6269 > URL: https://issues.apache.org/jira/browse/YARN-6269 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha > Fix For: yarn-native-services > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5817) Make yarn.cmd changes required for slider and servicesapi
[ https://issues.apache.org/jira/browse/YARN-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-5817: -- Component/s: yarn-native-services > Make yarn.cmd changes required for slider and servicesapi > - > > Key: YARN-5817 > URL: https://issues.apache.org/jira/browse/YARN-5817 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Reporter: Gour Saha > Fix For: yarn-native-services > > > As per YARN-5808 and other changes made to yarn script, there are probably > some corresponding changes required in > _hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd_. We need to identify and make > those changes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5817) Make yarn.cmd changes required for slider and servicesapi
[ https://issues.apache.org/jira/browse/YARN-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-5817: -- Issue Type: Bug (was: Sub-task) Parent: (was: YARN-7054) > Make yarn.cmd changes required for slider and servicesapi > - > > Key: YARN-5817 > URL: https://issues.apache.org/jira/browse/YARN-5817 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-native-services >Reporter: Gour Saha > Fix For: yarn-native-services > > > As per YARN-5808 and other changes made to yarn script, there are probably > some corresponding changes required in > _hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd_. We need to identify and make > those changes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6161) YARN support for port allocation
[ https://issues.apache.org/jira/browse/YARN-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6161: -- Component/s: yarn-native-services > YARN support for port allocation > > > Key: YARN-6161 > URL: https://issues.apache.org/jira/browse/YARN-6161 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Reporter: Billie Rinaldi > Fix For: yarn-native-services > > > Since there is no agent code in YARN native services, we need another > mechanism for allocating ports to containers. This is not necessary when > running Docker containers, but it will become important when an agent-less > docker-less provider is introduced. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7162) Remove XML excludes file format
[ https://issues.apache.org/jira/browse/YARN-7162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165369#comment-16165369 ] Grant Sohn commented on YARN-7162: -- +1 (non-binding) > Remove XML excludes file format > --- > > Key: YARN-7162 > URL: https://issues.apache.org/jira/browse/YARN-7162 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Affects Versions: 2.9.0, 3.0.0-beta1 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-7162.001.patch, YARN-7162.branch-2.001.patch > > > YARN-5536 aims to replace the XML format for the excludes file with a JSON > format. However, it looks like we won't have time for that for Hadoop 3 Beta > 1. The concern is that if we release it as-is, we'll now have to support the > XML format as-is for all of Hadoop 3.x, which we're either planning on > removing, or rewriting using a pluggable framework. > [This comment in > YARN-5536|https://issues.apache.org/jira/browse/YARN-5536?focusedCommentId=16126194&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16126194] > proposed two quick solutions to prevent this compat issue. In this JIRA, > we're going to remove the XML format. If we later want to add it back in, > YARN-5536 can add it back, rewriting it to be in the pluggable framework. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6023) Allow multiple IPs in native services container ServiceRecord
[ https://issues.apache.org/jira/browse/YARN-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6023: -- Issue Type: Bug (was: Sub-task) Parent: (was: YARN-7054) > Allow multiple IPs in native services container ServiceRecord > - > > Key: YARN-6023 > URL: https://issues.apache.org/jira/browse/YARN-6023 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Billie Rinaldi > > Currently ProviderUtils.updateServiceRecord sets a single IP as "yarn:ip" in > the ServiceRecord, and ignores any additional IPs. The Registry DNS > implementation in the YARN-4757 feature branch reads the "yarn:ip" and uses > it to create a DNS record. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6161) YARN support for port allocation
[ https://issues.apache.org/jira/browse/YARN-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6161: -- Issue Type: Bug (was: Sub-task) Parent: (was: YARN-7054) > YARN support for port allocation > > > Key: YARN-6161 > URL: https://issues.apache.org/jira/browse/YARN-6161 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-native-services >Reporter: Billie Rinaldi > Fix For: yarn-native-services > > > Since there is no agent code in YARN native services, we need another > mechanism for allocating ports to containers. This is not necessary when > running Docker containers, but it will become important when an agent-less > docker-less provider is introduced. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6391) Support specifying extra options from yarn-native-service CLI
[ https://issues.apache.org/jira/browse/YARN-6391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6391: -- Issue Type: Bug (was: Sub-task) Parent: (was: YARN-7054) > Support specifying extra options from yarn-native-service CLI > - > > Key: YARN-6391 > URL: https://issues.apache.org/jira/browse/YARN-6391 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-native-services >Reporter: Jian He > > The CLI has been changed to take the same json input spec as YARN-4692. > We should also have a way to allow for substituting individual field of the > json spec file from CLI. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-4933) Evaluate parent-slave DNS options to assess deployment options for DNS service
[ https://issues.apache.org/jira/browse/YARN-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-4933: -- Issue Type: Bug (was: Sub-task) Parent: (was: YARN-7054) > Evaluate parent-slave DNS options to assess deployment options for DNS service > -- > > Key: YARN-4933 > URL: https://issues.apache.org/jira/browse/YARN-4933 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-native-services >Reporter: Jonathan Maron >Assignee: Jonathan Maron > > Comments on YARN-4757 indicate that, in addition to the primary server to > YARN DNS service zone request forwarding implementation currently suggested, > it may be appropriate to also offer the ability to configure the DNS service > as a master server that can support zone transfers to slaves. Some other > features that are related and should be examined are: > - DNS NOTIFY > - AXFR > - IXFR -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6570) No logs were found for running application, running container
[ https://issues.apache.org/jira/browse/YARN-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165380#comment-16165380 ] Hadoop QA commented on YARN-6570: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 27m 58s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 34s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 32s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 2 new + 150 unchanged - 0 fixed = 152 total (was 150) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 24s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.TestEventFlow | | | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | | | hadoop.yarn.server.nodemanager.TestNodeManagerResync | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | YARN-6570 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886952/YARN-6570-v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4dc5cc20bb4e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f153e60 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17439/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17439/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17439/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-serve
[jira] [Commented] (YARN-7149) Cross-queue preemption sometimes starves an underserved queue
[ https://issues.apache.org/jira/browse/YARN-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165379#comment-16165379 ] Hadoop QA commented on YARN-7149: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 52s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 33s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | | hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | YARN-7149 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886958/YARN-7149.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 783e8db84b62 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f153e60 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17440/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17440/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17440/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Cross-queue preemption sometimes starves an underserved queue > - > > Key: YARN-714
[jira] [Updated] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT
[ https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-7102: --- Attachment: YARN-7102.v4.patch > NM heartbeat stuck when responseId overflows MAX_INT > > > Key: YARN-7102 > URL: https://issues.apache.org/jira/browse/YARN-7102 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Critical > Attachments: YARN-7102.v1.patch, YARN-7102.v2.patch, > YARN-7102.v3.patch, YARN-7102.v4.patch > > > ResponseId overflow problem in NM-RM heartbeat. This is same as AM-RM > heartbeat in YARN-6640, please refer to YARN-6640 for details. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
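The overflow described above can be avoided with an explicit wrap-around, along the lines of the YARN-6640 approach for the AM-RM heartbeat. A minimal sketch with illustrative names (not the actual patch):

```java
// Hedged sketch of wrap-around-safe responseId handling. A naive
// lastResponseId + 1 overflows to a negative number at
// Integer.MAX_VALUE, which can leave the NM-RM heartbeat stuck.
public class ResponseIdSketch {
    /** Overflow-safe increment: MAX_VALUE wraps to 0, never to MIN_VALUE. */
    static int nextResponseId(int lastResponseId) {
        return lastResponseId == Integer.MAX_VALUE ? 0 : lastResponseId + 1;
    }

    /**
     * A heartbeat is a resend if it still carries the id the RM already
     * responded to; equality must be used rather than a < comparison,
     * which breaks once the id has wrapped.
     */
    static boolean isDuplicate(int heartbeatId, int lastRespondedId) {
        return heartbeatId == lastRespondedId;
    }
}
```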
[jira] [Commented] (YARN-7174) Add retry logic in LogsCLI when fetching running application logs
[ https://issues.apache.org/jira/browse/YARN-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165395#comment-16165395 ] Junping Du commented on YARN-7174: -- Thanks [~xgong] for reporting the issue and delivering a fix for it. I quickly went through the patch; a couple of comments: {noformat} +int maxRetries = 30; +long retryInterval = 1000; {noformat} Sounds like we are defining the default max retries and retry interval inline; it would be better to separate them out. It may be too heavy to define them in YarnConfiguration, but we should at least define them at the top of the class. After adding two options to the logs CLI, we should also update the yarn logs CLI documentation page (https://hadoop.apache.org/docs/r2.8.0/hadoop-yarn/hadoop-yarn-site/YarnCommands.html#logs). Looks like we removed the verification of the yarn logs CLI usage output. Any special reason for this? If not, we should add it back to guard against careless/unexpected changes to the usage output. > Add retry logic in LogsCLI when fetching running application logs > -- > > Key: YARN-7174 > URL: https://issues.apache.org/jira/browse/YARN-7174 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-7174.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
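The review suggestion above, hoisting the inline defaults into named class-level constants that drive the retry loop, could look roughly like this. Names and structure are illustrative, not the actual YARN-7174 patch:

```java
// Hedged sketch of the suggested refactor: retry defaults defined once
// at the top of the class instead of inline, driving a simple
// bounded-retry loop.
public class LogFetchRetrySketch {
    // Suggested class-level constants (hypothetical names).
    static final int DEFAULT_MAX_RETRIES = 30;
    static final long DEFAULT_RETRY_INTERVAL_MS = 1000L;

    /** One log-fetch attempt; returns true on success. */
    interface Fetch {
        boolean attempt();
    }

    /** Returns true if the fetch succeeded within the retry budget. */
    static boolean fetchWithRetries(Fetch fetch, int maxRetries,
                                    long intervalMs) {
        // maxRetries retries after the first attempt.
        for (int i = 0; i <= maxRetries; i++) {
            if (fetch.attempt()) {
                return true;
            }
            if (i < maxRetries) {
                try {
                    Thread.sleep(intervalMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false; // give up promptly if interrupted
                }
            }
        }
        return false;
    }
}
```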
[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT
[ https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165432#comment-16165432 ] Hadoop QA commented on YARN-7102: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} YARN-7102 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-7102 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886971/YARN-7102.v4.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17441/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > NM heartbeat stuck when responseId overflows MAX_INT > > > Key: YARN-7102 > URL: https://issues.apache.org/jira/browse/YARN-7102 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Critical > Attachments: YARN-7102.v1.patch, YARN-7102.v2.patch, > YARN-7102.v3.patch, YARN-7102.v4.patch > > > ResponseId overflow problem in NM-RM heartbeat. This is same as AM-RM > heartbeat in YARN-6640, please refer to YARN-6640 for details. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7146) Many RM unit tests failing with FairScheduler
[ https://issues.apache.org/jira/browse/YARN-7146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165450#comment-16165450 ] Hudson commented on YARN-7146: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12865 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12865/]) YARN-7146. Many RM unit tests failing with FairScheduler (rkanter) (rkanter: rev bb34ae955496c1aa595dc1186153d605a41f5378) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMReconnect.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/ParameterizedSchedulerTestBase.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulingWithAllocationRequestId.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestWorkPreservingRMRestart.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestReservationSystem.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java * (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestNodeBlacklistingOnAMFailures.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMAdminService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/policy/TestFairOrderingPolicy.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestClientToAMTokens.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java > Many RM unit tests failing with FairScheduler > - > > Key: YARN-7146 > URL: https://issues.apache.org/jira/browse/YARN-7146 > Project: Hadoop YARN > Issue Type: Bug > Components: 
test >Affects Versions: 3.0.0-beta1 >Reporter: Robert Kanter >Assignee: Robert Kanter > Fix For: 3.0.0-beta1, 3.1.0 > > Attachments: YARN-7146.001.patch, YARN-7146.002.patch, > YARN-7146.003.patch, YARN-7146.004.patch > > > Many of the RM unit tests are failing when using the FairScheduler. > Here is a list of affected test classes: > {noformat} > TestYarnClient > TestApplicationCleanup > TestApplicationMasterLauncher > TestDecommissioningNodesWatcher > TestKillApplicationWithRMHA > TestNodeBlacklistingOnAMFailures > TestRM > TestRMAdminService > TestRMRestart > TestResourceTrackerService > TestWorkPreservingRMRestart > TestAMRMRPCNodeUpdates > TestAMRMRPCResponseId > TestAMRestart > TestApplicationLifetimeMonitor > TestNodesListManager > TestRMContainerImpl > TestAbstractYarnScheduler > TestSchedulerUtils > Test
[jira] [Created] (YARN-7192) Add a pluggable StateMachine Listener that is notified of NM Container State changes
Arun Suresh created YARN-7192: - Summary: Add a pluggable StateMachine Listener that is notified of NM Container State changes Key: YARN-7192 URL: https://issues.apache.org/jira/browse/YARN-7192 Project: Hadoop YARN Issue Type: Bug Reporter: Arun Suresh Assignee: Arun Suresh This JIRA is to add support for a pluggable class in the NodeManager that is notified of changes to the Container StateMachine state and the events that caused the change. The proposal is to modify the basic StateMachine class to add support for a hook that is called before and after a transition. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
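The pre/post transition hook proposed above can be sketched roughly as follows. All names here (ListenableStateMachine, StateMachineListener, preTransition, postTransition) are illustrative assumptions, not the actual YARN-7192 API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a state machine with pluggable transition listeners.
// Names are assumptions for illustration, not the YARN-7192 patch code.
class ListenableStateMachine<STATE, EVENT> {

  interface StateMachineListener<STATE, EVENT> {
    void preTransition(STATE oldState, EVENT event);
    void postTransition(STATE oldState, STATE newState, EVENT event);
  }

  private STATE current;
  private final List<StateMachineListener<STATE, EVENT>> listeners =
      new ArrayList<>();

  ListenableStateMachine(STATE initial) {
    this.current = initial;
  }

  void register(StateMachineListener<STATE, EVENT> listener) {
    listeners.add(listener);
  }

  // Wraps the state change with the before/after hooks, so a listener sees
  // both the event that caused the change and the resulting state.
  STATE doTransition(EVENT event, STATE newState) {
    for (StateMachineListener<STATE, EVENT> l : listeners) {
      l.preTransition(current, event);
    }
    STATE old = current;
    current = newState;
    for (StateMachineListener<STATE, EVENT> l : listeners) {
      l.postTransition(old, current, event);
    }
    return current;
  }

  STATE getCurrentState() {
    return current;
  }
}
```

With such a hook, the NodeManager could register a debugging or metrics listener without modifying the container state machine itself.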
[jira] [Updated] (YARN-7192) Add a pluggable StateMachine Listener that is notified of NM Container State changes
[ https://issues.apache.org/jira/browse/YARN-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-7192: -- Attachment: YARN-7192.001.patch Attaching initial patch > Add a pluggable StateMachine Listener that is notified of NM Container State > changes > > > Key: YARN-7192 > URL: https://issues.apache.org/jira/browse/YARN-7192 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-7192.001.patch > > > This JIRA is to add support for a pluggable class in the NodeManager that is > notified of changes to the Container StateMachine state and the events that > caused the change. > The proposal is to modify the basic StateMachine class to add support for a > hook that is called before and after a transition. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7193) Implement REST API for register Yarnfile in application catalog
Eric Yang created YARN-7193: --- Summary: Implement REST API for register Yarnfile in application catalog Key: YARN-7193 URL: https://issues.apache.org/jira/browse/YARN-7193 Project: Hadoop YARN Issue Type: Sub-task Components: applications Affects Versions: 3.1.0 Reporter: Eric Yang Assignee: Eric Yang To support the ability to register and index Yarnfiles, we need a set of REST APIs to register, search, and recommend applications from the application catalog. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
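The catalog operations such a REST API would expose can be modeled roughly like this in-memory sketch. The class, method names, and endpoint paths in the comments are assumptions for illustration, not the actual YARN-7193 code:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative in-memory model of register / search / recommend; a real
// implementation would back these with a persistent index behind REST endpoints.
class AppCatalog {

  static class Entry {
    final String name;
    final String description;
    int deployCount; // used to rank "recommended" entries

    Entry(String name, String description) {
      this.name = name;
      this.description = description;
    }
  }

  private final Map<String, Entry> entries = new LinkedHashMap<>();

  // e.g. POST /app_catalog/register - index a Yarnfile under its service name.
  void register(String name, String description) {
    entries.put(name, new Entry(name, description));
  }

  void markDeployed(String name) {
    Entry e = entries.get(name);
    if (e != null) {
      e.deployCount++;
    }
  }

  // e.g. GET /app_catalog/search?q=... - keyword match over name/description.
  List<String> search(String keyword) {
    String q = keyword.toLowerCase();
    return entries.values().stream()
        .filter(e -> e.name.toLowerCase().contains(q)
            || e.description.toLowerCase().contains(q))
        .map(e -> e.name)
        .collect(Collectors.toList());
  }

  // e.g. GET /app_catalog/recommended - most-deployed entries first.
  List<String> recommend(int limit) {
    return entries.values().stream()
        .sorted((a, b) -> Integer.compare(b.deployCount, a.deployCount))
        .limit(limit)
        .map(e -> e.name)
        .collect(Collectors.toList());
  }
}
```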
[jira] [Updated] (YARN-7193) Implement REST API for register Yarnfile in application catalog
[ https://issues.apache.org/jira/browse/YARN-7193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated YARN-7193: Attachment: YARN-7193.yarn-native-services.001.patch Implement REST API for application catalog. > Implement REST API for register Yarnfile in application catalog > --- > > Key: YARN-7193 > URL: https://issues.apache.org/jira/browse/YARN-7193 > Project: Hadoop YARN > Issue Type: Sub-task > Components: applications >Affects Versions: 3.1.0 >Reporter: Eric Yang >Assignee: Eric Yang > Attachments: YARN-7193.yarn-native-services.001.patch > > > For support ability to register and index Yarnfile, we need a set of REST API > to register, search, and recommend applications from application catalog. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7193) Implement REST API for register Yarnfile in application catalog
[ https://issues.apache.org/jira/browse/YARN-7193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165498#comment-16165498 ] Hadoop QA commented on YARN-7193: - (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-YARN-Build/17443/console in case of problems. > Implement REST API for register Yarnfile in application catalog > --- > > Key: YARN-7193 > URL: https://issues.apache.org/jira/browse/YARN-7193 > Project: Hadoop YARN > Issue Type: Sub-task > Components: applications >Affects Versions: 3.1.0 >Reporter: Eric Yang >Assignee: Eric Yang > Attachments: YARN-7193.yarn-native-services.001.patch > > > For support ability to register and index Yarnfile, we need a set of REST API > to register, search, and recommend applications from application catalog. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7193) Implement REST API for register Yarnfile in application catalog
[ https://issues.apache.org/jira/browse/YARN-7193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated YARN-7193: Attachment: YARN-7193.yarn-native-services.002.patch Missed two license files, update license again. > Implement REST API for register Yarnfile in application catalog > --- > > Key: YARN-7193 > URL: https://issues.apache.org/jira/browse/YARN-7193 > Project: Hadoop YARN > Issue Type: Sub-task > Components: applications >Affects Versions: 3.1.0 >Reporter: Eric Yang >Assignee: Eric Yang > Attachments: YARN-7193.yarn-native-services.001.patch, > YARN-7193.yarn-native-services.002.patch > > > For support ability to register and index Yarnfile, we need a set of REST API > to register, search, and recommend applications from application catalog. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7193) Implement REST API for register Yarnfile in application catalog
[ https://issues.apache.org/jira/browse/YARN-7193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165506#comment-16165506 ] Hadoop QA commented on YARN-7193: - (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-YARN-Build/17444/console in case of problems. > Implement REST API for register Yarnfile in application catalog > --- > > Key: YARN-7193 > URL: https://issues.apache.org/jira/browse/YARN-7193 > Project: Hadoop YARN > Issue Type: Sub-task > Components: applications >Affects Versions: 3.1.0 >Reporter: Eric Yang >Assignee: Eric Yang > Attachments: YARN-7193.yarn-native-services.001.patch, > YARN-7193.yarn-native-services.002.patch > > > For support ability to register and index Yarnfile, we need a set of REST API > to register, search, and recommend applications from application catalog. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor
[ https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165527#comment-16165527 ] Miklos Szegedi commented on YARN-6623: -- Thank you for the patch [~vvasudev] and for the reviews [~leftnoteasy] and [~ebadger]. Sorry about the delay, I had a chance to look at the latest patch now. I have some comments but before that, does the jira target 3.0-beta1? {code} 84 printWriter.println(" " + entry.getKey() + "=" + StringUtils 85 .join(",", entry.getValue())); {code} writeCommandToTempFile: entry.getKey() can still contain an = in the latest patch, which is an issue. It probably needs to be filtered in addCommandArguments. {code} 701char *get_config_path(const char *argv0) { 702 char *executable_file = get_executable((char *) argv0); 703 if (!executable_file) { 704fprintf(ERRORFILE, "realpath of executable: %s\n", 705errno != 0 ? strerror(errno) : "unknown"); 706return NULL; 707 } {code} It is probably a good idea to check for {{executable_file\[0\] != 0}} as well {code} 1150 size_t command_size = MIN(sysconf(_SC_ARG_MAX), 128*1024); 1151 char *buffer = alloc_and_clear_memory(command_size, sizeof(char)); 1152 ret = get_docker_command(command_file, &CFG, buffer, EXECUTOR_PATH_MAX); {code} The code passes in a different size than the actual size of the buffer. {code} 157inline void* alloc_and_clear_memory(size_t num, size_t size) { 158 void *ret = calloc(num, size); 159 if (ret == NULL) { 160exit(OUT_OF_MEMORY); 161 } 162 return ret; 163} {code} It might be a good idea to print an error message here. {code} 42static int add_to_buffer(char *buff, const size_t bufflen, const char *string) { {code} Why don't you use strncat inside? It would spare one of the strlen calls. {code} 105if(prefix != 0) { 106 tmp_ptr = strchr(values[i], prefix); 107 if (tmp_ptr == NULL) { ... {code} This feels a little bit less readable. 
I would suggest having a len variable instead of tmp_ptr, defaulted to strlen(tmp_ptr). Also, am I right that we are checking only whether the left device is allowed? {code} 150 if (ret != 0) { 151memset(out, 0, outlen); 152 } {code} out\[0\]=0 should be sufficient, if outlen > 0 and ret != 0 {code} 162 if (0 == strncmp(container_name, CONTAINER_NAME_PREFIX, strlen(CONTAINER_NAME_PREFIX))) { {code} There is no need for an strlen here; sizeof is sufficient and is computed at compile time. {code} 283 ret = add_docker_config_param(&command_config, out, outlen); 284 if (ret != 0) { 285 return BUFFER_TOO_SMALL; {code} Container name is not freed. {code} 330 if (ret != 0) { 331 return BUFFER_TOO_SMALL; 332 } {code} Image name is not freed. {code} 381 quote_and_append_arg(&tmp_buffer, &tmp_buffer_size, " ", image_name); {code} That space might need to be added to the quote_and_append_arg function for safety reasons. {code} 564 * 2. If the path is a directory, add a '/' at the end( if not present) {code} There is a small typo here. {code} 585 if (len <= 0) { 586 return NULL; 587 } {code} There is a missing free here. {code} 731 strncpy(tmp_buffer_2, values[i], strlen(values[i])); 732 strncpy(tmp_buffer_2 + strlen(values[i]), ro_suffix, strlen(ro_suffix)); {code} Why do you use strncpy here? Why not strcpy and strcat? {code} 739 memset(tmp_buffer, 0, tmp_buffer_size); {code} Clearing the first character should be sufficient. 
> Add support to turn off launching privileged containers in the > container-executor > - > > Key: YARN-6623 > URL: https://issues.apache.org/jira/browse/YARN-6623 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev >Priority: Blocker > Attachments: YARN-6623.001.patch, YARN-6623.002.patch, > YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, > YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, > YARN-6623.009.patch > > > Currently, launching privileged containers is controlled by the NM. We should > add a flag to the container-executor.cfg allowing admins to disable launching > privileged containers at the container-executor level. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-3661) Basic Federation UI
[ https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-3661: -- Attachment: YARN-3661-005.patch > Basic Federation UI > > > Key: YARN-3661 > URL: https://issues.apache.org/jira/browse/YARN-3661 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Giovanni Matteo Fumarola >Assignee: Íñigo Goiri > Attachments: YARN-3661-000.patch, YARN-3661-001.patch, > YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, > YARN-3661-005.patch > > > The UIs provided by each RM, provide a correct "local" view of what is > running in a sub-cluster. In the context of federation we need new > UIs that can track load, jobs, users across sub-clusters. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-2664) Improve RM webapp to expose info about reservations.
[ https://issues.apache.org/jira/browse/YARN-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-2664: -- Attachment: YARN-2664.19.patch > Improve RM webapp to expose info about reservations. > > > Key: YARN-2664 > URL: https://issues.apache.org/jira/browse/YARN-2664 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Carlo Curino >Assignee: Íñigo Goiri > Labels: BB2015-05-TBR > Attachments: legal.patch, PlannerPage_screenshot.pdf, > screenshot_reservation_UI.pdf, YARN-2664.10.patch, YARN-2664.11.patch, > YARN-2664.12.patch, YARN-2664.13.patch, YARN-2664.14.patch, > YARN-2664.15.patch, YARN-2664.16.patch, YARN-2664.17.patch, > YARN-2664.18.patch, YARN-2664.19.patch, YARN-2664.1.patch, YARN-2664.2.patch, > YARN-2664.3.patch, YARN-2664.4.patch, YARN-2664.5.patch, YARN-2664.6.patch, > YARN-2664.7.patch, YARN-2664.8.patch, YARN-2664.9.patch, YARN-2664.patch > > > YARN-1051 provides a new functionality in the RM to ask for reservation on > resources. Exposing this through the webapp GUI is important. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6840) Implement zookeeper based store for scheduler configuration updates
[ https://issues.apache.org/jira/browse/YARN-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated YARN-6840: Attachment: YARN-6840-YARN-5734.004.patch > Implement zookeeper based store for scheduler configuration updates > --- > > Key: YARN-6840 > URL: https://issues.apache.org/jira/browse/YARN-6840 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Jonathan Hung > Attachments: YARN-6840-YARN-5734.001.patch, > YARN-6840-YARN-5734.002.patch, YARN-6840-YARN-5734.003.patch, > YARN-6840-YARN-5734.004.patch > > > Right now there is only in-memory and leveldb based configuration store > supported. Need one which supports RM HA. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7192) Add a pluggable StateMachine Listener that is notified of NM Container State changes
[ https://issues.apache.org/jira/browse/YARN-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165560#comment-16165560 ] Hadoop QA commented on YARN-7192: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 8 new + 440 unchanged - 0 fixed = 448 total (was 440) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 34s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 35s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 35s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 73m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields | | | hadoop.yarn.client.api.impl.TestTimelineClientV2Impl | | | hadoop.yarn.server.nodemanager.TestNodeStatusUpdater | | | hadoop.yarn.server.nodemanager.TestNodeManagerResync | | | hadoop.yarn.server.nodemanager.TestNodeManagerShutdown | | | hadoop.yarn.server.nodemanager.TestNodeManagerReboot | | | hadoop.yarn.server.nodemanager.TestNodeStatusUpdaterForLabels | | | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | | | hadoop.yarn.server.nodemanager.containermanager.container.TestContainer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | YARN-7192 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886984/YARN-7192.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit
[jira] [Commented] (YARN-6840) Implement zookeeper based store for scheduler configuration updates
[ https://issues.apache.org/jira/browse/YARN-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165563#comment-16165563 ] Jonathan Hung commented on YARN-6840: - Thanks [~leftnoteasy] for the review. Attached 004 addressing most of these comments: bq. Move following code: To refreshAll, and parameter isActiveTransition and private void refreshQueues can be removed. Done bq. Is it a better idea to move public void refreshQueues() logic to scheduler#reinitialize(...). It's not a good idea to invoke AdminService#refreshQueues from MutableCSConfigurationProvider. Not sure about this, then we are doing the reservation system reinitialization inside scheduler, so every time scheduler#reinitialize is called, the reservation system is also initialized, not sure if this is the desired behavior. Also we would need to duplicate the reservation system reinitialization for all schedulers, or make ResourceScheduler an abstract class and add it there. Unless you meant duplicating the reservation system reinitialization logic inside MutableCSConfigurationProvider#mutateConfiguration? I think this makes more sense, but then we have duplicate code between this and AdminService. bq. In MutableCSConfigurationProvider: It's better to remove:... Done bq. In MutableConfScheduler: Similar to above, it's better to remove:... Done bq. refreshConfiguration -> reloadConfigurationFromStore Done bq. createAndStartZKManager can be merged to rm#getZKManager() Renamed to getAndStartZKManager bq. Getter API of ResourceManager should be exposed by RMContext. We should never use ref to ResourceManager directly. Done for ZKConfigurationStore, I left the references in ZKRMStateStore since this class has other direct references to rm object. We can handle this in a separate ticket if you'd like. bq. setResourceManager can be removed and you can pass RMContext ref to initialize. Done bq. 
What happens if a Configuration schedConf is passed to an already initialized store? For leveldb and zk, it will ignore it and use the scheduler configuration persisted in the store. bq. Could you add Javadocs to following methods: Done > Implement zookeeper based store for scheduler configuration updates > --- > > Key: YARN-6840 > URL: https://issues.apache.org/jira/browse/YARN-6840 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Jonathan Hung > Attachments: YARN-6840-YARN-5734.001.patch, > YARN-6840-YARN-5734.002.patch, YARN-6840-YARN-5734.003.patch, > YARN-6840-YARN-5734.004.patch > > > Right now there is only in-memory and leveldb based configuration store > supported. Need one which supports RM HA. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
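The initialization behavior discussed above (a previously persisted configuration wins over the bootstrap schedConf on restart) can be sketched as follows. Names are illustrative, and a plain map stands in for the ZK/leveldb backing store:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of "ignore the passed-in config if the store is already
// initialized". A real store would read/write ZooKeeper or leveldb; a map
// stands in here, and the class/method names are assumptions.
class PersistedConfStore {

  private final Map<String, String> backing; // stands in for ZK/leveldb

  PersistedConfStore(Map<String, String> backing) {
    this.backing = backing;
  }

  // On the first start the store is seeded from the bootstrap schedConf;
  // on every later start the persisted configuration is used instead.
  Map<String, String> initialize(Map<String, String> schedConf) {
    if (backing.isEmpty()) {
      backing.putAll(schedConf); // first start: seed the store
    }
    return new HashMap<>(backing); // otherwise the persisted copy wins
  }
}
```

This is what makes the store safe across RM restarts and HA failover: mutations applied via the configuration API survive, rather than being silently reset to the on-disk capacity-scheduler.xml.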
[jira] [Created] (YARN-7194) Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat
Xuan Gong created YARN-7194: --- Summary: Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat Key: YARN-7194 URL: https://issues.apache.org/jira/browse/YARN-7194 Project: Hadoop YARN Issue Type: Sub-task Reporter: Xuan Gong Assignee: Xuan Gong -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6840) Implement zookeeper based store for scheduler configuration updates
[ https://issues.apache.org/jira/browse/YARN-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165563#comment-16165563 ] Jonathan Hung edited comment on YARN-6840 at 9/14/17 1:06 AM: -- Thanks [~leftnoteasy] for the review. Attached 004 addressing most of these comments. Also added a couple of sleeps in TestZKConfigurationStore due to a race condition to fix some test failures. bq. Move following code: To refreshAll, and parameter isActiveTransition and private void refreshQueues can be removed. Done bq. Is it a better idea to move public void refreshQueues() logic to scheduler#reinitialize(...). It's not a good idea to invoke AdminService#refreshQueues from MutableCSConfigurationProvider. Not sure about this, then we are doing the reservation system reinitialization inside scheduler, so every time scheduler#reinitialize is called, the reservation system is also initialized, not sure if this is the desired behavior. Also we would need to duplicate the reservation system reinitialization for all schedulers, or make ResourceScheduler an abstract class and add it there. Unless you meant duplicating the reservation system reinitialization logic inside MutableCSConfigurationProvider#mutateConfiguration? I think this makes more sense, but then we have duplicate code between this and AdminService. bq. In MutableCSConfigurationProvider: It's better to remove:... Done bq. In MutableConfScheduler: Similar to above, it's better to remove:... Done bq. refreshConfiguration -> reloadConfigurationFromStore Done bq. createAndStartZKManager can be merged to rm#getZKManager() Renamed to getAndStartZKManager bq. Getter API of ResourceManager should be exposed by RMContext. We should never use ref to ResourceManager directly. Done for ZKConfigurationStore, I left the references in ZKRMStateStore since this class has other direct references to rm object. We can handle this in a separate ticket if you'd like. bq. 
setResourceManager can be removed and you can pass RMContext ref to initialize. Done bq. What happens if Configuration schedConf passed to a already initialized store? For leveldb and zk, it will ignore it and use the scheduler configuration persisted in the store. bq. Could you add Javadocs to following methods: Done was (Author: jhung): Thanks [~leftnoteasy] for the review. Attached 004 addressing most of these comments: bq. Move following code: To refreshAll, and parameter isActiveTransition and private void refreshQueues can be removed. Done bq. Is it a better idea to move public void refreshQueues() logic to scheduler#reinitialize(...). It's not a good idea to invoke AdminService#refreshQueues from MutableCSConfigurationProvider. Not sure about this, then we are doing the reservation system reinitialization inside scheduler, so every time scheduler#reinitialize is called, the reservation system is also initialized, not sure if this is the desired behavior. Also we would need to duplicate the reservation system reinitialization for all schedulers, or make ResourceScheduler an abstract class and add it there. Unless you meant duplicating the reservation system reinitialization logic inside MutableCSConfigurationProvider#mutateConfiguration? I think this makes more sense, but then we have duplicate code between this and AdminService. bq. In MutableCSConfigurationProvider: It's better to remove:... Done bq. In MutableConfScheduler: Similar to above, it's better to remove:... Done bq. refreshConfiguration -> reloadConfigurationFromStore Done bq. createAndStartZKManager can be merged to rm#getZKManager() Renamed to getAndStartZKManager bq. Getter API of ResourceManager should be exposed by RMContext. We should never use ref to ResourceManager directly. Done for ZKConfigurationStore, I left the references in ZKRMStateStore since this class has other direct references to rm object. We can handle this in a separate ticket if you'd like. bq. 
setResourceManager can be removed and you can pass RMContext ref to initialize. Done bq. What happens if Configuration schedConf passed to a already initialized store? For leveldb and zk, it will ignore it and use the scheduler configuration persisted in the store. bq. Could you add Javadocs to following methods: Done > Implement zookeeper based store for scheduler configuration updates > --- > > Key: YARN-6840 > URL: https://issues.apache.org/jira/browse/YARN-6840 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Jonathan Hung > Attachments: YARN-6840-YARN-5734.001.patch, > YARN-6840-YARN-5734.002.patch, YARN-6840-YARN-5734.003.patch, > YARN-6840-YARN-5734.004.patch > > > Right now there is only in-memory and leveldb based configuration store > supported.
[jira] [Commented] (YARN-7194) Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat
[ https://issues.apache.org/jira/browse/YARN-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165572#comment-16165572 ] Xuan Gong commented on YARN-7194: - Uploaded a simple fix: When we are not in partial log aggregation, we would not try to delete the checksum file in postWrite > Log aggregation status is always Failed with the newly added log aggregation > IndexedFileFormat > -- > > Key: YARN-7194 > URL: https://issues.apache.org/jira/browse/YARN-7194 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7194) Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat
[ https://issues.apache.org/jira/browse/YARN-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165571#comment-16165571 ] Xuan Gong commented on YARN-7194: - It happens when we are *not* in partial log aggregation. In that case, we do not create an intermediate checksum file, so when we reach postWrite we call {code} deleteFileWithRetries(fc, ugi, remoteLogCheckSumFile); {code} which throws an exception complaining that the remoteLogCheckSumFile cannot be found. So we mark the log aggregation status as Failed when we catch the exception. > Log aggregation status is always Failed with the newly added log aggregation > IndexedFileFormat > -- > > Key: YARN-7194 > URL: https://issues.apache.org/jira/browse/YARN-7194 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
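The fix described above amounts to guarding the checksum-file deletion on whether partial aggregation actually created one. A minimal sketch (illustrative names only, not the actual IndexedFileFormat code; a set of paths stands in for the remote filesystem):

```java
import java.io.FileNotFoundException;
import java.util.Set;

// Illustrative sketch: only attempt to delete the intermediate checksum file
// when partial log aggregation created one, so a missing file no longer
// surfaces as a Failed aggregation status.
class ChecksumCleanup {

  // 'fs' stands in for the remote filesystem: the set of paths that exist.
  // Returns true when cleanup succeeded (i.e. aggregation is not marked Failed).
  static boolean postWrite(boolean partialAggregation, Set<String> fs,
      String checksumFile) {
    if (!partialAggregation) {
      return true; // no checksum file was ever written; nothing to delete
    }
    try {
      if (!fs.remove(checksumFile)) {
        throw new FileNotFoundException(checksumFile);
      }
      return true;
    } catch (FileNotFoundException e) {
      return false; // previously surfaced as a Failed aggregation status
    }
  }
}
```

With the guard in place, a non-partial aggregation run never touches the never-created checksum file, so the missing-file exception cannot occur.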
[jira] [Updated] (YARN-7194) Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat
[ https://issues.apache.org/jira/browse/YARN-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-7194: Attachment: YARN-7194.1.patch > Log aggregation status is always Failed with the newly added log aggregation > IndexedFileFormat > -- > > Key: YARN-7194 > URL: https://issues.apache.org/jira/browse/YARN-7194 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-7194.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7194) Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat
[ https://issues.apache.org/jira/browse/YARN-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165576#comment-16165576 ] Xuan Gong commented on YARN-7194:
-
Trivial fix without testcases

> Log aggregation status is always Failed with the newly added log aggregation
> IndexedFileFormat
> --
>
> Key: YARN-7194
> URL: https://issues.apache.org/jira/browse/YARN-7194
> Project: Hadoop YARN
> Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7194.1.patch
>
>
[jira] [Commented] (YARN-3661) Basic Federation UI
[ https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165595#comment-16165595 ] Hadoop QA commented on YARN-3661:
-
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 13s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 19s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.router.webapp.TestNodesPage |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-3661 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886993/YARN-3661-005.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle |
| uname | Linux 3cd1fd1fdcf7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchproces
[jira] [Commented] (YARN-7194) Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat
[ https://issues.apache.org/jira/browse/YARN-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165603#comment-16165603 ] Hadoop QA commented on YARN-7194:
-
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 29s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 36s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 16s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| | Potentially dangerous use of non-short-circuit logic in org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.postWrite(LogAggregationFileControllerContext) At LogAggregationIndexedFileController.java:[line 396] |
| Failed junit tests | hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7194 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887001/YARN-7194.1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 25786756f7d4 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 73aed34 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17448/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html |
| unit | https://b
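For context on the FindBugs -1 above ("Potentially dangerous use of non-short-circuit logic" in postWrite): in Java, the & and | operators on boolean operands always evaluate both sides, while && and || short-circuit and skip the right-hand side when the left already decides the result. A minimal, self-contained illustration; the class and method names here are invented for the demo and are not taken from the patch:

```java
// Illustrates why FindBugs flags non-short-circuit '&' on boolean operands:
// the right-hand operand runs even when the left is false.
public class ShortCircuitDemo {

  // '&' evaluates both operands, so path.startsWith(...) runs even when
  // path == null, throwing a NullPointerException.
  public static boolean unsafeCheck(String path) {
    return path != null & path.startsWith("/tmp");
  }

  // '&&' short-circuits: when path == null the right side is never
  // evaluated and the method simply returns false.
  public static boolean safeCheck(String path) {
    return path != null && path.startsWith("/tmp");
  }
}
```

The same distinction holds for | versus ||, which is why such warnings are usually fixed by switching to the short-circuit operator.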
[jira] [Updated] (YARN-3661) Basic Federation UI
[ https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-3661:
--
Attachment: YARN-3661-006.patch

> Basic Federation UI
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch,
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch,
> YARN-3661-005.patch, YARN-3661-006.patch
>
> The UIs provided by each RM provide a correct "local" view of what is
> running in a sub-cluster. In the context of federation we need new
> UIs that can track load, jobs, and users across sub-clusters.