[jira] [Commented] (YARN-9038) [CSI] Add ability to publish/unpublish volumes on node managers
[ https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726514#comment-16726514 ] Weiwei Yang commented on YARN-9038: --- Hi [~sunilg], could you please help review the latest patch? Thanks
> [CSI] Add ability to publish/unpublish volumes on node managers
> ---
>
> Key: YARN-9038
> URL: https://issues.apache.org/jira/browse/YARN-9038
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Priority: Major
> Labels: CSI
> Attachments: YARN-9038.001.patch, YARN-9038.002.patch, YARN-9038.003.patch, YARN-9038.004.patch, YARN-9038.005.patch, YARN-9038.006.patch
>
> We need to add the ability to publish volumes on node managers into a staging area under the NM's local dir, and then mount that path into the docker container to make it visible inside the container.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
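The publish flow described in this issue (stage the volume under the NM local dir, then bind-mount the staged path into the docker container) can be sketched roughly as follows. This is an illustrative sketch only; `stagingPath`, `bindMountArg`, and the directory layout are assumptions, not the actual YARN-9038 patch API.

```java
import java.nio.file.Paths;

// Illustrative sketch of the publish flow discussed above: a volume is staged
// under the NM's local dir, and the staged path is expressed as a docker
// bind-mount so it becomes visible inside the container. All names here are
// hypothetical, not the actual patch API.
public class VolumeStagingSketch {

    // e.g. nmLocalDir = /var/lib/yarn/nm-local-dir, volumeId = vol-001
    static String stagingPath(String nmLocalDir, String volumeId) {
        return Paths.get(nmLocalDir, "csi", "staging", volumeId).toString();
    }

    // A docker bind-mount spec: host path -> container path.
    static String bindMountArg(String hostPath, String containerPath) {
        return hostPath + ":" + containerPath;
    }

    public static void main(String[] args) {
        String staged = stagingPath("/var/lib/yarn/nm-local-dir", "vol-001");
        System.out.println("staged at: " + staged);
        System.out.println("docker -v " + bindMountArg(staged, "/data"));
    }
}
```

Unpublish would be the inverse: unmount the path from the container, then clean up the staging directory.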
[jira] [Commented] (YARN-9038) [CSI] Add ability to publish/unpublish volumes on node managers
[ https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726485#comment-16726485 ] Hadoop QA commented on YARN-9038:
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 23s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 39s | Maven dependency ordering for branch |
| +1 | mvninstall | 21m 55s | trunk passed |
| +1 | compile | 9m 54s | trunk passed |
| +1 | checkstyle | 1m 35s | trunk passed |
| +1 | mvnsite | 4m 45s | trunk passed |
| +1 | shadedclient | 18m 36s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 7m 36s | trunk passed |
| +1 | javadoc | 3m 13s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 6s | the patch passed |
| +1 | compile | 8m 44s | the patch passed |
| +1 | cc | 8m 44s | the patch passed |
| +1 | javac | 8m 44s | the patch passed |
| -0 | checkstyle | 1m 34s | hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 494 unchanged - 0 fixed = 495 total (was 494) |
| +1 | mvnsite | 4m 36s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 3s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 14m 8s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 8m 31s | the patch passed |
| +1 | javadoc | 3m 19s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 50s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 30s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 19m 36s | hadoop-yarn-server-nodemanager in the patch passed. |
| -1 | unit | 92m 47s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | unit | 17m 34s | hadoop-yarn-services-core in the patch passed. |
| +1 | unit | 0m 41s | hadoop-yarn-csi in the patch passed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
[jira] [Commented] (YARN-9149) yarn container -status misses logUrl when integrated with ATSv2
[ https://issues.apache.org/jira/browse/YARN-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726473#comment-16726473 ] Abhishek Modi commented on YARN-9149: - I tested it and I was getting the start time and end time in the container report:
{code}
Container Report :
  Container-Id : container_1545304151367_0001_01_01
  Start-Time : 1545304173615
  Finish-Time : 1545304181667
  State : COMPLETE
  Execution-Type : GUARANTEED
  LOG-URL : null
  Host : abmodi:39365
  NodeHttpAddress : http://abmodi:8042
  Diagnostics :
{code}
[~rohithsharma], could you please share more details on where you found the start time and finish time missing? Meanwhile I will fix the log-URL issue as part of this jira.
> yarn container -status misses logUrl when integrated with ATSv2
> ---
>
> Key: YARN-9149
> URL: https://issues.apache.org/jira/browse/YARN-9149
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Rohith Sharma K S
> Assignee: Abhishek Modi
> Priority: Major
>
> Post YARN-8303, the yarn client can be integrated with ATSv2, but the log URL and the start/end times it prints are wrong:
> {code}
> Container Report :
>   Container-Id : container_1545035586969_0001_01_01
>   Start-Time : 0
>   Finish-Time : 0
>   State : COMPLETE
>   Execution-Type : GUARANTEED
>   LOG-URL : null
>   Host : localhost:25006
>   NodeHttpAddress : localhost:25008
>   Diagnostics :
> {code}
> # TimelineEntityV2Converter#convertToContainerReport sets logUrl to *null*. It needs to be set to a proper log URL based on yarn.log.server.web-service.url
> # TimelineEntityV2Converter#convertToContainerReport parses the start/end times wrongly. The comparison should happen with the entityType, but the code below uses the entityId:
> {code}
> if (events != null) {
>   for (TimelineEvent event : events) {
>     if (event.getId().equals(
>         ContainerMetricsConstants.CREATED_IN_RM_EVENT_TYPE)) {
>       createdTime = event.getTimestamp();
>     } else if (event.getId().equals(
>         ContainerMetricsConstants.FINISHED_IN_RM_EVENT_TYPE)) {
>       finishedTime = event.getTimestamp();
>     }
>   }
> }
> {code}
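The parsing bug quoted above boils down to how each timeline event is matched against the CREATED/FINISHED constants before its timestamp is taken. A minimal, self-contained sketch of that mapping, with simplified stand-ins for the real ATSv2 classes (the constant values and the `TimelineEvent` shape here are illustrative, not the actual Hadoop types):

```java
import java.util.List;

// Simplified stand-in for the event-to-timestamp mapping described in the
// issue: pick the created/finished timestamps by comparing each event's type
// against the expected constants. Constant values are illustrative.
public class ContainerEventSketch {
    static final String CREATED = "YARN_RM_CONTAINER_CREATED";
    static final String FINISHED = "YARN_RM_CONTAINER_FINISHED";

    static class TimelineEvent {
        final String type;
        final long timestamp;
        TimelineEvent(String type, long timestamp) {
            this.type = type;
            this.timestamp = timestamp;
        }
    }

    // Returns {createdTime, finishedTime}; 0 when the matching event is absent.
    static long[] parseTimes(List<TimelineEvent> events) {
        long created = 0, finished = 0;
        if (events != null) {
            for (TimelineEvent e : events) {
                if (CREATED.equals(e.type)) {
                    created = e.timestamp;
                } else if (FINISHED.equals(e.type)) {
                    finished = e.timestamp;
                }
            }
        }
        return new long[] {created, finished};
    }
}
```

If the strings compared on each side come from different namespaces (an event id vs. an event type constant), the branches never match and both times stay 0, which is exactly the `Start-Time : 0` / `Finish-Time : 0` symptom in the report above.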
[jira] [Commented] (YARN-8925) Updating distributed node attributes only when necessary
[ https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726451#comment-16726451 ] Tao Yang commented on YARN-8925: Thanks [~cheersyang] for the review and commit. Attached a patch for branch-3.2 which resolves the conflicts in the test cases and fixes several checkstyle warnings.
> Updating distributed node attributes only when necessary
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: 3.2.1
> Reporter: Tao Yang
> Assignee: Tao Yang
> Priority: Major
> Labels: performance
> Attachments: YARN-8925-branch-3.2.001.patch, YARN-8925.001.patch, YARN-8925.002.patch, YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch, YARN-8925.006.patch, YARN-8925.007.patch, YARN-8925.008.patch, YARN-8925.009.patch, YARN-8925.010.patch
>
> Currently, if distributed node attributes exist, an update happens on every heartbeat between the NM and the RM even when nothing has changed. The update holds NodeAttributesManagerImpl#writeLock and can hurt performance in a large cluster: we have seen the nodes UI of a large cluster open slowly, with most of the time spent waiting for that lock. The update should run only when necessary.
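The optimization this issue describes can be sketched as: remember the last reported attribute set and skip the write-lock-holding update when nothing has changed. A hedged sketch with illustrative names, not the patch's actual API:

```java
import java.util.Set;

// Hedged sketch of the YARN-8925 idea: the last attribute set sent in a
// heartbeat is remembered, and the expensive (write-lock-holding) update on
// the RM side is skipped when the set is unchanged. Names are illustrative.
public class AttributeUpdateSketch {
    private Set<String> lastReported; // attributes sent in the previous heartbeat

    // Returns true only when an update actually needs to be pushed.
    boolean needsUpdate(Set<String> current) {
        if (current.equals(lastReported)) {
            return false; // unchanged: avoid taking the write lock at all
        }
        lastReported = Set.copyOf(current);
        return true;
    }
}
```

The key design point is that the equality check happens before any lock is taken, so steady-state heartbeats with static attributes become lock-free on this path.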
[jira] [Updated] (YARN-8925) Updating distributed node attributes only when necessary
[ https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Yang updated YARN-8925: --- Attachment: YARN-8925-branch-3.2.001.patch
> Updating distributed node attributes only when necessary
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: 3.2.1
> Reporter: Tao Yang
> Assignee: Tao Yang
> Priority: Major
> Labels: performance
> Attachments: YARN-8925-branch-3.2.001.patch, YARN-8925.001.patch, YARN-8925.002.patch, YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch, YARN-8925.006.patch, YARN-8925.007.patch, YARN-8925.008.patch, YARN-8925.009.patch, YARN-8925.010.patch
>
> Currently, if distributed node attributes exist, an update happens on every heartbeat between the NM and the RM even when nothing has changed. The update holds NodeAttributesManagerImpl#writeLock and can hurt performance in a large cluster: we have seen the nodes UI of a large cluster open slowly, with most of the time spent waiting for that lock. The update should run only when necessary.
[jira] [Commented] (YARN-8925) Updating distributed node attributes only when necessary
[ https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726427#comment-16726427 ] Weiwei Yang commented on YARN-8925: --- Hi [~Tao Yang], I've committed the patch to trunk, but I also want to get this into branch-3.2. It has some conflicts, though, mainly in the test case classes. Could you help provide a patch for branch-3.2? Thanks
> Updating distributed node attributes only when necessary
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: 3.2.1
> Reporter: Tao Yang
> Assignee: Tao Yang
> Priority: Major
> Labels: performance
> Attachments: YARN-8925.001.patch, YARN-8925.002.patch, YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch, YARN-8925.006.patch, YARN-8925.007.patch, YARN-8925.008.patch, YARN-8925.009.patch, YARN-8925.010.patch
>
> Currently, if distributed node attributes exist, an update happens on every heartbeat between the NM and the RM even when nothing has changed. The update holds NodeAttributesManagerImpl#writeLock and can hurt performance in a large cluster: we have seen the nodes UI of a large cluster open slowly, with most of the time spent waiting for that lock. The update should run only when necessary.
[jira] [Commented] (YARN-8925) Updating distributed node attributes only when necessary
[ https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726425#comment-16726425 ] Hudson commented on YARN-8925: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15651 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15651/])
YARN-8925. Updating distributed node attributes only when necessary. (wwei: rev f659485ee83f3f34e3717631983adfc8fa1e53dc)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RegisterNodeManagerResponse.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelUtil.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RegisterNodeManagerRequestPBImpl.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NodeHeartbeatResponsePBImpl.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RegisterNodeManagerResponsePBImpl.java
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdaterForAttributes.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/NodeAttributesManagerImpl.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RegisterNodeManagerRequest.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/proto/yarn_server_common_service_protos.proto
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestNodeLabelUtil.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NodeHeartbeatResponse.java
> Updating distributed node attributes only when necessary
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: 3.2.1
> Reporter: Tao Yang
> Assignee: Tao Yang
> Priority: Major
> Labels: performance
> Attachments: YARN-8925.001.patch, YARN-8925.002.patch, YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch, YARN-8925.006.patch, YARN-8925.007.patch, YARN-8925.008.patch, YARN-8925.009.patch, YARN-8925.010.patch
>
> Currently, if distributed node attributes exist, an update happens on every heartbeat between the NM and the RM even when nothing has changed. The update holds NodeAttributesManagerImpl#writeLock and can hurt performance in a large cluster: we have seen the nodes UI of a large cluster open slowly, with most of the time spent waiting for that lock. The update should run only when necessary.
[jira] [Commented] (YARN-8925) Updating distributed node attributes only when necessary
[ https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726401#comment-16726401 ] Weiwei Yang commented on YARN-8925: --- Hi [~Tao Yang], yep, I think I forgot to set that config. Since this was validated on your cluster with the above configuration and works fine, I have no objections. +1 for the v10 patch. There are some minor checkstyle issues; I can take care of them while committing. Thanks for getting this done.
> Updating distributed node attributes only when necessary
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: 3.2.1
> Reporter: Tao Yang
> Assignee: Tao Yang
> Priority: Major
> Labels: performance
> Attachments: YARN-8925.001.patch, YARN-8925.002.patch, YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch, YARN-8925.006.patch, YARN-8925.007.patch, YARN-8925.008.patch, YARN-8925.009.patch, YARN-8925.010.patch
>
> Currently, if distributed node attributes exist, an update happens on every heartbeat between the NM and the RM even when nothing has changed. The update holds NodeAttributesManagerImpl#writeLock and can hurt performance in a large cluster: we have seen the nodes UI of a large cluster open slowly, with most of the time spent waiting for that lock. The update should run only when necessary.
[jira] [Updated] (YARN-9038) [CSI] Add ability to publish/unpublish volumes on node managers
[ https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-9038: -- Attachment: YARN-9038.006.patch
> [CSI] Add ability to publish/unpublish volumes on node managers
> ---
>
> Key: YARN-9038
> URL: https://issues.apache.org/jira/browse/YARN-9038
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Priority: Major
> Labels: CSI
> Attachments: YARN-9038.001.patch, YARN-9038.002.patch, YARN-9038.003.patch, YARN-9038.004.patch, YARN-9038.005.patch, YARN-9038.006.patch
>
> We need to add the ability to publish volumes on node managers into a staging area under the NM's local dir, and then mount that path into the docker container to make it visible inside the container.
[jira] [Commented] (YARN-5168) Add port mapping handling when docker container use bridge network
[ https://issues.apache.org/jira/browse/YARN-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726387#comment-16726387 ] Xun Liu commented on YARN-5168: --- [~eyang], sorry, I neglected to check and missed it. I will fix this problem right away.
> Add port mapping handling when docker container use bridge network
> --
>
> Key: YARN-5168
> URL: https://issues.apache.org/jira/browse/YARN-5168
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Jun Gong
> Assignee: Xun Liu
> Priority: Major
> Labels: Docker
> Attachments: YARN-5168.001.patch, YARN-5168.002.patch, YARN-5168.003.patch, YARN-5168.004.patch, YARN-5168.005.patch, YARN-5168.006.patch, YARN-5168.007.patch, YARN-5168.008.patch, YARN-5168.009.patch, YARN-5168.010.patch, YARN-5168.011.patch, YARN-5168.012.patch, YARN-5168.013.patch, YARN-5168.014.patch, YARN-5168.015.patch, YARN-5168.016.patch, YARN-5168.017.patch, YARN-5168.018.patch, YARN-5168.019.patch, YARN-5168.020.patch, exposedPorts1.png, exposedPorts2.png
>
> YARN-4007 addresses different network setups when launching the docker container. We need to support port mapping when the docker container uses a bridge network.
> The problems we faced are:
> 1. Add "-P" to map the docker container's exposed ports automatically.
> 2. Add "-p" to let the user specify specific ports to map.
> 3. Add service registry support for the bridge network case, so apps can find each other. This could be done outside of YARN, but it might be more convenient to support it natively in YARN.
[jira] [Updated] (YARN-5168) Add port mapping handling when docker container use bridge network
[ https://issues.apache.org/jira/browse/YARN-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-5168: -- Attachment: YARN-5168.020.patch
> Add port mapping handling when docker container use bridge network
> --
>
> Key: YARN-5168
> URL: https://issues.apache.org/jira/browse/YARN-5168
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Jun Gong
> Assignee: Xun Liu
> Priority: Major
> Labels: Docker
> Attachments: YARN-5168.001.patch, YARN-5168.002.patch, YARN-5168.003.patch, YARN-5168.004.patch, YARN-5168.005.patch, YARN-5168.006.patch, YARN-5168.007.patch, YARN-5168.008.patch, YARN-5168.009.patch, YARN-5168.010.patch, YARN-5168.011.patch, YARN-5168.012.patch, YARN-5168.013.patch, YARN-5168.014.patch, YARN-5168.015.patch, YARN-5168.016.patch, YARN-5168.017.patch, YARN-5168.018.patch, YARN-5168.019.patch, YARN-5168.020.patch, exposedPorts1.png, exposedPorts2.png
>
> YARN-4007 addresses different network setups when launching the docker container. We need to support port mapping when the docker container uses a bridge network.
> The problems we faced are:
> 1. Add "-P" to map the docker container's exposed ports automatically.
> 2. Add "-p" to let the user specify specific ports to map.
> 3. Add service registry support for the bridge network case, so apps can find each other. This could be done outside of YARN, but it might be more convenient to support it natively in YARN.
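Points 1 and 2 of the issue description amount to translating user-requested mappings into docker's `-p`/`-P` flags. A hypothetical sketch of that translation (illustrative names only, not the actual YARN-5168 code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: build the docker port-publishing arguments the issue
// discusses. "-P" publishes all exposed ports on auto-assigned host ports;
// "-p host:container" maps a specific pair. Not the actual patch code.
public class PortMappingSketch {

    static List<String> portArgs(Map<Integer, Integer> hostToContainer, boolean publishAll) {
        List<String> args = new ArrayList<>();
        if (publishAll) {
            args.add("-P"); // let docker pick host ports for all exposed ports
        }
        hostToContainer.forEach((host, container) -> {
            args.add("-p");
            args.add(host + ":" + container);
        });
        return args;
    }
}
```

For example, a single requested mapping of host port 8042 to container port 80 would yield the arguments `-p 8042:80` appended to the `docker run` command line.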
[jira] [Commented] (YARN-9152) Auxiliary service REST API query does not return running services
[ https://issues.apache.org/jira/browse/YARN-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726354#comment-16726354 ] Hudson commented on YARN-9152: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15650 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15650/])
YARN-9152. Improved AuxServices REST API output. Contributed (eyang: rev a80d32107498ea4b15b5a7c7d142ec41c387129a)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/AuxiliaryServicesInfo.java
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesAuxServices.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/JAXBContextResolver.java
> Auxiliary service REST API query does not return running services
> -
>
> Key: YARN-9152
> URL: https://issues.apache.org/jira/browse/YARN-9152
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Eric Yang
> Assignee: Billie Rinaldi
> Priority: Major
> Fix For: 3.3.0
> Attachments: YARN-9152.1.patch
>
> The auxiliary service is configured with:
> {code}
> {
>   "services": [
>     {
>       "name": "mapreduce_shuffle",
>       "version": "2",
>       "configuration": {
>         "properties": {
>           "class.name": "org.apache.hadoop.mapred.ShuffleHandler",
>           "mapreduce.shuffle.transfer.buffer.size": "102400",
>           "mapreduce.shuffle.port": "13563"
>         }
>       }
>     }
>   ]
> }
> {code}
> The node manager log shows the service is registered:
> {code}
> 2018-12-19 22:38:57,466 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Reading auxiliary services manifest hdfs:/tmp/aux.json
> 2018-12-19 22:38:57,827 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Initialized auxiliary service mapreduce_shuffle
> 2018-12-19 22:38:57,828 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Adding auxiliary service mapreduce_shuffle version 2
> {code}
> But a REST API query shows no services:
> {code}
> $ curl --negotiate -u : http://eyang-3.openstacklocal:8042/ws/v1/node/auxiliaryservices
> {"services":{}}
> {code}
[jira] [Commented] (YARN-9132) Add file permission check for auxiliary services manifest file
[ https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726338#comment-16726338 ] Hadoop QA commented on YARN-9132:
(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 20m 51s | trunk passed |
| +1 | compile | 1m 0s | trunk passed |
| +1 | checkstyle | 0m 29s | trunk passed |
| +1 | mvnsite | 0m 38s | trunk passed |
| +1 | shadedclient | 13m 10s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 57s | trunk passed |
| +1 | javadoc | 0m 24s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 38s | the patch passed |
| +1 | compile | 0m 57s | the patch passed |
| +1 | javac | 0m 57s | the patch passed |
| +1 | checkstyle | 0m 23s | the patch passed |
| +1 | mvnsite | 0m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 32s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 3s | the patch passed |
| +1 | javadoc | 0m 22s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 18m 56s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 74m 36s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9132 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12952589/YARN-9132.3.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 3a10732b693c 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a668f8e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22937/testReport/ |
| Max. process+thread count | 338 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/22937/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated.
> Add file permission check for auxiliary
[jira] [Commented] (YARN-9152) Auxiliary service REST API query does not return running services
[ https://issues.apache.org/jira/browse/YARN-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726343#comment-16726343 ] Hadoop QA commented on YARN-9152: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 7 new + 6 unchanged - 0 fixed = 13 total (was 6) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 45s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9152 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12952586/YARN-9152.1.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 326bd4636615 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a668f8e | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22938/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22938/testReport/ | | Max. process+thread count | 411 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U:
[jira] [Commented] (YARN-9131) Document usage of Dynamic auxiliary services
[ https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726336#comment-16726336 ] Hudson commented on YARN-9131: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15649 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15649/]) YARN-9131. Updated document usage for dynamic auxiliary service. (eyang: rev 7affa3053c9660ea8aee2e3bfe748bbefbae16ea) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRest.md * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md * (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/PluggableShuffleAndPluggableSort.md > Document usage of Dynamic auxiliary services > > > Key: YARN-9131 > URL: https://issues.apache.org/jira/browse/YARN-9131 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Billie Rinaldi >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch, > YARN-9131.4.patch, YARN-9131.5.patch > > > This is a follow up issue to document YARN-9075 for admin to control which > aux service to add or remove. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9131) Document usage of Dynamic auxiliary services
[ https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726323#comment-16726323 ] Hadoop QA commented on YARN-9131: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 39m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 21s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 58m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9131 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12952587/YARN-9131.5.patch | | Optional Tests | dupname asflicense mvnsite | | uname | Linux e78620b3828e 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a668f8e | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 308 (vs. ulimit of 1) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/22936/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Document usage of Dynamic auxiliary services > > > Key: YARN-9131 > URL: https://issues.apache.org/jira/browse/YARN-9131 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Billie Rinaldi >Priority: Major > Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch, > YARN-9131.4.patch, YARN-9131.5.patch > > > This is a follow up issue to document YARN-9075 for admin to control which > aux service to add or remove. 

[jira] [Updated] (YARN-9132) Add file permission check for auxiliary services manifest file
[ https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-9132: - Attachment: YARN-9132.3.patch > Add file permission check for auxiliary services manifest file > -- > > Key: YARN-9132 > URL: https://issues.apache.org/jira/browse/YARN-9132 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Billie Rinaldi >Priority: Major > Attachments: YARN-9132.1.patch, YARN-9132.2.patch, YARN-9132.3.patch > > > The manifest file in HDFS must be owned by YARN admin or YARN service user > only. This check helps to prevent loading of malware into node manager JVM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9132) Add file permission check for auxiliary services manifest file
[ https://issues.apache.org/jira/browse/YARN-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726284#comment-16726284 ] Billie Rinaldi commented on YARN-9132: -- Patch 3 performs recursive check for group and others write permission. > Add file permission check for auxiliary services manifest file > -- > > Key: YARN-9132 > URL: https://issues.apache.org/jira/browse/YARN-9132 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Billie Rinaldi >Priority: Major > Attachments: YARN-9132.1.patch, YARN-9132.2.patch, YARN-9132.3.patch > > > The manifest file in HDFS must be owned by YARN admin or YARN service user > only. This check helps to prevent loading of malware into node manager JVM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
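The intent of patch 3's recursive check — reject a manifest if any path component leading to it is writable by group or others — can be sketched with a short, self-contained model. This is only an illustration of the rule, not the actual patch code; the class and method names are hypothetical, and plain POSIX mode strings stand in for Hadoop's FsPermission objects.

```java
/**
 * Illustrative model of the YARN-9132 rule: an auxiliary-services manifest
 * (and every directory on the path to it) must not be writable by group or
 * others, so that only the YARN admin/service user can alter what the node
 * manager loads. Sketch only; names are hypothetical, not patch code.
 */
public class ManifestPermissionCheck {

    /**
     * @param mode a 9-character POSIX mode string such as "rw-r--r--"
     *             (owner/group/other triplets)
     * @return true if the group or other write bit is set
     */
    public static boolean isGroupOrOtherWritable(String mode) {
        // index 4 = group write bit, index 7 = other write bit
        return mode.charAt(4) == 'w' || mode.charAt(7) == 'w';
    }

    /**
     * Recursive check, flattened: modesAlongPath[0] is the root directory,
     * the last element is the manifest file itself. The manifest is safe
     * only if no component is group- or other-writable.
     */
    public static boolean isManifestSafe(String[] modesAlongPath) {
        for (String mode : modesAlongPath) {
            if (isGroupOrOtherWritable(mode)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A group-writable parent directory makes the manifest unsafe,
        // even when the file itself looks fine.
        System.out.println(isManifestSafe(
                new String[] {"rwxrwxr-x", "rw-r--r--"}));
        System.out.println(isManifestSafe(
                new String[] {"rwxr-xr-x", "rw-r--r--"}));
    }
}
```

The point of checking the whole path, not just the file, is that a group-writable parent directory lets a non-admin replace the manifest by renaming, which is the malware-loading vector the issue description calls out.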
[jira] [Commented] (YARN-9131) Document usage of Dynamic auxiliary services
[ https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726277#comment-16726277 ] Billie Rinaldi commented on YARN-9131: -- Patch 5 is documentation only. I moved the formatting fixes to YARN-9152. > Document usage of Dynamic auxiliary services > > > Key: YARN-9131 > URL: https://issues.apache.org/jira/browse/YARN-9131 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Billie Rinaldi >Priority: Major > Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch, > YARN-9131.4.patch, YARN-9131.5.patch > > > This is a follow up issue to document YARN-9075 for admin to control which > aux service to add or remove. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9131) Document usage of Dynamic auxiliary services
[ https://issues.apache.org/jira/browse/YARN-9131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-9131: - Attachment: YARN-9131.5.patch > Document usage of Dynamic auxiliary services > > > Key: YARN-9131 > URL: https://issues.apache.org/jira/browse/YARN-9131 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Billie Rinaldi >Priority: Major > Attachments: YARN-9131.1.patch, YARN-9131.2.patch, YARN-9131.3.patch, > YARN-9131.4.patch, YARN-9131.5.patch > > > This is a follow up issue to document YARN-9075 for admin to control which > aux service to add or remove. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9152) Auxiliary service REST API query does not return running services
[ https://issues.apache.org/jira/browse/YARN-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726275#comment-16726275 ] Billie Rinaldi commented on YARN-9152: -- It looks like the services were empty because there was an admin user check. Aux service name, version, and start time could be viewed by non-admin users, so I have attached a patch that removes that check. I also noticed an issue with json serialization for the auxiliaryservices endpoint, so I fixed that as well. > Auxiliary service REST API query does not return running services > - > > Key: YARN-9152 > URL: https://issues.apache.org/jira/browse/YARN-9152 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Billie Rinaldi >Priority: Major > Attachments: YARN-9152.1.patch > > > Auxiliary service is configured with: > {code} > { > "services": [ > { > "name": "mapreduce_shuffle", > "version": "2", > "configuration": { > "properties": { > "class.name": "org.apache.hadoop.mapred.ShuffleHandler", > "mapreduce.shuffle.transfer.buffer.size": "102400", > "mapreduce.shuffle.port": "13563" > } > } > } > ] > } > {code} > Node manager log shows the service is registered: > {code} > 2018-12-19 22:38:57,466 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: > Reading auxiliary services manifest hdfs:/tmp/aux.json > 2018-12-19 22:38:57,827 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: > Initialized auxiliary service mapreduce_shuffle > 2018-12-19 22:38:57,828 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: > Adding auxiliary service mapreduce_shuffle version 2 > {code} > REST API query shows: > {code} > $ curl --negotiate -u : > http://eyang-3.openstacklocal:8042/ws/v1/node/auxiliaryservices > {"services":{}} > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional 
commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9152) Auxiliary service REST API query does not return running services
[ https://issues.apache.org/jira/browse/YARN-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-9152: - Attachment: YARN-9152.1.patch > Auxiliary service REST API query does not return running services > - > > Key: YARN-9152 > URL: https://issues.apache.org/jira/browse/YARN-9152 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Billie Rinaldi >Priority: Major > Attachments: YARN-9152.1.patch > > > Auxiliary service is configured with: > {code} > { > "services": [ > { > "name": "mapreduce_shuffle", > "version": "2", > "configuration": { > "properties": { > "class.name": "org.apache.hadoop.mapred.ShuffleHandler", > "mapreduce.shuffle.transfer.buffer.size": "102400", > "mapreduce.shuffle.port": "13563" > } > } > } > ] > } > {code} > Node manager log shows the service is registered: > {code} > 2018-12-19 22:38:57,466 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: > Reading auxiliary services manifest hdfs:/tmp/aux.json > 2018-12-19 22:38:57,827 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: > Initialized auxiliary service mapreduce_shuffle > 2018-12-19 22:38:57,828 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: > Adding auxiliary service mapreduce_shuffle version 2 > {code} > REST API query shows: > {code} > $ curl --negotiate -u : > http://eyang-3.openstacklocal:8042/ws/v1/node/auxiliaryservices > {"services":{}} > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5168) Add port mapping handling when docker container use bridge network
[ https://issues.apache.org/jira/browse/YARN-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726205#comment-16726205 ] Hadoop QA commented on YARN-5168: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 9 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 24m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 12s{color} | {color:green} root: The patch generated 0 new + 909 unchanged - 7 fixed = 909 total (was 916) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 20s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 38s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 51s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} |
[jira] [Commented] (YARN-9108) FederationIntercepter merge home and second response local variable spell mistake
[ https://issues.apache.org/jira/browse/YARN-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726074#comment-16726074 ] Abhishek Modi commented on YARN-9108: - Thanks [~elgoiri] for review. I have uploaded a new patch with checkstyle fixes. Will update the patch with tests for checking ResourceRequest. > FederationIntercepter merge home and second response local variable spell > mistake > - > > Key: YARN-9108 > URL: https://issues.apache.org/jira/browse/YARN-9108 > Project: Hadoop YARN > Issue Type: Bug > Components: federation >Affects Versions: 3.3.0 >Reporter: Morty Zhong >Assignee: Abhishek Modi >Priority: Minor > Attachments: YARN-9108.001.patch, YARN-9108.002.patch > > > method 'mergeAllocateResponse' in class FederationIntercepter.java line 1315 > the left variable `par2` should be `par1` > {code:java} > if (par1 != null && par2 != null) { > par1.getResourceRequest().addAll(par2.getResourceRequest()); > par2.getContainers().addAll(par2.getContainers()); > } > {code} > should be > {code:java} > if (par1 != null && par2 != null) { > par1.getResourceRequest().addAll(par2.getResourceRequest()); > par1.getContainers().addAll(par2.getContainers());//edited line > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9108) FederationIntercepter merge home and second response local variable spell mistake
[ https://issues.apache.org/jira/browse/YARN-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726069#comment-16726069 ] Íñigo Goiri commented on YARN-9108: --- Thanks [~abmodi] for the patch. * Can you fix the checkstyles? * Can you add a check for other things like the ResourceRequest? > FederationIntercepter merge home and second response local variable spell > mistake > - > > Key: YARN-9108 > URL: https://issues.apache.org/jira/browse/YARN-9108 > Project: Hadoop YARN > Issue Type: Bug > Components: federation >Affects Versions: 3.3.0 >Reporter: Morty Zhong >Assignee: Abhishek Modi >Priority: Minor > Attachments: YARN-9108.001.patch, YARN-9108.002.patch > > > method 'mergeAllocateResponse' in class FederationIntercepter.java line 1315 > the left variable `par2` should be `par1` > {code:java} > if (par1 != null && par2 != null) { > par1.getResourceRequest().addAll(par2.getResourceRequest()); > par2.getContainers().addAll(par2.getContainers()); > } > {code} > should be > {code:java} > if (par1 != null && par2 != null) { > par1.getResourceRequest().addAll(par2.getResourceRequest()); > par1.getContainers().addAll(par2.getContainers());//edited line > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
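The concrete effect of the typo quoted in the description can be seen in a minimal, self-contained model: plain java.util lists stand in for the allocate-response container lists, and the class and method names are illustrative only, not the actual FederationInterceptor code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Minimal model of the YARN-9108 typo in mergeAllocateResponse: the home
 * response (par1) never receives the secondary response's containers;
 * instead par2's list is appended to itself, doubling it. Names here are
 * illustrative stand-ins, not real Hadoop types.
 */
public class MergeTypoDemo {

    /** Models the buggy line: par2.getContainers().addAll(par2.getContainers()). */
    static void mergeBuggy(List<String> par1, List<String> par2) {
        par2.addAll(par2);  // self-append: par2 doubles, par1 is untouched
    }

    /** Models the intended line: par1.getContainers().addAll(par2.getContainers()). */
    static void mergeFixed(List<String> par1, List<String> par2) {
        par1.addAll(par2);  // secondary containers merged into the home response
    }

    /** Returns "par1Size,par2Size" after the buggy merge. */
    public static String demoBuggy() {
        List<String> home = new ArrayList<>(Arrays.asList("c1"));
        List<String> secondary = new ArrayList<>(Arrays.asList("c2", "c3"));
        mergeBuggy(home, secondary);
        return home.size() + "," + secondary.size();
    }

    /** Returns "par1Size,par2Size" after the fixed merge. */
    public static String demoFixed() {
        List<String> home = new ArrayList<>(Arrays.asList("c1"));
        List<String> secondary = new ArrayList<>(Arrays.asList("c2", "c3"));
        mergeFixed(home, secondary);
        return home.size() + "," + secondary.size();
    }

    public static void main(String[] args) {
        System.out.println("buggy: " + demoBuggy());
        System.out.println("fixed: " + demoFixed());
    }
}
```

With the bug, the merged home response silently drops every container allocated by the secondary cluster, which is why the fix matters beyond being a spelling nit. (Note that ArrayList.addAll snapshots its argument via toArray first, so the self-append doubles the list rather than looping forever.)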
[jira] [Commented] (YARN-9149) yarn container -status misses logUrl when integrated with ATSv2
[ https://issues.apache.org/jira/browse/YARN-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726046#comment-16726046 ] Abhishek Modi commented on YARN-9149: - [~rohithsharma] I checked the code and found that id is only being set. {code} TimelineEvent tEvent = new TimelineEvent(); tEvent.setId(ContainerMetricsConstants.CREATED_IN_RM_EVENT_TYPE); tEvent.setTimestamp(createdTime); entity.addEvent(tEvent); {code} I will further debug it what's causing the issue. > yarn container -status misses logUrl when integrated with ATSv2 > --- > > Key: YARN-9149 > URL: https://issues.apache.org/jira/browse/YARN-9149 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Abhishek Modi >Priority: Major > > Post YARN-8303, yarn client can be integrated with ATSv2. But log url and > start and end time is printing data is wrong! > {code} > Container Report : > Container-Id : container_1545035586969_0001_01_01 > Start-Time : 0 > Finish-Time : 0 > State : COMPLETE > Execution-Type : GUARANTEED > LOG-URL : null > Host : localhost:25006 > NodeHttpAddress : localhost:25008 > Diagnostics : > {code} > # TimelineEntityV2Converter#convertToContainerReport set logUrl as *null*. > This need set for proper log url based on yarn.log.server.web-service.url > # TimelineEntityV2Converter#convertToContainerReport parses start/end time > wrongly. 
Comparison should happen with entityType but below code is doing > entityId > {code} > if (events != null) { > for (TimelineEvent event : events) { > if (event.getId().equals( > ContainerMetricsConstants.CREATED_IN_RM_EVENT_TYPE)) { > createdTime = event.getTimestamp(); > } else if (event.getId().equals( > ContainerMetricsConstants.FINISHED_IN_RM_EVENT_TYPE)) { > finishedTime = event.getTimestamp(); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5168) Add port mapping handling when docker container use bridge network
[ https://issues.apache.org/jira/browse/YARN-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726020#comment-16726020 ] Eric Yang commented on YARN-5168: - [~liuxun323] It looks like ContainerHistoryData is still passing exposedPorts as newInstance parameter. Can we update this as well? > Add port mapping handling when docker container use bridge network > -- > > Key: YARN-5168 > URL: https://issues.apache.org/jira/browse/YARN-5168 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jun Gong >Assignee: Xun Liu >Priority: Major > Labels: Docker > Attachments: YARN-5168.001.patch, YARN-5168.002.patch, > YARN-5168.003.patch, YARN-5168.004.patch, YARN-5168.005.patch, > YARN-5168.006.patch, YARN-5168.007.patch, YARN-5168.008.patch, > YARN-5168.009.patch, YARN-5168.010.patch, YARN-5168.011.patch, > YARN-5168.012.patch, YARN-5168.013.patch, YARN-5168.014.patch, > YARN-5168.015.patch, YARN-5168.016.patch, YARN-5168.017.patch, > YARN-5168.018.patch, YARN-5168.019.patch, exposedPorts1.png, exposedPorts2.png > > > YARN-4007 addresses different network setups when launching the docker > container. We need support port mapping when docker container uses bridge > network. > The following problems are what we faced: > 1. Add "-P" to map docker container's exposed ports to automatically. > 2. Add "-p" to let user specify specific ports to map. > 3. Add service registry support for bridge network case, then app could find > each other. It could be done out of YARN, however it might be more convenient > to support it natively in YARN. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9149) yarn container -status misses logUrl when integrated with ATSv2
[ https://issues.apache.org/jira/browse/YARN-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725975#comment-16725975 ] Rohith Sharma K S commented on YARN-9149: - Sure.. assigned to you! > yarn container -status misses logUrl when integrated with ATSv2 > --- > > Key: YARN-9149 > URL: https://issues.apache.org/jira/browse/YARN-9149 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Abhishek Modi >Priority: Major > > Post YARN-8303, yarn client can be integrated with ATSv2. But log url and > start and end time is printing data is wrong! > {code} > Container Report : > Container-Id : container_1545035586969_0001_01_01 > Start-Time : 0 > Finish-Time : 0 > State : COMPLETE > Execution-Type : GUARANTEED > LOG-URL : null > Host : localhost:25006 > NodeHttpAddress : localhost:25008 > Diagnostics : > {code} > # TimelineEntityV2Converter#convertToContainerReport set logUrl as *null*. > This need set for proper log url based on yarn.log.server.web-service.url > # TimelineEntityV2Converter#convertToContainerReport parses start/end time > wrongly. Comparison should happen with entityType but below code is doing > entityId > {code} > if (events != null) { > for (TimelineEvent event : events) { > if (event.getId().equals( > ContainerMetricsConstants.CREATED_IN_RM_EVENT_TYPE)) { > createdTime = event.getTimestamp(); > } else if (event.getId().equals( > ContainerMetricsConstants.FINISHED_IN_RM_EVENT_TYPE)) { > finishedTime = event.getTimestamp(); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-9149) yarn container -status misses logUrl when integrated with ATSv2
[ https://issues.apache.org/jira/browse/YARN-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S reassigned YARN-9149: ---
Assignee: Abhishek Modi (was: Rohith Sharma K S)
> yarn container -status misses logUrl when integrated with ATSv2
> ---
>
> Key: YARN-9149
> URL: https://issues.apache.org/jira/browse/YARN-9149
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Rohith Sharma K S
> Assignee: Abhishek Modi
> Priority: Major
>
> Post YARN-8303, the yarn client can be integrated with ATSv2, but the log url and the start and end times are printed wrongly!
> {code}
> Container Report :
> Container-Id : container_1545035586969_0001_01_01
> Start-Time : 0
> Finish-Time : 0
> State : COMPLETE
> Execution-Type : GUARANTEED
> LOG-URL : null
> Host : localhost:25006
> NodeHttpAddress : localhost:25008
> Diagnostics :
> {code}
> # TimelineEntityV2Converter#convertToContainerReport sets logUrl to *null*. This needs to be set to the proper log url based on yarn.log.server.web-service.url.
> # TimelineEntityV2Converter#convertToContainerReport parses the start/end time wrongly. The comparison should happen with the entityType, but the code below uses the entityId:
> {code}
> if (events != null) {
>   for (TimelineEvent event : events) {
>     if (event.getId().equals(
>         ContainerMetricsConstants.CREATED_IN_RM_EVENT_TYPE)) {
>       createdTime = event.getTimestamp();
>     } else if (event.getId().equals(
>         ContainerMetricsConstants.FINISHED_IN_RM_EVENT_TYPE)) {
>       finishedTime = event.getTimestamp();
>     }
>   }
> }
> {code}
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster
[ https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725927#comment-16725927 ] Jason Lowe commented on YARN-6523: --
Thanks for updating the patch! I think it is really close now.
NodeHeartbeatResponsePBImpl can handle the system credentials for apps collection more efficiently. Rather than making a copy of it and delaying until mergeLocalToBuilder is called to set it on the builder (which will also make a copy), it can be handled more like the token sequence number, where we just get it from and set it on the proto/builder when it is get/set on the PBImpl. For example:
{code}
@Override
public void setSystemCredentialsForApps(
    Collection<SystemCredentialsForAppsProto> systemCredentialsForAppsProto) {
  maybeInitBuilder();
  builder.clearSystemCredentialsForApps();
  if (systemCredentialsForAppsProto != null) {
    builder.addAllSystemCredentialsForApps(systemCredentialsForAppsProto);
  }
}

@Override
public Collection<SystemCredentialsForAppsProto> getSystemCredentialsForApps() {
  NodeHeartbeatResponseProtoOrBuilder p = viaProto ? proto : builder;
  return p.getSystemCredentialsForAppsList();
}
{code}
Other than that the patch looks good to me.
> Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: RM
> Affects Versions: 2.8.0, 2.7.3
> Reporter: Naganarasimha G R
> Assignee: Manikandan R
> Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch
>
> Currently, as part of the heartbeat response, the RM sets all applications' tokens even though all applications might not be active on the node.
> On top of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 8GB RAM configured for the RM.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6149) Allow port range to be specified while starting NM Timeline collector manager.
[ https://issues.apache.org/jira/browse/YARN-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725877#comment-16725877 ] Abhishek Modi commented on YARN-6149: -
[~varun_saxena] [~rohithsharma] [~vrushalic] Could you please review it?
> Allow port range to be specified while starting NM Timeline collector manager.
> --
>
> Key: YARN-6149
> URL: https://issues.apache.org/jira/browse/YARN-6149
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelineserver
> Reporter: Varun Saxena
> Assignee: Abhishek Modi
> Priority: Major
> Attachments: YARN-6149.001.patch
>
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5168) Add port mapping handling when docker container use bridge network
[ https://issues.apache.org/jira/browse/YARN-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-5168: --
Attachment: YARN-5168.019.patch
> Add port mapping handling when docker container use bridge network
> --
>
> Key: YARN-5168
> URL: https://issues.apache.org/jira/browse/YARN-5168
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Jun Gong
> Assignee: Xun Liu
> Priority: Major
> Labels: Docker
> Attachments: YARN-5168.001.patch, YARN-5168.002.patch, YARN-5168.003.patch, YARN-5168.004.patch, YARN-5168.005.patch, YARN-5168.006.patch, YARN-5168.007.patch, YARN-5168.008.patch, YARN-5168.009.patch, YARN-5168.010.patch, YARN-5168.011.patch, YARN-5168.012.patch, YARN-5168.013.patch, YARN-5168.014.patch, YARN-5168.015.patch, YARN-5168.016.patch, YARN-5168.017.patch, YARN-5168.018.patch, YARN-5168.019.patch, exposedPorts1.png, exposedPorts2.png
>
> YARN-4007 addresses different network setups when launching the docker container. We need to support port mapping when the docker container uses a bridge network.
> The problems we faced are the following:
> 1. Add "-P" to map the docker container's exposed ports automatically.
> 2. Add "-p" to let the user specify specific ports to map.
> 3. Add service registry support for the bridge network case, so that apps can find each other. This could be done outside of YARN; however, it might be more convenient to support it natively in YARN.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5168) Add port mapping handling when docker container use bridge network
[ https://issues.apache.org/jira/browse/YARN-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-5168: --
Attachment: (was: YARN-5168.019.patch)
> Add port mapping handling when docker container use bridge network
> --
>
> Key: YARN-5168
> URL: https://issues.apache.org/jira/browse/YARN-5168
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Jun Gong
> Assignee: Xun Liu
> Priority: Major
> Labels: Docker
> Attachments: YARN-5168.001.patch, YARN-5168.002.patch, YARN-5168.003.patch, YARN-5168.004.patch, YARN-5168.005.patch, YARN-5168.006.patch, YARN-5168.007.patch, YARN-5168.008.patch, YARN-5168.009.patch, YARN-5168.010.patch, YARN-5168.011.patch, YARN-5168.012.patch, YARN-5168.013.patch, YARN-5168.014.patch, YARN-5168.015.patch, YARN-5168.016.patch, YARN-5168.017.patch, YARN-5168.018.patch, exposedPorts1.png, exposedPorts2.png
>
> YARN-4007 addresses different network setups when launching the docker container. We need to support port mapping when the docker container uses a bridge network.
> The problems we faced are the following:
> 1. Add "-P" to map the docker container's exposed ports automatically.
> 2. Add "-p" to let the user specify specific ports to map.
> 3. Add service registry support for the bridge network case, so that apps can find each other. This could be done outside of YARN; however, it might be more convenient to support it natively in YARN.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6149) Allow port range to be specified while starting NM Timeline collector manager.
[ https://issues.apache.org/jira/browse/YARN-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725864#comment-16725864 ] Hadoop QA commented on YARN-6149: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s{color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 82m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-6149 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12952526/YARN-6149.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4dff8510e0f9 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ea621fa | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22934/testReport/ | | Max. process+thread count | 340 (vs.
[jira] [Commented] (YARN-9149) yarn container -status misses logUrl when integrated with ATSv2
[ https://issues.apache.org/jira/browse/YARN-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725851#comment-16725851 ] Abhishek Modi commented on YARN-9149: -
[~rohithsharma] Should I take this up if you haven't started working on it?
> yarn container -status misses logUrl when integrated with ATSv2
> ---
>
> Key: YARN-9149
> URL: https://issues.apache.org/jira/browse/YARN-9149
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Priority: Major
>
> Post YARN-8303, the yarn client can be integrated with ATSv2, but the log url and the start and end times are printed wrongly!
> {code}
> Container Report :
> Container-Id : container_1545035586969_0001_01_01
> Start-Time : 0
> Finish-Time : 0
> State : COMPLETE
> Execution-Type : GUARANTEED
> LOG-URL : null
> Host : localhost:25006
> NodeHttpAddress : localhost:25008
> Diagnostics :
> {code}
> # TimelineEntityV2Converter#convertToContainerReport sets logUrl to *null*. This needs to be set to the proper log url based on yarn.log.server.web-service.url.
> # TimelineEntityV2Converter#convertToContainerReport parses the start/end time wrongly. The comparison should happen with the entityType, but the code below uses the entityId:
> {code}
> if (events != null) {
>   for (TimelineEvent event : events) {
>     if (event.getId().equals(
>         ContainerMetricsConstants.CREATED_IN_RM_EVENT_TYPE)) {
>       createdTime = event.getTimestamp();
>     } else if (event.getId().equals(
>         ContainerMetricsConstants.FINISHED_IN_RM_EVENT_TYPE)) {
>       finishedTime = event.getTimestamp();
>     }
>   }
> }
> {code}
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9038) [CSI] Add ability to publish/unpublish volumes on node managers
[ https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725833#comment-16725833 ] Hadoop QA commented on YARN-9038: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 37s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 494 unchanged - 0 fixed = 495 total (was 494) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 9s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 40s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 33s{color} | {color:green} hadoop-yarn-services-core in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 44s{color} | {color:red} hadoop-yarn-csi in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings.
[jira] [Commented] (YARN-8925) Updating distributed node attributes only when necessary
[ https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725825#comment-16725825 ] Tao Yang commented on YARN-8925: -
Hi, [~cheersyang]. Perhaps it is because {{yarn.nodemanager.node-attributes.provider.fetch-interval-ms}} is not set? It defines the interval of fetching node attributes from the provider, and the default is 10 minutes.
> Updating distributed node attributes only when necessary
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: 3.2.1
> Reporter: Tao Yang
> Assignee: Tao Yang
> Priority: Major
> Labels: performance
> Attachments: YARN-8925.001.patch, YARN-8925.002.patch, YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch, YARN-8925.006.patch, YARN-8925.007.patch, YARN-8925.008.patch, YARN-8925.009.patch, YARN-8925.010.patch
>
> Currently, if distributed node attributes exist, they are updated in every heartbeat between the NM and RM even when there is no change. The updating process holds NodeAttributesManagerImpl#writeLock and can have an impact in a large cluster. We have found that the nodes UI of a large cluster opens slowly, with most of the time spent waiting for the lock in NodeAttributesManagerImpl. This update should be performed only when necessary to improve the performance of the related process.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
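For reference, the interval mentioned in the comment above is an ordinary yarn-site.xml property. A minimal sketch; the 600000 ms value simply spells out the 10-minute default cited above and is shown only for illustration:

```xml
<!-- yarn-site.xml: interval at which the NM re-fetches node attributes
     from the configured provider; the value shown is the stated default. -->
<property>
  <name>yarn.nodemanager.node-attributes.provider.fetch-interval-ms</name>
  <value>600000</value>
</property>
```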
[jira] [Commented] (YARN-8925) Updating distributed node attributes only when necessary
[ https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725717#comment-16725717 ] Weiwei Yang commented on YARN-8925: ---
Hi [~Tao Yang]
Today I applied the patch to a v3.2 cluster, using the config-based node attributes provider:
{code}
<property>
  <name>yarn.node-attribute.fs-store.root-dir</name>
  <value>/home/wwei/hadoop-3.2.0/hadoop-data/yarn/nodeattributes</value>
</property>
<property>
  <name>yarn.nodemanager.node-attributes.provider</name>
  <value>config</value>
</property>
<property>
  <name>yarn.nodemanager.node-attributes.provider.configured-node-attributes</name>
  <value>osType,STRING,redhat:osVersion,STRING,2.8</value>
</property>
<property>
  <name>yarn.nodemanager.node-attributes.resync-interval-ms</name>
  <value>1000</value>
</property>
{code}
When I updated the node attribute value of {{osVersion}} from 2.8 to 2.9, it was not updated to the RM. I checked via http://RM:8088/ws/v1/cluster/nodes. Could you please take a look? Thanks
> Updating distributed node attributes only when necessary
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: 3.2.1
> Reporter: Tao Yang
> Assignee: Tao Yang
> Priority: Major
> Labels: performance
> Attachments: YARN-8925.001.patch, YARN-8925.002.patch, YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch, YARN-8925.006.patch, YARN-8925.007.patch, YARN-8925.008.patch, YARN-8925.009.patch, YARN-8925.010.patch
>
> Currently, if distributed node attributes exist, they are updated in every heartbeat between the NM and RM even when there is no change. The updating process holds NodeAttributesManagerImpl#writeLock and can have an impact in a large cluster. We have found that the nodes UI of a large cluster opens slowly, with most of the time spent waiting for the lock in NodeAttributesManagerImpl. This update should be performed only when necessary to improve the performance of the related process.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9108) FederationIntercepter merge home and second response local variable spell mistake
[ https://issues.apache.org/jira/browse/YARN-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725713#comment-16725713 ] Abhishek Modi commented on YARN-9108: -
[~botong] [~elgoiri] [~giovanni.fumarola] Could you please review it?
> FederationIntercepter merge home and second response local variable spell mistake
> -
>
> Key: YARN-9108
> URL: https://issues.apache.org/jira/browse/YARN-9108
> Project: Hadoop YARN
> Issue Type: Bug
> Components: federation
> Affects Versions: 3.3.0
> Reporter: Morty Zhong
> Assignee: Abhishek Modi
> Priority: Minor
> Attachments: YARN-9108.001.patch
>
> In method {{mergeAllocateResponse}} of class FederationInterceptor.java, line 1315, the left-hand variable `par2` should be `par1`:
> {code:java}
> if (par1 != null && par2 != null) {
>   par1.getResourceRequest().addAll(par2.getResourceRequest());
>   par2.getContainers().addAll(par2.getContainers());
> }
> {code}
> should be
> {code:java}
> if (par1 != null && par2 != null) {
>   par1.getResourceRequest().addAll(par2.getResourceRequest());
>   par1.getContainers().addAll(par2.getContainers()); // edited line
> }
> {code}
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
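The copy-paste slip described above is easy to reproduce with plain lists: appending the secondary response's containers onto itself leaves the home response unchanged, so those containers are silently dropped from the merged result. A minimal self-contained sketch; `MergeDemo` and the string container names are hypothetical stand-ins for the real allocate-response objects:

```java
import java.util.ArrayList;
import java.util.List;

public class MergeDemo {

    // Buggy variant (par2.getContainers().addAll(par2.getContainers())):
    // par2 absorbs a copy of its own containers; par1 never grows.
    static List<String> mergeBuggy(List<String> par1, List<String> par2) {
        par2.addAll(new ArrayList<>(par2));
        return par1;
    }

    // Fixed variant (par1.getContainers().addAll(par2.getContainers())):
    // the home response absorbs the secondary response's containers.
    static List<String> mergeFixed(List<String> par1, List<String> par2) {
        par1.addAll(par2);
        return par1;
    }

    public static void main(String[] args) {
        List<String> buggy = mergeBuggy(
            new ArrayList<>(List.of("homeC1")),
            new ArrayList<>(List.of("secC1", "secC2")));
        List<String> fixed = mergeFixed(
            new ArrayList<>(List.of("homeC1")),
            new ArrayList<>(List.of("secC1", "secC2")));
        System.out.println(buggy.size()); // 1 -- secondary containers lost
        System.out.println(fixed.size()); // 3 -- all containers merged
    }
}
```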
[jira] [Updated] (YARN-9108) FederationIntercepter merge home and second response local variable spell mistake
[ https://issues.apache.org/jira/browse/YARN-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Modi updated YARN-9108: --
Attachment: YARN-9108.002.patch
> FederationIntercepter merge home and second response local variable spell mistake
> -
>
> Key: YARN-9108
> URL: https://issues.apache.org/jira/browse/YARN-9108
> Project: Hadoop YARN
> Issue Type: Bug
> Components: federation
> Affects Versions: 3.3.0
> Reporter: Morty Zhong
> Assignee: Abhishek Modi
> Priority: Minor
> Attachments: YARN-9108.001.patch, YARN-9108.002.patch
>
> In method {{mergeAllocateResponse}} of class FederationInterceptor.java, line 1315, the left-hand variable `par2` should be `par1`:
> {code:java}
> if (par1 != null && par2 != null) {
>   par1.getResourceRequest().addAll(par2.getResourceRequest());
>   par2.getContainers().addAll(par2.getContainers());
> }
> {code}
> should be
> {code:java}
> if (par1 != null && par2 != null) {
>   par1.getResourceRequest().addAll(par2.getResourceRequest());
>   par1.getContainers().addAll(par2.getContainers()); // edited line
> }
> {code}
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9038) [CSI] Add ability to publish/unpublish volumes on node managers
[ https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-9038: --
Attachment: YARN-9038.005.patch
> [CSI] Add ability to publish/unpublish volumes on node managers
> ---
>
> Key: YARN-9038
> URL: https://issues.apache.org/jira/browse/YARN-9038
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Priority: Major
> Labels: CSI
> Attachments: YARN-9038.001.patch, YARN-9038.002.patch, YARN-9038.003.patch, YARN-9038.004.patch, YARN-9038.005.patch
>
> We need to add the ability to publish volumes on node managers into a staging area under the NM's local dir, and then mount that path into the docker container to make it visible inside the container.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9108) FederationIntercepter merge home and second response local variable spell mistake
[ https://issues.apache.org/jira/browse/YARN-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725676#comment-16725676 ] Hadoop QA commented on YARN-9108: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 10 new + 0 unchanged - 0 fixed = 10 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | YARN-9108 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12952462/YARN-9108.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ab3772f1e8f5 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ea621fa | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22932/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22932/testReport/ | | Max. process+thread count | 306 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodem