[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324792#comment-16324792 ] Hudson commented on YARN-7468:

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13491 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13491/])
YARN-7468. Provide means for container network policy control. (Xuan (wangda: rev edcc3a95d5248883492f2960f4fd22e09612ee9c)
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/NetworkTagMappingManagerFactory.java
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/NetworkTagMappingJsonManager.java
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestNetworkPacketTaggingHandlerImpl.java
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNetworkTagMappingJsonManager.java
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/NetworkTagMappingManager.java
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/NetworkPacketTaggingHandlerImpl.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java

> Provide means for container network policy control
> --
>
> Key: YARN-7468
> URL: https://issues.apache.org/jira/browse/YARN-7468
> Project: Hadoop YARN
> Issue Type: Task
> Components: nodemanager
> Reporter: Clay B.
> Assignee: Xuan Gong
> Fix For: 3.1.0
>
> Attachments: YARN-7468.trunk.1.patch, YARN-7468.trunk.1.patch, YARN-7468.trunk.2.patch, YARN-7468.trunk.2.patch, YARN-7468.trunk.3.patch, YARN-7468.trunk.4.patch, YARN-7468.trunk.5.patch, [YARN-7468] [Design] Provide means for container network policy control.pdf
>
> To prevent data exfiltration from a YARN cluster, it would be very helpful to have "firewall" rules able to map to a user/queue's containers.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
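The committed NetworkTagMappingManagerFactory follows the usual Hadoop pattern for pluggable components: read an implementation class name from the configuration and instantiate it reflectively. A minimal self-contained sketch of that pattern is below; the config key name and the placeholder default class are illustrative assumptions, not the committed YARN values.

```java
import java.util.Map;

// Sketch of a config-driven factory. The key name
// "yarn.nodemanager.network-tag-mapping-manager.class" and the
// placeholder default below are assumptions for illustration only.
final class NetworkTagMappingFactorySketch {
    static final String MANAGER_CLASS_KEY =
        "yarn.nodemanager.network-tag-mapping-manager.class"; // assumed key

    static Object createManager(Map<String, String> conf) throws Exception {
        // Fall back to a stand-in default class when nothing is configured.
        String clazz = conf.getOrDefault(MANAGER_CLASS_KEY, "java.util.HashMap");
        return Class.forName(clazz).getDeclaredConstructor().newInstance();
    }
}
```

The real factory would additionally verify that the loaded class implements NetworkTagMappingManager before returning it.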
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16323093#comment-16323093 ] Wangda Tan commented on YARN-7468:

Patch looks good, +1, thanks [~xgong]. Will commit tomorrow if no objections.
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16322925#comment-16322925 ] genericqa commented on YARN-7468:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 10s | trunk passed |
| +1 | compile | 7m 29s | trunk passed |
| +1 | checkstyle | 0m 51s | trunk passed |
| +1 | mvnsite | 1m 5s | trunk passed |
| +1 | shadedclient | 10m 34s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 5s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 46s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 6m 28s | the patch passed |
| +1 | javac | 6m 28s | the patch passed |
| -0 | checkstyle | 0m 49s | hadoop-yarn-project/hadoop-yarn: The patch generated 3 new + 221 unchanged - 0 fixed = 224 total (was 221) |
| +1 | mvnsite | 1m 0s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 13s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 6s | the patch passed |
| +1 | javadoc | 0m 43s | the patch passed |
|| Other Tests ||
| +1 | unit | 0m 35s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 17m 31s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 77m 50s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7468 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12905727/YARN-7468.trunk.5.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a21adc54d2c0 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bc285da |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbug
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16322796#comment-16322796 ] Xuan Gong commented on YARN-7468:

[~leftnoteasy] Updated. Thanks
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321489#comment-16321489 ] Wangda Tan commented on YARN-7468:

[~xgong] thanks for updating the patch. There are still "parser" references left in the patch; could you update them? You can find them at https://issues.apache.org/jira/secure/attachment/12905564/YARN-7468.trunk.4.patch#file-4
Also, the javadoc warnings are related.
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321462#comment-16321462 ] genericqa commented on YARN-7468:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 47s | trunk passed |
| +1 | compile | 8m 12s | trunk passed |
| +1 | checkstyle | 1m 6s | trunk passed |
| +1 | mvnsite | 1m 17s | trunk passed |
| +1 | shadedclient | 12m 43s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 14s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 56s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 6s | the patch passed |
| +1 | compile | 7m 3s | the patch passed |
| +1 | javac | 7m 3s | the patch passed |
| -0 | checkstyle | 1m 17s | hadoop-yarn-project/hadoop-yarn: The patch generated 3 new + 221 unchanged - 0 fixed = 224 total (was 221) |
| +1 | mvnsite | 1m 17s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 6s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 27s | the patch passed |
| -1 | javadoc | 0m 26s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 3 new + 9 unchanged - 0 fixed = 12 total (was 9) |
|| Other Tests ||
| +1 | unit | 0m 40s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 17m 33s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 87m 4s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7468 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12905564/YARN-7468.trunk.4.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 23d11d4cc214 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provide
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321108#comment-16321108 ] Wangda Tan commented on YARN-7468:

Thanks [~xgong], a few more comments beyond the javadoc/findbugs warnings and UT failures.

1) Inside ResourceHandlerModule: to me, the following changes are incompatible:

{code}
String handler = conf.get(YarnConfiguration.NM_NETWORK_RESOURCE_HANDLER,
    YarnConfiguration.DEFAULT_NM_NETWORK_RESOURCE_HANDLER);
if (handler.equals(TrafficControlBandwidthHandlerImpl.class.getName())) {
  return getOutboundBandwidthResourceHandler(conf);
} else if (handler.equals(NetworkPacketTaggingHandlerImpl.class.getName())) {
  return getNetworkTaggingHandler(conf);
} else {
  throw new YarnRuntimeException(
      "Unsupported handler specified in the configuration:"
          + YarnConfiguration.NM_NETWORK_RESOURCE_HANDLER
          + ". The supported handler could be either "
          + NetworkPacketTaggingHandlerImpl.class.getName() + " or "
          + TrafficControlBandwidthHandlerImpl.class.getName() + ".");
}
{code}

The user now has to configure NM_NETWORK_RESOURCE_HANDLER in order to use TrafficControlBandwidthHandlerImpl. We should not touch the existing logic that initializes TrafficControlBandwidthHandlerImpl; instead, add a new config such as NM_NETWORK_TAG_PREFIX + ".enabled" to control the tagging implementation. Since the two classes cannot be used at the same time, an additional check needs to be added to ResourceHandlerModule to prevent that from happening.

2) A couple of renames:
- NM_NETWORK_TAG_MAPPING_PARSER to NM_NETWORK_TAG_MAPPING_MANAGER/CONVERTER (or any better name you prefer): this could be more than a parser of a text file. The related configs, factories, etc. need to be renamed as well.
- Since cgroups cannot accept an arbitrary String as a network tag, suggest renaming getNetworkTagID to getNetworkTagHexID.

3) Other minor comments:
- createNetworkTagMappingParser could be private.
- getBytesSentPerContainer should be removed.
- A couple of javadocs inside NetworkPacketTaggingHandlerImpl still mention "bandwidth"; they should be removed/updated.
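The mutually exclusive handler selection suggested in point 1 above can be sketched as follows: keep the existing bandwidth-handler wiring untouched and gate the new tagging handler behind its own boolean key. Both config key names in this sketch are illustrative assumptions, not the final YarnConfiguration constants.

```java
import java.util.Map;

// Sketch of mutually exclusive handler selection. The two key names
// below are assumptions for illustration, not committed YARN configs.
final class HandlerSelectionSketch {
    static final String TAGGING_ENABLED =
        "yarn.nodemanager.network-tagging.enabled";    // assumed key
    static final String BANDWIDTH_ENABLED =
        "yarn.nodemanager.resource.network.enabled";   // assumed key

    /** Returns the simple name of the handler to initialize, or null. */
    static String selectHandler(Map<String, String> conf) {
        boolean tagging =
            Boolean.parseBoolean(conf.getOrDefault(TAGGING_ENABLED, "false"));
        boolean bandwidth =
            Boolean.parseBoolean(conf.getOrDefault(BANDWIDTH_ENABLED, "false"));
        if (tagging && bandwidth) {
            // Both handlers manage the same cgroup net_cls hierarchy, so
            // they cannot be active on one NodeManager at the same time.
            throw new IllegalStateException(
                "NetworkPacketTaggingHandlerImpl and "
                + "TrafficControlBandwidthHandlerImpl cannot both be enabled.");
        }
        if (tagging) {
            return "NetworkPacketTaggingHandlerImpl";
        }
        if (bandwidth) {
            return "TrafficControlBandwidthHandlerImpl";
        }
        return null; // neither feature enabled
    }
}
```

With this shape, existing deployments that never set the tagging key keep their current bandwidth-handler behavior unchanged, which addresses the compatibility concern.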
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319490#comment-16319490 ] genericqa commented on YARN-7468:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 11s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 7s | trunk passed |
| +1 | compile | 7m 38s | trunk passed |
| +1 | checkstyle | 0m 56s | trunk passed |
| +1 | mvnsite | 2m 3s | trunk passed |
| +1 | shadedclient | 12m 56s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 15s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 1m 47s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 35s | the patch passed |
| +1 | compile | 6m 23s | the patch passed |
| +1 | javac | 6m 23s | the patch passed |
| -0 | checkstyle | 0m 56s | hadoop-yarn-project/hadoop-yarn: The patch generated 14 new + 221 unchanged - 0 fixed = 235 total (was 221) |
| +1 | mvnsite | 1m 55s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 12s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 24s | the patch passed |
| -1 | javadoc | 0m 23s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9) |
|| Other Tests ||
| -1 | unit | 0m 34s | hadoop-yarn-api in the patch failed. |
| +1 | unit | 3m 16s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 17m 59s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 91m 41s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7468 |
| JIRA Patch URL | https://issues.apache.org/jira/sec
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319394#comment-16319394 ] Xuan Gong commented on YARN-7468:

Thanks for the review. Attached a new patch addressing all the comments.
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16314127#comment-16314127 ] Wangda Tan commented on YARN-7468:

Thanks [~xgong],

1) Instead of reusing OutboundBandwidthResourceHandler, suggest implementing the tagging class directly from ResourceHandler, since OutboundBandwidthResourceHandler is an empty class.
2) In the configuration, suggest adding new configs under yarn.nodemanager.network-tagging.* and not touching the existing configs.
3) Similarly, inside ResourceHandlerModule, add a new method (like getNetworkTaggingHandler).
4) Inside NetworkPacketTaggingHandlerImpl, it looks like containerIdClassIdMap is not read by anyone; we can simplify the implementation a bit by removing it, and we may not need to do anything inside reacquireContainer either.
5) Suggestion for NetworkTagMappingParser: what we really need is not a parser but an abstraction that gets a classid from a Container. So I recommend:
- initial -> initialize
- getNetworkTagID: change the parameter from the username to {{org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container}}
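The abstraction suggested in point 5, a pluggable mapping from a container to a cgroup net_cls classid rather than a text-file parser, could look roughly like this. The method names follow the review comments; the one-method Container stand-in and the hard-coded user table are illustrative assumptions for a self-contained sketch, not the real NodeManager types.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the real NM Container interface (assumption: only the
// user is needed for this sketch).
interface Container {
    String getUser();
}

// The abstraction the review asks for: initialize from config, then map
// each container to the classid cgroups should tag its packets with.
interface NetworkTagMappingManager {
    /** Load the mapping source, e.g. a JSON file named in the config. */
    void initialize(Map<String, String> conf);

    /** Return the net_cls classid for this container's packets. */
    String getNetworkTagID(Container container);
}

// Toy implementation keyed on the container user; real code would read
// the user-to-classid table from a mapping file instead of hard-coding it.
class UserNetworkTagMapping implements NetworkTagMappingManager {
    private final Map<String, String> userToClassid = new HashMap<>();
    private String defaultClassid;

    @Override
    public void initialize(Map<String, String> conf) {
        userToClassid.put("alice", "0x10001"); // placeholder entries
        defaultClassid = "0x10009";
    }

    @Override
    public String getNetworkTagID(Container container) {
        return userToClassid.getOrDefault(container.getUser(), defaultClassid);
    }
}
```

Passing the whole Container (rather than just a username) keeps the door open for queue- or application-based mappings later without changing the interface.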
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293493#comment-16293493 ] genericqa commented on YARN-7468:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 10s | trunk passed |
| +1 | compile | 7m 24s | trunk passed |
| +1 | checkstyle | 1m 1s | trunk passed |
| +1 | mvnsite | 2m 2s | trunk passed |
| +1 | shadedclient | 12m 35s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 6s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 1m 28s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 39s | the patch passed |
| +1 | compile | 6m 27s | the patch passed |
| +1 | javac | 6m 27s | the patch passed |
| -0 | checkstyle | 0m 59s | hadoop-yarn-project/hadoop-yarn: The patch generated 5 new + 221 unchanged - 0 fixed = 226 total (was 221) |
| +1 | mvnsite | 1m 53s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 5s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 37s | the patch passed |
| -1 | javadoc | 0m 29s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) |
|| Other Tests ||
| +1 | unit | 0m 41s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 15s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 17m 53s | hadoop-yarn-server-nodemanager in the patch failed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 91m 3s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7468 |
| JIRA Patch URL | htt
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293279#comment-16293279 ] genericqa commented on YARN-7468: -

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 23s | Maven dependency ordering for branch |
| +1 | mvninstall | 30m 44s | trunk passed |
| +1 | compile | 13m 6s | trunk passed |
| +1 | checkstyle | 1m 48s | trunk passed |
| +1 | mvnsite | 2m 45s | trunk passed |
| +1 | shadedclient | 7m 30s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 28s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 2m 0s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 22s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 12s | the patch passed |
| +1 | compile | 12m 1s | the patch passed |
| +1 | javac | 12m 1s | the patch passed |
| -0 | checkstyle | 1m 43s | hadoop-yarn-project/hadoop-yarn: The patch generated 21 new + 215 unchanged - 0 fixed = 236 total (was 215) |
| +1 | mvnsite | 2m 38s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 16m 2s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 12s | the patch passed |
| -1 | javadoc | 0m 59s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) |
|| Other Tests ||
| -1 | unit | 1m 28s | hadoop-yarn-api in the patch failed. |
| -1 | unit | 22m 20s | hadoop-yarn-server-nodemanager in the patch failed. |
| -1 | asflicense | 1m 0s | The patch generated 3 ASF License warnings. |
| | | 128m 44s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
| | hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7468 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902434/YARN-7468.trunk.1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 5edaa74e9100 3.13.0-129-generic #178-Ubuntu SMP F
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283834#comment-16283834 ] Xuan Gong commented on YARN-7468: - uploaded a new doc. Please take a look. > Provide means for container network policy control > -- > > Key: YARN-7468 > URL: https://issues.apache.org/jira/browse/YARN-7468 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager >Reporter: Clay B. >Priority: Minor > Attachments: [YARN-7468] [Design] Provide means for container network > policy control.pdf > > > To prevent data exfiltration from a YARN cluster, it would be very helpful to > have "firewall" rules able to map to a user/queue's containers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16282899#comment-16282899 ] Xuan Gong commented on YARN-7468: - I am working on a design doc. Will upload it tomorrow.
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276159#comment-16276159 ] Xuan Gong commented on YARN-7468: - Thanks, [~clayb], for creating the Jira. In general, we are trying to isolate network access for applications launched by users/groups. Ideally, YARN should be able to isolate both egress and ingress network traffic for launched containers. As a first step, we focus only on egress isolation (we will look at ingress traffic in the future). For example, only privileged users would be allowed to copy sensitive data out of a cluster. [~clayb] has described many interesting use-cases from the user's perspective. From YARN's perspective:

* YARN will not/should not enforce isolation itself - admins should use their own tools, such as iptables.
* YARN should tag the traffic going out of YARN containers to enable DMZ-like use-cases.

Here we can follow in the footsteps of YARN-2140: using the same cgroups network classifier, we can filter the packets without having to use network namespaces.
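The cgroups network classifier mentioned here (net_cls, as used by YARN-2140 for traffic shaping) works by stamping every packet from a cgroup's processes with a 32-bit class identifier written to net_cls.classid; the tc-style handle `major:minor` packs into the hex form `0xAAAABBBB`, which admin-managed iptables/tc rules can then match. A minimal sketch of that encoding (the helper names are illustrative, not YARN code):

```python
def encode_classid(major, minor):
    """Pack a tc/net_cls handle 'major:minor' into the 32-bit value
    written to net_cls.classid (high 16 bits major, low 16 bits minor)."""
    if not (0 <= major <= 0xFFFF and 0 <= minor <= 0xFFFF):
        raise ValueError("handle components must fit in 16 bits")
    return (major << 16) | minor

def decode_classid(classid):
    """Inverse: recover the (major, minor) pair from a net_cls.classid value."""
    return classid >> 16, classid & 0xFFFF

# Handle 0x10:0x1 becomes 0x00100001 -- the value an admin's
# pre-configured firewall or tc rules would match on.
assert encode_classid(0x10, 0x1) == 0x00100001
assert decode_classid(0x00100001) == (0x10, 0x1)
```

Because the tag travels with the packets, the kernel-side filtering stays entirely in the admin's hands, matching the "YARN tags, admins enforce" split proposed above.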
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250526#comment-16250526 ] Vinod Kumar Vavilapalli commented on YARN-7468: ---

bq. This is not a request for full-scale software-defined-networking integration into YARN.

Glad you pointed this out! Though it will be interesting to see what such an integration would look like and what fundamental building blocks would be needed in YARN.

bq. 1A. We would setup iptables rules statically beforehand to ensure traffic for the various YARN agreed upon cgroup contexts, bridge devices or network namespaces could only flow where we want; we'd do this via out-of-band configuration management – no need for YARN to do this setup.

If these rules have to be static, they cannot be tied to specific apps, but only to more static concepts like user-name / group-name or queue name. The NM doesn't know the queue information, so maybe we should stick to user information. Of course, this means user information must be the same on all the machines in the YARN cluster; this is already a requirement in secure clusters.

bq. 2. Then, when a user submits a job, YARN would setup the OS control (cgroup, network namespace or the bridge interface) for those processes to match the user's name, a queue or some other deterministic handle. (We would use that handle for our configuration-managed matching iptables rules which would be pre-configured.)

I think we could use the same underlying Linux functionality as that of traffic shaping to tag the traffic from containers depending on admin-specified rules. To reuse YARN-2140, we could split the underlying related container-executor functionality into some sort of a networking module, similar to what YARN-6852 did with the GPU module (but not cgroups - that part still remains to be cleaned up).
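Keying the static rules on the user name, as suggested above, is essentially what the committed NetworkTagMappingJsonManager does: a JSON file maps users to traffic tags, with a default for everyone else. The sketch below illustrates the idea only; the file name, schema, and tag values are hypothetical, not the actual format used by the patch:

```python
import json

# Hypothetical mapping file: user name -> net_cls classid.
# The real NetworkTagMappingJsonManager schema is not shown in this thread.
MAPPING = """
{
  "users":   [{"name": "ingest", "classid": "0x10001"},
              {"name": "etl",    "classid": "0x10002"}],
  "default": "0x100ff"
}
"""

def classid_for_user(user, mapping_json):
    """Look up the traffic tag for a user, falling back to the default
    tag so every container's packets carry some classifier."""
    mapping = json.loads(mapping_json)
    for entry in mapping["users"]:
        if entry["name"] == user:
            return entry["classid"]
    return mapping["default"]

# The "etl" user's containers get their own tag; unknown users fall
# back to the default, so no traffic leaves untagged.
assert classid_for_user("etl", MAPPING) == "0x10002"
assert classid_for_user("alice", MAPPING) == "0x100ff"
```

A per-user default-plus-overrides lookup keeps the NM side static and config-driven, matching the constraint that the rules cannot be tied to individual apps.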
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16246778#comment-16246778 ] Allen Wittenauer commented on YARN-7468:

bq. Ideally, I'd have all the external endpoints secured to disallow this cluster from talking back except for very fine-grained allowances – it's a big world and I can't.

It also won't prevent DDoS attacks anyway. Plus, while most of the Hadoop ecosystem has ACL support, in most cases it's not particularly well implemented, and that is before the dynamic reconfiguration use case you've effectively presented here.

bq. In all fairness, I could use tcpspy and have it record the PID of processes today too

In the short term, it's probably easier to just force the use of LCE but with a wrapper around container-executor to set up the control information you want. Since the NM and c-e talk pretty much exclusively through a CLI (with all the security concerns that brings with it...), this setup should be pretty trivial to do and would give you all the information you need to set up extra cgroups or whatever. That said, c-e probably should be more pluggable to allow people to run their own bits. [I've been a proponent of c-e getting switched over to do dlopen()s vs. the current static compiling for features. This is a great example where it'd be extremely useful.]
[jira] [Commented] (YARN-7468) Provide means for container network policy control
[ https://issues.apache.org/jira/browse/YARN-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16246626#comment-16246626 ] Clay B. commented on YARN-7468: ---

For the driving use-case, I run secure clusters (secured on the inside to keep data from leaking back out); think of them as a drop box where users can build models with restricted data. (Or my favorite analogy is a [glovebox|https://en.wikipedia.org/wiki/File:Vacuum_Dry_Box.jpg] -- things can go in, but once in, they may be tainted and can't come out except by very special decontamination.) As such, I need to ensure that network-wise the cluster is reachable from/to the local HDFS'es, HBase, databases, etc. Yet only users permissioned for data-ingest jobs should reach out and pull data. We can vet, for example, Oozie jobs to ensure they do only as we expect, but how do we keep a user from reaching out to the same HBase or HDFS (when they otherwise have access) and storing data (or how do we allow a user to push reports to a simple service)? Ideally, I'd have all the external endpoints secured to disallow this cluster from talking back except for very fine-grained allowances -- but it's a big world and I can't. So, I'd like a way to set up firewall-rule equivalents on the secure cluster with some help from YARN.

The process I have in mind looks like the following workflow:

1A. We would set up iptables rules statically beforehand to ensure traffic for the various YARN agreed-upon cgroup contexts, bridge devices or network namespaces could only flow where we want; we'd do this via out-of-band configuration management -- no need for YARN to do this setup.

1B. A user interactively logging onto a machine would be placed into a default cgroup/network namespace so they are strictly limited. They would only be permitted to talk to the local YARN RM, HDFS namenodes, datanodes and Oozie for job submission. (This would prevent outbound scp and allow them only to submit a job or view logs.) This would be configured via our out-of-band configuration management as well.

2. Then, when a user submits a job, YARN would set up the OS control (cgroup, network namespace or the bridge interface) for those processes to match the user's name, a queue or some other deterministic handle. (We would use that handle for our configuration-managed matching iptables rules, which would be pre-configured.)

2A. An ingest user for a particular database would be permissioned to reach out to a remote database to do ingest, to the local HDFS to write data, and to the necessary YARN ports. (All external YARN jobs should have strict review, but even if we did not strictly review, connections could only flow to this one remote location -- that one database and what that one role account could read -- likely data only from one database.)

2B. A role account or human account for running ETL and ad-hoc intra-cluster jobs would not be allowed to talk off the cluster. (Jobs could be arbitrary and unreviewed -- but host-based network control - a software firewall - would limit that one user; yea!)

2C. An egress user responsible for writing scrubbed data back out (e.g. reports) could reach out to a specific remote service endpoint to publish data, to the local HDFS and to YARN. (All jobs should again get strict review, but the network controls would ensure data leakage from this account was limited to that one service and what that one role account could read on HDFS.)

3. Other uses could also use this technique:

3A. YARN already uses cgroups for traffic shaping, using {{tc}} to shape a container's traffic; see JIRAs around YARN-2140.

3B. In general, we could audit what traffic comes from which users and affect only bad flows, or bill back for network usage.

Today, I worry that if a pathological application reaches out to a service and knocks it down, I only know the machines and have to correlate {{netstat}} to see which user that is (or hope I have a strong correlation)[2]. If I have OS network control, I can ask the host-based firewall to log which users/devices (namespace bridges, etc.) are talking to that service's IP, to know who's running the pathological job and throttle it as opposed to killing it.

This is not a request for full-scale software-defined-networking integration into YARN. For example, I suspect many YARN operators would not have the organizational support or man-power to integrate something like the [Cloud Native Computing Foundation's Container Network Interface|https://github.com/containernetworking/cni/blob/master/SPEC.md] via [Project Calico|https://www.projectcalico.org/]. The hope is that this brings the "policy-driven network security" aspect of these projects within reach of those who operate their YARN clusters and the underlying OS.

[1]: http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/

[2]: In all fairness, I could use [{{tcpspy}}|https://dire
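The static, pre-configured rules of step 1A could be generated from an allow-list keyed to the YARN-agreed handle. The sketch below renders iptables commands that use the xt_cgroup match against a net_cls classid; the helper name, classid, and destination addresses are illustrative only, not part of any YARN patch, and the commands are rendered as strings rather than executed (they would need root):

```python
def egress_allowlist_rules(classid, allowed_dests):
    """Render the static iptables commands an admin's configuration
    management might pre-install: permit egress from one net_cls
    classid only to an allow-list of destinations, then drop the rest."""
    rules = [
        f"iptables -A OUTPUT -m cgroup --cgroup {classid:#010x} -d {dest} -j ACCEPT"
        for dest in allowed_dests
    ]
    # Final catch-all: anything from this tag not explicitly allowed is dropped.
    rules.append(f"iptables -A OUTPUT -m cgroup --cgroup {classid:#010x} -j DROP")
    return rules

# E.g. an ingest user's tag may only reach the remote database and local HDFS.
rules = egress_allowlist_rules(0x100001, ["192.0.2.10/32", "10.0.0.0/8"])
for r in rules:
    print(r)
```

Because the ACCEPT rules precede the DROP, a container tagged with this classid can reach only the listed destinations, which is exactly the 2A/2B/2C split described above, enforced entirely outside YARN.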