[jira] [Commented] (YARN-8842) Update QueueMetrics with custom resource values
[ https://issues.apache.org/jira/browse/YARN-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648749#comment-16648749 ] Hadoop QA commented on YARN-8842: -

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 29s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 1m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 52s | trunk passed |
| +1 | compile | 9m 7s | trunk passed |
| +1 | checkstyle | 1m 30s | trunk passed |
| +1 | mvnsite | 1m 48s | trunk passed |
| +1 | shadedclient | 15m 32s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 55s | trunk passed |
| +1 | javadoc | 1m 28s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 30s | the patch passed |
| +1 | compile | 9m 4s | the patch passed |
| +1 | javac | 9m 4s | the patch passed |
| -0 | checkstyle | 1m 35s | hadoop-yarn-project/hadoop-yarn: The patch generated 27 new + 142 unchanged - 2 fixed = 169 total (was 144) |
| +1 | mvnsite | 1m 41s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 56s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 22s | the patch passed |
| +1 | javadoc | 1m 28s | the patch passed |
|| Other Tests ||
| +1 | unit | 3m 32s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 87m 10s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 45s | The patch does not generate ASF License warnings. |
| | | 176m 4s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8842 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943734/YARN-8842.010.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f5c545dfcab0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 28ca5c9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle |
[jira] [Commented] (YARN-8448) AM HTTPS Support
[ https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648742#comment-16648742 ] Hadoop QA commented on YARN-8448: -

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 26s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 11 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 1m 52s | Maven dependency ordering for branch |
| +1 | mvninstall | 20m 26s | trunk passed |
| +1 | compile | 15m 56s | trunk passed |
| +1 | checkstyle | 3m 29s | trunk passed |
| +1 | mvnsite | 5m 41s | trunk passed |
| +1 | shadedclient | 21m 14s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 8m 26s | trunk passed |
| +1 | javadoc | 4m 27s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 17s | the patch passed |
| +1 | compile | 14m 21s | the patch passed |
| +1 | cc | 14m 21s | the patch passed |
| +1 | javac | 14m 21s | root generated 0 new + 1317 unchanged - 10 fixed = 1317 total (was 1327) |
| -0 | checkstyle | 3m 27s | root: The patch generated 20 new + 625 unchanged - 8 fixed = 645 total (was 633) |
| +1 | mvnsite | 5m 37s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 11m 25s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 10m 2s | the patch passed |
| +1 | javadoc | 4m 22s | the patch passed |
|| Other Tests ||
| +1 | unit | 8m 52s | hadoop-common in the patch passed. |
| +1 | unit | 0m 53s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 31s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 2m 26s | hadoop-yarn-server-common in the patch passed. |
| -1 | unit | 18m 39s | hadoop-yarn-server-nodemanager in the patch failed. |
| +1 | unit | 1m 8s | hadoop-yarn-server-web-proxy in the patch passed. |
| -1 | unit | 94m 53s | hadoop-yarn-server-resourcemanager in the patch
[jira] [Updated] (YARN-8852) [Submarine] Add documentation for submarine installation details
[ https://issues.apache.org/jira/browse/YARN-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8852: -- Summary: [Submarine] Add documentation for submarine installation details (was: Add documentation for submarine installation details) > [Submarine] Add documentation for submarine installation details > > > Key: YARN-8852 > URL: https://issues.apache.org/jira/browse/YARN-8852 > Project: Hadoop YARN > Issue Type: Sub-task > Components: documentation, submarine >Reporter: Zac Zhou >Assignee: Zac Zhou >Priority: Major > Fix For: 3.2.0 > > Attachments: YARN-8852.001.patch, YARN-8852.002.patch, > YARN-8852.003.patch > > > To help beginners install and use Submarine, a detailed guide is > needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-8870) [Submarine] Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8870: -- Summary: [Submarine] Add submarine installation scripts (was: Add submarine installation scripts) > [Submarine] Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Critical > Attachments: YARN-8870.001.patch, YARN-8870.004.patch, > YARN-8870.005.patch > > > To reduce the difficulty of deploying the Hadoop {Submarine} runtime environment and its components (DNS, Docker, GPU, network, graphics card, operating-system kernel modifications, and so on), I developed this installation script. It provides one-click installation and can also be used to install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing] >
[jira] [Updated] (YARN-8842) Update QueueMetrics with custom resource values
[ https://issues.apache.org/jira/browse/YARN-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-8842: - Attachment: YARN-8842.010.patch > Update QueueMetrics with custom resource values > > > Key: YARN-8842 > URL: https://issues.apache.org/jira/browse/YARN-8842 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-8842.001.patch, YARN-8842.002.patch, > YARN-8842.003.patch, YARN-8842.004.patch, YARN-8842.005.patch, > YARN-8842.006.patch, YARN-8842.007.patch, YARN-8842.008.patch, > YARN-8842.009.patch, YARN-8842.010.patch > > > This is the 2nd dependent jira of YARN-8059. > As updating the metrics is an independent step from handling preemption, this > jira only deals with the queue metrics update of custom resources. > The following metrics should be updated: > * allocated resources > * available resources > * pending resources > * reserved resources > * aggregate seconds preempted
[jira] [Commented] (YARN-8842) Update QueueMetrics with custom resource values
[ https://issues.apache.org/jira/browse/YARN-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648649#comment-16648649 ] Szilard Nemeth commented on YARN-8842: -- I cleaned up the patch a little bit. Let's wait for the Jenkins results. > Update QueueMetrics with custom resource values > > > Key: YARN-8842 > URL: https://issues.apache.org/jira/browse/YARN-8842 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-8842.001.patch, YARN-8842.002.patch, > YARN-8842.003.patch, YARN-8842.004.patch, YARN-8842.005.patch, > YARN-8842.006.patch, YARN-8842.007.patch, YARN-8842.008.patch, > YARN-8842.009.patch, YARN-8842.010.patch > > > This is the 2nd dependent jira of YARN-8059. > As updating the metrics is an independent step from handling preemption, this > jira only deals with the queue metrics update of custom resources. > The following metrics should be updated: > * allocated resources > * available resources > * pending resources > * reserved resources > * aggregate seconds preempted
[jira] [Commented] (YARN-8448) AM HTTPS Support
[ https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648614#comment-16648614 ] Robert Kanter commented on YARN-8448: - Thanks for the feedback {{haibochen}}! Some comments: {quote}In KeyStoreTestUtil.bytesToKeyStore(), should we use try clause for the inputstream? {quote} This isn't actually necessary because the inputstream is a {{ByteArrayInputStream}}. Its read methods don't throw {{IOException}} because it's not doing any real IO (so there's nothing to handle here), and its {{close}} method is empty (so it does nothing). {quote}testLaunchContainerCopyFiles(boolean https) has a lot of if-statements which I think justified having two different methods, each calling some utility methods. Can you try to break it into two? Likewise for testContainerLaunch(boolean https). {quote} I think trying to split these out into utility methods will actually be harder to follow. While there are a number of if statements checking if it's using HTTPS or not, each check only does a small thing. For instance, in {{testLaunchContainerCopyFiles}}, the only real difference is whether or not we have the keystore and truststore, and so there's an if statement to write those files, to add them to the {{ContainerStartContext}}, and to check that they exist - the rest of the test is identical. {quote}In the host verifier, do the peer certificates come in any order? Right now the code assumes that the 1st one is always signed by the ca cert. {quote} I can't find any docs on the ordering, but it shouldn't matter anyways because both certs are signed with the same key (the CA key). You can see that we use the CA's public key to verify both certs in the custom {{X509TrustManager}}. 
The only reason we also verify (one of the) certs in the custom {{HostnameVerifier}} is because we need to determine if we should ignore the hostname of the certificate, or if we should fall back to the default one, which does check the hostname; this was a convenient way to check if it's one of our certs vs a real cert. {quote}KeyPairGenerator is created locally. Is there a security reason not to reuse KeyPairGenerator? {quote} From what I can tell, there's no security issue with reusing a {{KeyPairGenerator}}, but it's unclear if it's thread safe, so it's safest to assume it isn't. That seems to be what people suggest (see [here|https://stackoverflow.com/questions/25691151/is-keypairgenerator-generatekeypair-thread-safe] and [here|http://bouncy-castle.1462172.n4.nabble.com/is-key-generation-thread-safe-td4658456.html]). {quote}In the custom X509TrustManager, how would the defaultTrustManager verify the identity of the AM? {quote} If we determine that the cert was issued by the RM ({{issuedByRM==true}}), then at the end of the method, we check that the Subject is "CN=". That will only match if the RM connected to the AM it thought it was connecting to. The 009 patch: - Rebased on the latest trunk - Addressed the cc warning - Moved the secret keys to a new class, {{AMSecretKeys}}, in the {{hadoop-yarn-server-common}} module - Updated the wording of the config property in {{YarnConfiguration}} and {{yarn-default.xml}} - Changed the default to NONE, as per our offline discussion. In summary, we don't need to generate certificates in a default non-HTTPS environment. If the user sets up HTTPS for Hadoop, they can also change the config to LENIENT or STRICT to get the AM certificates. 
- Moved {{KEYSTORE_FILE_LOCATION}}, {{KEYSTORE_PASSWORD}}, {{TRUSTSTORE_FILE_LOCATION}}, and {{TRUSTSTORE_PASSWORD}} to {{ApplicationConstants}}, and added javadoc - {{DefaultLinuxContainerRuntime}} and {{DockerLinuxContainerRuntime}} are now more defensive about null-checking for _both_ the keystore and truststore (that shouldn't happen, but it is safer to check both in case that changes in the future for some reason) - In the C code, updated {{get_container_keystore_file}} and {{get_container_truststore_file}} to say "am container keystore" and "am container truststore" - Put back the exit code to {{OUT_OF_MEMORY}} for the string concat; I had misread this before - Removed the unnecessary checks before freeing possible NULL pointers - Renamed {{COULD_NOT_CREATE_KEYSTORE_FILE}} to {{COULD_NOT_CREATE_KEYSTORE_COPY}} and {{COULD_NOT_CREATE_TRUSTSTORE_FILE}} to {{COULD_NOT_CREATE_TRUSTSTORE_COPY}} because we're copying and it's more consistent with {{COULD_NOT_CREATE_SCRIPT_COPY}}. Also renamed {{COULD_NOT_CREATE_CREDENTIALS_FILE}} to {{COULD_NOT_CREATE_CREDENTIALS_COPY}} for the same reason. - Renamed {{logpath}} to {{container_log_path}} and {{logpathapp}} to {{app_log_path}} in {{test_launch_container}} - Added {{@VisibleForTesting}} to {{ProxyCA#getCaCert}} and {{ProxyCA#getCaKeyPair}} - Split up {{TestProxyCA#testCreateTrustManager}} and {{TestProxyCA#testCreateHostnameVerifier}}
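A quick aside on the {{ByteArrayInputStream}} point above: its no-op {{close()}} behavior can be demonstrated with a short, self-contained program (an illustration only, not code from the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ByteStreamDemo {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
        // close() is documented as a no-op for ByteArrayInputStream:
        // the stream holds no OS resources, and read() never does real IO.
        in.close();
        // Still readable after close(); prints 1, the first byte.
        System.out.println(in.read());
    }
}
```

This is why wrapping such a stream in try-with-resources adds nothing here, though it also does no harm.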
[jira] [Updated] (YARN-8448) AM HTTPS Support
[ https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-8448: Attachment: YARN-8448.009.patch > AM HTTPS Support > > > Key: YARN-8448 > URL: https://issues.apache.org/jira/browse/YARN-8448 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Major > Attachments: YARN-8448.001.patch, YARN-8448.002.patch, > YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, > YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch, > YARN-8448.009.patch > >
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-8870: - Target Version/s: 3.2.0 > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Critical > Attachments: YARN-8870.001.patch, YARN-8870.004.patch, > YARN-8870.005.patch > > > To reduce the difficulty of deploying the Hadoop {Submarine} runtime environment and its components (DNS, Docker, GPU, network, graphics card, operating-system kernel modifications, and so on), I developed this installation script. It provides one-click installation and can also be used to install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing] >
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-8870: - Priority: Critical (was: Major) > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Critical > Attachments: YARN-8870.001.patch, YARN-8870.004.patch, > YARN-8870.005.patch > > > To reduce the difficulty of deploying the Hadoop {Submarine} runtime environment and its components (DNS, Docker, GPU, network, graphics card, operating-system kernel modifications, and so on), I developed this installation script. It provides one-click installation and can also be used to install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing] >
[jira] [Commented] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication
[ https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648563#comment-16648563 ] Hadoop QA commented on YARN-8869: -

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 32s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 23m 31s | trunk passed |
| +1 | compile | 0m 32s | trunk passed |
| +1 | checkstyle | 0m 27s | trunk passed |
| +1 | mvnsite | 0m 35s | trunk passed |
| +1 | shadedclient | 13m 24s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 43s | trunk passed |
| +1 | javadoc | 0m 24s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 30s | the patch passed |
| +1 | compile | 0m 24s | the patch passed |
| +1 | javac | 0m 24s | the patch passed |
| -0 | checkstyle | 0m 14s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) |
| +1 | mvnsite | 0m 26s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 49s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 38s | the patch passed |
| +1 | javadoc | 0m 14s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 45s | hadoop-yarn-services-api in the patch passed. |
| +1 | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
| | | 59m 7s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8869 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943723/YARN-8869.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 197bdb113574 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ddc9649 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22171/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-api.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22171/testReport/ |
| Max. process+thread count | 544 (vs. ulimit of 1) |
| modules | C:
[jira] [Updated] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication
[ https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated YARN-8869: Attachment: YARN-8869.004.patch > YARN Service Client might not work correctly with RM REST API for Kerberos > authentication > - > > Key: YARN-8869 > URL: https://issues.apache.org/jira/browse/YARN-8869 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0, 3.1.1 >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Blocker > Attachments: YARN-8869.001.patch, YARN-8869.002.patch, > YARN-8869.003.patch, YARN-8869.004.patch > > > ApiServiceClient uses WebResource instead of Builder to pass the Kerberos > authorization header. This may not always work, because > WebResource.header() can bind the header to a new Builder instance in some > conditions, so the header is silently dropped. This article explains the details: > https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/
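The pitfall described above can be sketched with a simplified stand-in for the Jersey classes (the {{Resource}} and {{Builder}} names below are hypothetical, not the real Jersey API): {{header()}} returns a new builder carrying the header, so issuing the request from the original resource object silently drops it.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the pitfall: Resource.header() returns a *new* Builder
// holding the header; the Resource itself keeps no state, so headers set on a
// discarded Builder never reach the request.
class Resource {
    static class Builder {
        final Map<String, String> headers = new HashMap<>();
        Builder header(String k, String v) { headers.put(k, v); return this; }
        Map<String, String> send() { return headers; } // headers actually sent
    }
    Builder header(String k, String v) { return new Builder().header(k, v); }
    Map<String, String> send() { return new HashMap<>(); } // no headers kept
}

public class HeaderPitfall {
    public static void main(String[] args) {
        Resource resource = new Resource();

        // Broken: the Builder returned by header() is discarded, so the
        // Authorization header never reaches the request.
        resource.header("Authorization", "Negotiate ...");
        Map<String, String> sentWrong = resource.send();

        // Correct: keep the Builder and issue the request through it.
        Map<String, String> sentRight =
            resource.header("Authorization", "Negotiate ...").send();

        System.out.println(sentWrong.containsKey("Authorization")); // false
        System.out.println(sentRight.containsKey("Authorization")); // true
    }
}
```

The fix in the patch follows the second pattern: chain the request off the builder that carries the header instead of the original resource.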
[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648498#comment-16648498 ] Peter Bacsko commented on YARN-8872: LGTM +1 non-binding from me > Optimize collections used by Yarn JHS to reduce its memory > -- > > Key: YARN-8872 > URL: https://issues.apache.org/jira/browse/YARN-8872 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: YARN-8872.01.patch, YARN-8872.02.patch, > jhs-bad-collections.png > > > We analyzed, using jxray (www.jxray.com), a heap dump of a JHS running with a big > heap in large clusters, handling large MapReduce jobs. The heap is large > (over 32GB) and 21.4% of it is wasted due to various suboptimal Java > collections, mostly maps and lists that are either empty or contain only one > element. In such under-populated collections, a considerable amount of memory is > still used by just the internal implementation objects. See the attached > excerpt from the jxray report for the details. If certain collections are > almost always empty, they should be initialized lazily. If others almost > always have just 1 or 2 elements, they should be initialized with the > appropriate initial capacity of 1 or 2 (the default capacity is 16 for > HashMap and 10 for ArrayList). > Based on the attached report, we should do the following: > # {{FileSystemCounterGroup.map}} - initialize lazily > # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks > only have one or two attempts > # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity > # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it > contains one diagnostic message most of the time > # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to > use the more wasteful LinkedList here) and initialize with capacity 1. 
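The lazy-initialization and initial-capacity patterns recommended above can be sketched like this (the class and field names are illustrative, not the actual JHS classes):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the two patterns: lazy initialization for
// usually-empty maps, and small explicit initial capacities for lists
// that rarely hold more than one element.
public class TaskDiagnostics {
    // Usually empty: allocate only on first write instead of eagerly.
    private Map<String, Long> counters; // lazily initialized

    // Usually holds a single message: capacity 1 instead of the default 10.
    private final List<String> diagnostics = new ArrayList<>(1);

    public void setCounter(String name, long value) {
        if (counters == null) {
            // Capacity 2 keeps the backing array small for 1-2 entries.
            counters = new HashMap<>(2);
        }
        counters.put(name, value);
    }

    public Map<String, Long> getCounters() {
        return counters == null ? Collections.emptyMap() : counters;
    }

    public void addDiagnostic(String message) {
        diagnostics.add(message);
    }

    public List<String> getDiagnostics() {
        return diagnostics;
    }
}
```

With many millions of such objects on a 32GB heap, avoiding one empty HashMap (and the oversized default backing arrays) per instance is where the reported savings come from.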
[jira] [Commented] (YARN-8448) AM HTTPS Support
[ https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648434#comment-16648434 ] Haibo Chen commented on YARN-8448: -- For the ProxyCA related changes, I have a few questions/comments. 1) In the host verifier, do the peer certificates come in any order? Right now the code assumes that the 1st one is always signed by the ca cert. 2) Add @VisibleForTesting to getCaCert and getCaKeyPair? 3) KeyPairGenerator is created locally. Is there a security reason not to reuse KeyPairGenerator? 4) In the custom X509TrustManager, how would the defaultTrustManager verify the identity of the AM? 5) testCreateTrustManager() seems to have a lot of cases. Failing one would cause the following ones not to be executed. Can we split it into a few separate methods? Likewise for testCreateHostnameVerifier. > AM HTTPS Support > > > Key: YARN-8448 > URL: https://issues.apache.org/jira/browse/YARN-8448 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Major > Attachments: YARN-8448.001.patch, YARN-8448.002.patch, > YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, > YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch > >
[jira] [Commented] (YARN-8842) Update QueueMetrics with custom resource values
[ https://issues.apache.org/jira/browse/YARN-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648407#comment-16648407 ] Szilard Nemeth commented on YARN-8842: -- Hi [~wilfreds]! Thanks for the review! > Update QueueMetrics with custom resource values > > > Key: YARN-8842 > URL: https://issues.apache.org/jira/browse/YARN-8842 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-8842.001.patch, YARN-8842.002.patch, > YARN-8842.003.patch, YARN-8842.004.patch, YARN-8842.005.patch, > YARN-8842.006.patch, YARN-8842.007.patch, YARN-8842.008.patch, > YARN-8842.009.patch > > > This is the 2nd dependent jira of YARN-8059. > As updating the metrics is an independent step from handling preemption, this > jira only deals with the queue metrics update of custom resources. > The following metrics should be updated: > * allocated resources > * available resources > * pending resources > * reserved resources > * aggregate seconds preempted
[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648396#comment-16648396 ] Hadoop QA commented on YARN-8872: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 55s{color} | {color:orange} hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 6 new + 179 unchanged - 6 fixed = 185 total (was 185) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 16s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 10s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 50s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core | | | Inconsistent synchronization of org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.map; locked 66% of time Unsynchronized access at FileSystemCounterGroup.java:66% of time Unsynchronized access at FileSystemCounterGroup.java:[line 281] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue |
[jira] [Assigned] (YARN-8874) NM does not do any authorization in ContainerManagerImpl.signalToContainer()
[ https://issues.apache.org/jira/browse/YARN-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Modi reassigned YARN-8874: --- Assignee: Abhishek Modi > NM does not do any authorization in ContainerManagerImpl.signalToContainer() > > > Key: YARN-8874 > URL: https://issues.apache.org/jira/browse/YARN-8874 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 3.2.0 >Reporter: Haibo Chen >Assignee: Abhishek Modi >Priority: Major > 
[jira] [Updated] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated YARN-8872: - Attachment: YARN-8872.02.patch > Optimize collections used by Yarn JHS to reduce its memory > -- > > Key: YARN-8872 > URL: https://issues.apache.org/jira/browse/YARN-8872 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: YARN-8872.01.patch, YARN-8872.02.patch, > jhs-bad-collections.png > >
[jira] [Commented] (YARN-8710) Service AM should set a finite limit on NM container max retries
[ https://issues.apache.org/jira/browse/YARN-8710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648298#comment-16648298 ] Suma Shivaprasad commented on YARN-8710: Thanks [~billie.rinaldi] > Service AM should set a finite limit on NM container max retries > - > > Key: YARN-8710 > URL: https://issues.apache.org/jira/browse/YARN-8710 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-native-services >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-8710.1.patch, YARN-8710.2.patch > > > Container retries are currently set to a default of -1 in > AbstractProviderService.buildContainerRetry. If this is not overridden via > the service spec with a finite value for yarn.service.container-failure.retry.max, > this causes infinite NM retries for the container under the ALWAYS/ON_FAILURE > restart policy. Ideally it should retry a finite number of times on the same NM, > and subsequently the Service AM can retry on another node. > We can set this to a default value of 3.
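For illustration, a service spec could cap the per-NM retries via the property named above. This is a hedged sketch: `yarn.service.container-failure.retry.max` is the property discussed in this issue, while the service name, component name, and other field values here are placeholders.

```json
{
  "name": "example-service",
  "components": [
    {
      "name": "worker",
      "number_of_containers": 2,
      "restart_policy": "ALWAYS",
      "configuration": {
        "properties": {
          "yarn.service.container-failure.retry.max": "3"
        }
      }
    }
  ]
}
```

With a finite value like 3, the NM gives up after three local retries instead of retrying forever, letting the Service AM reschedule the container elsewhere.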
[jira] [Commented] (YARN-8448) AM HTTPS Support
[ https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648293#comment-16648293 ] Haibo Chen commented on YARN-8448: -- A few minor comments/questions about the c code changes. 1) In container-executor.c#get_container_keystore_file(), do you think it is more specific to say 'AM container keystore'? Similar question for get_container_truststore_file(). 2) in create_script_paths(), the error code when checking get_container_launcher_file() and such should be kept as OUT_OF_MEMORY, given they are just string concatenation. 3) Looks like we follow the C99 standard, so freeing a NULL pointer is not a problem, so we can remove the if (https == 1) check when freeing the related pointers. 4) Let's rename COULD_NOT_CREATE_KEYSTORE_FILE to COULD_NOT_CREATE_KEYSTORE_COPY and COULD_NOT_CREATE_TRUSTSTORE_FILE to COULD_NOT_CREATE_TRUSTSTORE_COPY, given the c code makes a copy. 5) In test_launch_container(), "logpath" => "container_log_path", "logpathapp" => "app_log_path" > AM HTTPS Support > > > Key: YARN-8448 > URL: https://issues.apache.org/jira/browse/YARN-8448 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Major > Attachments: YARN-8448.001.patch, YARN-8448.002.patch, > YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, > YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch > >
[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648286#comment-16648286 ] Peter Bacsko commented on YARN-8872: [~mi...@cloudera.com] yes, technically it's all correct. I was just speaking from a strictly theoretical POV. But I think we're on the same page. I vote for adding synchronized to {{size()}}. > Optimize collections used by Yarn JHS to reduce its memory > -- > > Key: YARN-8872 > URL: https://issues.apache.org/jira/browse/YARN-8872 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: YARN-8872.01.patch, jhs-bad-collections.png > >
[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648270#comment-16648270 ] Misha Dmitriev commented on YARN-8872: -- [~pbacsko] I think the situation here is the same as before. Both before and after this change, the {{size()}} method can never see {{map}} in a really inconsistent (half-constructed) state, because this object (a {{ConcurrentSkipListMap}}) is first fully constructed, and then the {{map}} reference is set to point to it. You are right that if {{findCounter()}} and {{size()}} run concurrently after that point, then the first method can keep adding objects to {{map}} and the second one may iterate over a smaller number of objects (or none at all) and return a smaller size. But the same thing could happen before this change. Note also that since this is a concurrent map implementation, iterating and adding/removing elements concurrently is safe (it will not cause exceptions). According to the javadoc of {{ConcurrentSkipListMap.values()}}, "The view's {{iterator}} is a "weakly consistent" iterator that will never throw [{{ConcurrentModificationException}}|https://docs.oracle.com/javase/7/docs/api/java/util/ConcurrentModificationException.html], and guarantees to traverse elements as they existed upon construction of the iterator, and may (but is not guaranteed to) reflect any modifications subsequent to construction." However, making {{size()}} synchronized will still make the code a little more predictable, at least in tests if nothing else. So I can make this change if you would like. 
> Optimize collections used by Yarn JHS to reduce its memory > -- > > Key: YARN-8872 > URL: https://issues.apache.org/jira/browse/YARN-8872 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: YARN-8872.01.patch, jhs-bad-collections.png > >
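The weakly consistent iterator behavior quoted from the {{ConcurrentSkipListMap.values()}} javadoc can be demonstrated with a small self-contained example. This is an illustration of the javadoc guarantee, not code from the patch; the class and method names are hypothetical.

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentSkipListMap;

// Demonstrates that a ConcurrentSkipListMap iterator never throws
// ConcurrentModificationException even when the map is mutated
// mid-iteration (a HashMap iterator would throw here).
public class WeaklyConsistentDemo {
    public static int iterateWhileAdding() {
        ConcurrentSkipListMap<String, Integer> map = new ConcurrentSkipListMap<>();
        map.put("m", 1);
        map.put("n", 2);
        int seen = 0;
        Iterator<Integer> it = map.values().iterator();
        while (it.hasNext()) {
            it.next();
            seen++;
            // Insert keys that sort BEFORE the iterator's current position;
            // the ascending iterator has already moved past them, so the
            // demo terminates deterministically. Keys inserted ahead of the
            // cursor may or may not be seen ("weakly consistent").
            map.put("a" + seen, seen);
        }
        return seen; // the 2 entries that existed when the iterator was created
    }
}
```

This matches the quoted guarantee: the iterator traverses the elements that existed at its construction, and concurrent inserts neither break it nor are guaranteed to be reflected.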
[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648248#comment-16648248 ] Peter Bacsko commented on YARN-8872: [~mi...@cloudera.com] I think we might have a problem with synchronization. The problem is that the {{map}} instance is created in a sync block, but the {{size()}} method accesses it in an unsynced method. Now theoretically it's possible that invoking {{size()}} runs concurrently with {{findCounter()}}, so it sees the map while it's being created, i.e. in an inconsistent state. It wouldn't be an issue with a collection which stores its size as an {{int}}, let's say, but here we do iterations and stuff inside {{ConcurrentSkipListMap}}. So basically we have to make {{size()}} synchronized as well. > Optimize collections used by Yarn JHS to reduce its memory > -- > > Key: YARN-8872 > URL: https://issues.apache.org/jira/browse/YARN-8872 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: YARN-8872.01.patch, jhs-bad-collections.png > >
[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648235#comment-16648235 ] Misha Dmitriev commented on YARN-8872: -- I would leave this decision to [~haibochen]. > Optimize collections used by Yarn JHS to reduce its memory > -- > > Key: YARN-8872 > URL: https://issues.apache.org/jira/browse/YARN-8872 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: YARN-8872.01.patch, jhs-bad-collections.png > >
[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648233#comment-16648233 ] Peter Bacsko commented on YARN-8872: Thanks for the patch [~mi...@cloudera.com]! To me it looks good. One thing though: I believe this should be a MAPREDUCE JIRA, because the JHS is an MR component. However, I don't have the permissions to move the ticket; maybe you have? Or perhaps [~haibochen] can do it. > Optimize collections used by Yarn JHS to reduce its memory > -- > > Key: YARN-8872 > URL: https://issues.apache.org/jira/browse/YARN-8872 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: YARN-8872.01.patch, jhs-bad-collections.png > >
[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory
[ https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648225#comment-16648225 ] Misha Dmitriev commented on YARN-8872: -- Regarding the problems in the Hadoop QA report above: # No tests are added because this is a performance improvement, with no change in functionality. # I believe there is no problem with synchronization in FileSystemCounterGroup.java. The {{map}} object is created lazily in the synchronized method {{findCounter()}}, so according to the Java Memory Model, once it's created, it's visible to all the code, both synchronized and unsynchronized. In other words, the unsynchronized method {{write()}} (line 281 that findbugs complains about) will never think that {{map == null}} if {{map}} has actually been initialized. In other respects it will work the same as before. > Optimize collections used by Yarn JHS to reduce its memory > -- > > Key: YARN-8872 > URL: https://issues.apache.org/jira/browse/YARN-8872 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: YARN-8872.01.patch, jhs-bad-collections.png > >
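The lazy-initialization pattern under discussion can be sketched in a few lines. This is a simplified, hedged sketch of the pattern, not the actual FileSystemCounterGroup code; the class name and method signatures here are hypothetical stand-ins.

```java
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch of lazy map creation inside a synchronized method. Because both
// methods synchronize on the same monitor, any thread that observes a
// non-null reference sees a fully constructed ConcurrentSkipListMap, and
// size() returns a count consistent with completed findCounter() calls.
public class LazyCounterGroup {
    private ConcurrentMap<String, Long> map; // created on first use

    public synchronized long findCounter(String name) {
        if (map == null) {
            map = new ConcurrentSkipListMap<>();
        }
        return map.computeIfAbsent(name, k -> 0L);
    }

    // Synchronized as suggested in the review, so the count is
    // predictable relative to concurrent findCounter() calls.
    public synchronized int size() {
        return map == null ? 0 : map.size();
    }
}
```

The memory win is that instances which never record a counter never allocate the map at all, while the synchronized accessors keep the visibility guarantees debated in the comments above.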
[jira] [Commented] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication
[ https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648203#comment-16648203 ] Hadoop QA commented on YARN-8869: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api: The patch generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 53s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | YARN-8869 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943672/YARN-8869.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3c342b7675be 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f1342cd | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/22169/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-api.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22169/testReport/ | | Max. process+thread count | 544 (vs. ulimit of 1) | | modules | C:
[jira] [Commented] (YARN-8778) Add Command Line interface to invoke interactive docker shell
[ https://issues.apache.org/jira/browse/YARN-8778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648179#comment-16648179 ] Eric Yang commented on YARN-8778: - [~Zian Chen] Any update? Mind if I take this one? > Add Command Line interface to invoke interactive docker shell > - > > Key: YARN-8778 > URL: https://issues.apache.org/jira/browse/YARN-8778 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Zian Chen >Assignee: Zian Chen >Priority: Major > Labels: Docker > > CLI will be the mandatory interface we provide for a user to use the > interactive docker shell feature. We will need to create a new class > “InteractiveDockerShellCLI” to read the command line into the servlet and pass > it all the way down to the docker executor. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication
[ https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated YARN-8869: Attachment: YARN-8869.003.patch > YARN Service Client might not work correctly with RM REST API for Kerberos > authentication > - > > Key: YARN-8869 > URL: https://issues.apache.org/jira/browse/YARN-8869 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0, 3.1.1 >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Blocker > Attachments: YARN-8869.001.patch, YARN-8869.002.patch, > YARN-8869.003.patch > > > ApiServiceClient uses WebResource instead of Builder to pass the Kerberos > authorization header. This may not always work, because > WebResource.header() can bind the header to a brand-new Builder instance under some > conditions. This article explains the details: > https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/
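The Jersey 1.x pitfall described above can be illustrated with a small, stdlib-only mock: like WebResource, each header() call on the mock hands back a fresh builder, so a header applied to the resource itself is silently dropped unless the returned builder is the object that is chained and used. The class and method names below are illustrative stand-ins, not the actual Jersey or ApiServiceClient code.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal mock of the Jersey 1.x WebResource/Builder relationship
// (illustrative only; the real classes live in com.sun.jersey.api.client).
class MockWebResource {
    // Each call to header() returns a brand-new builder seeded with only
    // that one header -- the resource itself never accumulates state.
    MockBuilder header(String name, String value) {
        MockBuilder b = new MockBuilder();
        b.headers.put(name, value);
        return b;
    }
}

class MockBuilder {
    final Map<String, String> headers = new HashMap<>();

    // Unlike the resource, the builder does accumulate headers.
    MockBuilder header(String name, String value) {
        headers.put(name, value);
        return this;
    }
}

public class JerseyHeaderPitfall {
    public static void main(String[] args) {
        MockWebResource resource = new MockWebResource();

        // Buggy pattern: discard the returned builder, keep using resource.
        resource.header("Authorization", "Negotiate abc123");
        MockBuilder request = resource.header("Accept", "application/json");
        // The Authorization header was bound to a different builder and lost.
        assert !request.headers.containsKey("Authorization");

        // Correct pattern: chain everything through the one returned builder.
        MockBuilder fixed = resource.header("Authorization", "Negotiate abc123")
                                    .header("Accept", "application/json");
        assert fixed.headers.containsKey("Authorization");
        assert fixed.headers.containsKey("Accept");
    }
}
```

This is why the patch moves the authorization header onto the Builder rather than the WebResource: only the builder returned by the first header() call retains the header.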
[jira] [Created] (YARN-8874) NM does not do any authorization in ContainerManagerImpl.signalToContainer()
Haibo Chen created YARN-8874: Summary: NM does not do any authorization in ContainerManagerImpl.signalToContainer() Key: YARN-8874 URL: https://issues.apache.org/jira/browse/YARN-8874 Project: Hadoop YARN Issue Type: Bug Components: nodemanager Affects Versions: 3.2.0 Reporter: Haibo Chen
[jira] [Commented] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648080#comment-16648080 ] Hadoop QA commented on YARN-8870: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 59s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 5s{color} | {color:red} The patch generated 273 new + 0 unchanged - 0 fixed = 273 total (was 0) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 38s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 48s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} hadoop-assemblies in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 2s{color} | {color:red} The patch generated 6 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}105m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | YARN-8870 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943652/YARN-8870.005.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml shellcheck shelldocs | | uname | Linux bcb0cfe2f52b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6e0e6da | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | shellcheck | v0.4.6 | | shellcheck |
[jira] [Commented] (YARN-8864) NM incorrectly logs container user as the user who sent a stop container request in its audit log
[ https://issues.apache.org/jira/browse/YARN-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648077#comment-16648077 ] Haibo Chen commented on YARN-8864: -- Thanks [~wilfreds] for the patch! I believe the user in startContainer request handling is also incorrect. Let's fix that too + the checkstyle issue. > NM incorrectly logs container user as the user who sent a stop container > request in its audit log > - > > Key: YARN-8864 > URL: https://issues.apache.org/jira/browse/YARN-8864 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 3.2.0 >Reporter: Haibo Chen >Assignee: Wilfred Spiegelenburg >Priority: Major > Attachments: YARN-8864.001.patch > > > As in ContainerManagerImpl.java > {code:java} > protected void stopContainerInternal(ContainerId containerID) > throws YarnException, IOException { > ... > NMAuditLogger.logSuccess(container.getUser(), > AuditConstants.STOP_CONTAINER, >"ContainerManageImpl", > containerID.getApplicationAttemptId().getApplicationId(), containerID); > }{code}
[jira] [Updated] (YARN-8864) NM incorrectly logs container user as the user who sent a start/stop container request in its audit log
[ https://issues.apache.org/jira/browse/YARN-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-8864: - Summary: NM incorrectly logs container user as the user who sent a start/stop container request in its audit log (was: NM incorrectly logs container user as the user who sent a stop container request in its audit log)
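The fix direction discussed in YARN-8864 can be sketched in a stdlib-only way: the audit entry should record the remote user who issued the start/stop request (in the real code, the UGI of the RPC caller), not the user the container runs as. The helper name and log format below are hypothetical stand-ins; the actual code lives in ContainerManagerImpl and NMAuditLogger.

```java
// Hypothetical, stdlib-only sketch of the audit-logging fix: the entry
// names the caller who sent the stop request, not the container's user.
public class AuditLogSketch {

    // containerUser is what the buggy version logged (container.getUser());
    // the corrected entry uses remoteUser, the principal who made the call.
    static String logStopContainer(String remoteUser, String containerUser,
                                   String containerId) {
        return String.format("USER=%s OPERATION=Stop Container TARGET=%s",
                remoteUser, containerId);
    }

    public static void main(String[] args) {
        // alice owns the container, but admin sent the stop request:
        // the audit log should attribute the operation to admin.
        String entry = logStopContainer("admin", "alice",
                "container_1539300000000_0001_01_000001");
        assert entry.contains("USER=admin");
        assert !entry.contains("alice");
    }
}
```

The same reasoning applies to the startContainer path mentioned in the comment: whoever the audit log is meant to answer "who did this?" for, it must log the request's origin, not the request's subject.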
[jira] [Commented] (YARN-8775) TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File modifications
[ https://issues.apache.org/jira/browse/YARN-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648076#comment-16648076 ] Hadoop QA commented on YARN-8775: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 50s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 0 new + 25 unchanged - 2 fixed = 25 total (was 27) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 52s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 82m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | YARN-8775 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943656/YARN-8775.003.patch | | Optional Tests | dupname asflicense compile javac
[jira] [Comment Edited] (YARN-8775) TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File modifications
[ https://issues.apache.org/jira/browse/YARN-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648045#comment-16648045 ] Haibo Chen edited comment on YARN-8775 at 10/12/18 3:44 PM: Thanks for the update, [~bsteinbach]. A few comments 1) Let's add a @VisibleForTesting annotation for checkDirs() to indicate it is used only by tests, not public API 2) In prepareDirToFail(String dir), I think we shall fail fast by throwing an IOException if the file deletion is unsuccessful. Otherwise, the following file.createNewFile() would be unsuccessful and the directory would remain, rather than being replaced by a file with the same name. That is, prepareDirToFail() won't do what it's supposed to do. It is harder to figure out why from the logs when unit tests fail. 3) "Make check interval high enough to never run it during the test." => "Set disk check interval high enough so that it never runs during the test." 4) The comment where the interval is set shall be updated. "// set disk health check interval to a small value (say 1 sec)." => "// set disk health check interval to a large value to effectively disable disk health check done internally in LocalDirsHandlerService" was (Author: haibochen): Thanks for the update, [~bsteinbach]. A few comments 1) Let's add a @VisibleForTesting annotation for checkDirs() to indicate it is used by tests, not public API 2) In prepareDirToFail(String dir), I think we shall fail fast by throwing an IOException if the file deletion is unsuccessful. Otherwise, the following file.createNewFile() would be unsuccessful and the directory would remain, rather than being replaced by a file with the same name. That is, prepareDirToFail() won't do what it's supposed to do. It is harder to figure out why from the logs. 
> TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File > modifications > -- > > Key: YARN-8775 > URL: https://issues.apache.org/jira/browse/YARN-8775 > Project: Hadoop YARN > Issue Type: Bug > Components: test, yarn >Affects Versions: 3.0.0 >Reporter: Antal Bálint Steinbach >Assignee: Antal Bálint Steinbach >Priority: Major > Attachments: YARN-8775.001.patch, YARN-8775.002.patch, > YARN-8775.003.patch > > > The test can fail sometimes when file operations were done during the check > done by the thread in _LocalDirsHandlerService._ > {code:java} > java.lang.AssertionError: NodeManager could not identify disk failure. > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239) > at > org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:202) > at > org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99) > Stderr > 2018-09-13 08:21:49,822 INFO [main] server.TestDiskFailures > (TestDiskFailures.java:prepareDirToFail(277)) - Prepared > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1 > to fail. > 2018-09-13 08:21:49,823 INFO [main] server.TestDiskFailures > (TestDiskFailures.java:prepareDirToFail(277)) - Prepared > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3 > to fail. 
> 2018-09-13 08:21:49,823 WARN [DiskHealthMonitor-Timer] > nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(283)) - > Directory > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1 > error, Not a directory: > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1, > removing from list of valid directories > 2018-09-13 08:21:49,824 WARN [DiskHealthMonitor-Timer] > localizer.ResourceLocalizationService > (ResourceLocalizationService.java:initializeLogDir(1329)) - Could not > initialize log dir > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3 > java.io.FileNotFoundException: Destination
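The second review comment above can be sketched with plain java.io: delete the directory, fail fast with an IOException if the deletion did not happen, then replace it with a regular file of the same name so the disk-health checker reports "Not a directory". This is a hedged reconstruction under those assumptions, not the actual TestDiskFailures code.

```java
import java.io.File;
import java.io.IOException;

public class PrepareDirToFailSketch {

    // Replaces a directory with a plain file of the same name so that the
    // NM's directory health check flags it as a failed disk.
    static void prepareDirToFail(String dir) throws IOException {
        File file = new File(dir);
        // Fail fast: if the delete silently fails, createNewFile() below
        // would also fail and the directory would survive, which makes the
        // eventual test failure much harder to diagnose from the logs.
        if (file.exists() && !file.delete()) {
            throw new IOException("Could not delete directory " + dir);
        }
        if (!file.createNewFile()) {
            throw new IOException("Could not create file in place of " + dir);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                "prepare-dir-to-fail-demo");
        dir.mkdirs();
        prepareDirToFail(dir.getAbsolutePath());
        // The path now exists but is a regular file, not a directory.
        assert dir.isFile() && !dir.isDirectory();
        dir.delete();  // clean up
    }
}
```

Note that File.delete() only removes an empty directory, which is why a silent failure here is plausible and worth surfacing immediately rather than letting a later step fail confusingly.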
[jira] [Commented] (YARN-8775) TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File modifications
[ https://issues.apache.org/jira/browse/YARN-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16648045#comment-16648045 ] Haibo Chen commented on YARN-8775: -- Thanks for the update, [~bsteinbach]. A few comments 1) Let's add a @VisibleForTesting annotation for checkDirs() to indicate it is used by tests, not public API 2) In prepareDirToFail(String dir), I think we shall fail fast by throwing an IOException if the file deletion is unsuccessful. Otherwise, the following file.createNewFile() would be unsuccessful and the directory would remain, rather than being replaced by a file with the same name. That is, prepareDirToFail() won't do what it's supposed to do. It is harder to figure out why from the logs.
[jira] [Commented] (YARN-8775) TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File modifications
[ https://issues.apache.org/jira/browse/YARN-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647982#comment-16647982 ] Antal Bálint Steinbach commented on YARN-8775: -- Hi [~haibochen], I agree. The 3rd patch contains your suggestions. Thanks for the review.
[jira] [Updated] (YARN-8775) TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File modifications
[ https://issues.apache.org/jira/browse/YARN-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Antal Bálint Steinbach updated YARN-8775: - Attachment: YARN-8775.003.patch > TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File > modifications > -- > > Key: YARN-8775 > URL: https://issues.apache.org/jira/browse/YARN-8775 > Project: Hadoop YARN > Issue Type: Bug > Components: test, yarn >Affects Versions: 3.0.0 >Reporter: Antal Bálint Steinbach >Assignee: Antal Bálint Steinbach >Priority: Major > Attachments: YARN-8775.001.patch, YARN-8775.002.patch, > YARN-8775.003.patch > > > The test can fail sometimes when file operations were done during the check > done by the thread in _LocalDirsHandlerService._ > {code:java} > java.lang.AssertionError: NodeManager could not identify disk failure. > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239) > at > org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:202) > at > org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99) > Stderr > 2018-09-13 08:21:49,822 INFO [main] server.TestDiskFailures > (TestDiskFailures.java:prepareDirToFail(277)) - Prepared > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1 > to fail. > 2018-09-13 08:21:49,823 INFO [main] server.TestDiskFailures > (TestDiskFailures.java:prepareDirToFail(277)) - Prepared > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3 > to fail. 
> 2018-09-13 08:21:49,823 WARN [DiskHealthMonitor-Timer] > nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(283)) - > Directory > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1 > error, Not a directory: > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1, > removing from list of valid directories > 2018-09-13 08:21:49,824 WARN [DiskHealthMonitor-Timer] > localizer.ResourceLocalizationService > (ResourceLocalizationService.java:initializeLogDir(1329)) - Could not > initialize log dir > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3 > java.io.FileNotFoundException: Destination exists and is not a directory: > /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3 > at > org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:515) > at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:496) > at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1081) > at > org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:178) > at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:205) > at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:747) > at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:743) > at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) > at 
org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:743) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.initializeLogDir(ResourceLocalizationService.java:1324) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.initializeLogDirs(ResourceLocalizationService.java:1318) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.access$000(ResourceLocalizationService.java:141) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$2.onDirsChanged(ResourceLocalizationService.java:269) > at >
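Flakiness of this kind usually comes from asserting disk health immediately after provoking the failure, before the DiskHealthMonitor-Timer thread has had a chance to run its check. A common remedy is to poll the condition until a timeout rather than asserting once (Hadoop's test utilities offer this idea as GenericTestUtils.waitFor). The helper below is a minimal, self-contained sketch of that pattern; the class and method names are illustrative, not the actual TestDiskFailures code.

```java
import java.util.function.BooleanSupplier;

/**
 * Illustrative polling helper for timing-dependent test assertions.
 * Hypothetical sketch, not the actual Hadoop test code.
 */
public class DiskHealthWaiter {

    /**
     * Polls {@code condition} every {@code intervalMs} milliseconds until it
     * returns true or {@code timeoutMs} elapses.
     *
     * @return true if the condition became true within the timeout
     */
    public static boolean waitFor(BooleanSupplier condition,
                                  long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        // Final check, in case the condition flipped between the last
        // sleep and the deadline expiring.
        return condition.getAsBoolean();
    }
}
```

A test using such a helper would replace a one-shot assertTrue with something like assertTrue(waitFor(() -> !dirsHandler.getLocalDirs().contains(badDir), 10_000, 100)), so the assertion tolerates the monitor thread's schedule instead of racing it.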
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8870: -- Attachment: YARN-8870.005.patch > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch, YARN-8870.004.patch, > YARN-8870.005.patch > > > To reduce the difficulty of deploying the Hadoop Submarine runtime > environment (DNS, Docker, GPU, network, graphics card, operating-system > kernel modifications, and other components), I developed these installation > scripts. They provide one-click installation and can also be used to > install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing] > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-8873) Add CSI java-based client library
[ https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-8873: -- Attachment: YARN-8873.001.patch > Add CSI java-based client library > - > > Key: YARN-8873 > URL: https://issues.apache.org/jira/browse/YARN-8873 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > Attachments: YARN-8873.001.patch > > > Build a java-based client to talk to CSI drivers through the CSI gRPC services.
[jira] [Created] (YARN-8873) Add CSI java-based client library
Weiwei Yang created YARN-8873: - Summary: Add CSI java-based client library Key: YARN-8873 URL: https://issues.apache.org/jira/browse/YARN-8873 Project: Hadoop YARN Issue Type: Sub-task Reporter: Weiwei Yang Assignee: Weiwei Yang Build a java-based client to talk to CSI drivers through the CSI gRPC services.
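The CSI specification defines its Identity, Controller, and Node services in csi.proto, so the real library would wrap gRPC stubs generated from that file. As a dependency-free illustration of the call surface such a client might expose, here is a sketch using plain Java interfaces and an in-memory fake in place of a gRPC-backed implementation. Every name below is an assumption chosen for illustration, not the actual YARN-8873 API.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical facade a CSI client library might expose (illustrative only). */
interface CsiClient {
    /** Identity service: returns the driver's advertised plugin name. */
    String getPluginInfo();
    /** Node service: mounts the volume at the target path; true on success. */
    boolean nodePublishVolume(String volumeId, String targetPath);
    /** Node service: unmounts the volume from the target path. */
    boolean nodeUnpublishVolume(String volumeId, String targetPath);
}

/** In-memory fake standing in for a gRPC-backed implementation. */
class FakeCsiClient implements CsiClient {
    private final String pluginName;
    private final Map<String, String> mounts = new HashMap<>();

    FakeCsiClient(String pluginName) {
        this.pluginName = pluginName;
    }

    @Override
    public String getPluginInfo() {
        // A real client would call the Identity service's GetPluginInfo RPC.
        return pluginName;
    }

    @Override
    public boolean nodePublishVolume(String volumeId, String targetPath) {
        // A real client would issue the Node service's NodePublishVolume RPC.
        return mounts.putIfAbsent(volumeId, targetPath) == null;
    }

    @Override
    public boolean nodeUnpublishVolume(String volumeId, String targetPath) {
        return mounts.remove(volumeId, targetPath);
    }
}
```

Keeping callers behind a narrow facade like this is one way such a library could hide channel management and generated stub types from the rest of YARN.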
[jira] [Commented] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647780#comment-16647780 ] Hadoop QA commented on YARN-8870: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 2s{color} | {color:red} The patch generated 273 new + 0 unchanged - 0 fixed = 273 total (was 0) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 13s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 31s{color} | {color:red} The patch generated 6 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | YARN-8870 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943628/YARN-8870.004.patch | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 3abd7270e8a1 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e36ae96 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.4.6 | | shellcheck | https://builds.apache.org/job/PreCommit-YARN-Build/22166/artifact/out/diff-patch-shellcheck.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22166/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-YARN-Build/22166/artifact/out/patch-asflicense-problems.txt | | Max. process+thread count | 400 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/22166/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. 
> Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch, YARN-8870.004.patch > > > In order to reduce the deployment difficulty of Hadoop > {Submarine} DNS, Docker, GPU, Network, graphics card, operating system kernel > modification and other components, I specially developed this installation > script to deploy Hadoop
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8870: -- Attachment: YARN-8870.004.patch > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch, YARN-8870.004.patch > > > To reduce the difficulty of deploying the Hadoop Submarine runtime > environment (DNS, Docker, GPU, network, graphics card, operating-system > kernel modifications, and other components), I developed these installation > scripts. They provide one-click installation and can also be used to > install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
[jira] [Updated] (YARN-7018) Interface for adding extra behavior to node heartbeats
[ https://issues.apache.org/jira/browse/YARN-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manikandan R updated YARN-7018: --- Attachment: YARN-7018.POC.004.patch > Interface for adding extra behavior to node heartbeats > -- > > Key: YARN-7018 > URL: https://issues.apache.org/jira/browse/YARN-7018 > Project: Hadoop YARN > Issue Type: New Feature > Components: resourcemanager >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Major > Attachments: YARN-7018.POC.001.patch, YARN-7018.POC.002.patch, > YARN-7018.POC.003.patch, YARN-7018.POC.004.patch > > > This JIRA tracks an interface for plugging in new behavior to node heartbeat > processing. Adding a formal interface for additional node heartbeat > processing would allow admins to configure new functionality that is > scheduler-independent without needing to replace the entire scheduler. For > example, both YARN-5202 and YARN-5215 had approaches where node heartbeat > processing was extended to implement new functionality that was essentially > scheduler-independent and could be implemented as a plugin with this > interface.
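The shape of such a plugin hook can be sketched in a few lines: the scheduler keeps a list of registered processors and invokes each one while handling a node heartbeat. Everything below (the interface name, the String node id, the dispatcher class) is a simplified illustration of the idea, not the actual API in the POC patches.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative plugin hook for extra per-heartbeat processing (hypothetical names). */
interface NodeHeartbeatPlugin {
    void onNodeHeartbeat(String nodeId);
}

/** Minimal stand-in for the scheduler-side dispatch of heartbeat plugins. */
class HeartbeatPluginDispatcher {
    private final List<NodeHeartbeatPlugin> plugins = new ArrayList<>();

    /** Plugins would typically be instantiated from configuration. */
    void register(NodeHeartbeatPlugin plugin) {
        plugins.add(plugin);
    }

    /** Conceptually called from the scheduler's NODE_UPDATE handling. */
    void fireNodeHeartbeat(String nodeId) {
        for (NodeHeartbeatPlugin p : plugins) {
            p.onNodeHeartbeat(nodeId);
        }
    }
}
```

The point of the indirection is that scheduler-independent features (the kind explored in YARN-5202 and YARN-5215) can be enabled by configuration as plugins rather than by replacing or forking the scheduler.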
[jira] [Comment Edited] (YARN-7018) Interface for adding extra behavior to node heartbeats
[ https://issues.apache.org/jira/browse/YARN-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647710#comment-16647710 ] Manikandan R edited comment on YARN-7018 at 10/12/18 9:40 AM: -- Thanks for your comments. Incorporated all suggestions and kept the plugin also in Scheduler itself to make it simple. Please review. If required, we can open new jira for implementations! May be CS to start with. was (Author: maniraj...@gmail.com): Thanks for your comments. Incorporated all suggestions and kept the plugin also in Scheduler itself to make it simple. Please review. If required, we can open new jira for implementations! > Interface for adding extra behavior to node heartbeats > -- > > Key: YARN-7018 > URL: https://issues.apache.org/jira/browse/YARN-7018 > Project: Hadoop YARN > Issue Type: New Feature > Components: resourcemanager >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Major > Attachments: YARN-7018.POC.001.patch, YARN-7018.POC.002.patch, > YARN-7018.POC.003.patch, YARN-7018.POC.004.patch > > > This JIRA tracks an interface for plugging in new behavior to node heartbeat > processing. Adding a formal interface for additional node heartbeat > processing would allow admins to configure new functionality that is > scheduler-independent without needing to replace the entire scheduler. For > example, both YARN-5202 and YARN-5215 had approaches where node heartbeat > processing was extended to implement new functionality that was essentially > scheduler-independent and could be implemented as a plugin with this > interface.
[jira] [Commented] (YARN-7018) Interface for adding extra behavior to node heartbeats
[ https://issues.apache.org/jira/browse/YARN-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647710#comment-16647710 ] Manikandan R commented on YARN-7018: Thanks for your comments. Incorporated all suggestions and kept the plugin also in Scheduler itself to make it simple. Please review. If required, we can open new jira for implementations! > Interface for adding extra behavior to node heartbeats > -- > > Key: YARN-7018 > URL: https://issues.apache.org/jira/browse/YARN-7018 > Project: Hadoop YARN > Issue Type: New Feature > Components: resourcemanager >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Major > Attachments: YARN-7018.POC.001.patch, YARN-7018.POC.002.patch, > YARN-7018.POC.003.patch > > > This JIRA tracks an interface for plugging in new behavior to node heartbeat > processing. Adding a formal interface for additional node heartbeat > processing would allow admins to configure new functionality that is > scheduler-independent without needing to replace the entire scheduler. For > example, both YARN-5202 and YARN-5215 had approaches where node heartbeat > processing was extended to implement new functionality that was essentially > scheduler-independent and could be implemented as a plugin with this > interface.
[jira] [Commented] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647699#comment-16647699 ] Hadoop QA commented on YARN-8870: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 3s{color} | {color:red} The patch generated 273 new + 0 unchanged - 0 fixed = 273 total (was 0) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 20s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 28s{color} | {color:red} The patch generated 6 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | YARN-8870 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943610/YARN-8870.003.patch | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux fee2ea3da2b7 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5da0422 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.4.6 | | shellcheck | https://builds.apache.org/job/PreCommit-YARN-Build/22165/artifact/out/diff-patch-shellcheck.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22165/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-YARN-Build/22165/artifact/out/patch-asflicense-problems.txt | | Max. process+thread count | 307 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/22165/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. 
> Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch > > > In order to reduce the deployment difficulty of Hadoop > {Submarine} DNS, Docker, GPU, Network, graphics card, operating system kernel > modification and other components, I specially developed this installation > script to deploy Hadoop \{Submarine} > runtime
[jira] [Comment Edited] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647678#comment-16647678 ] Xun Liu edited comment on YARN-8870 at 10/12/18 9:04 AM: - [cancel patch] The patch file was submitted but Jenkins did not process it. Cancelling the patch and retrying. was (Author: liuxun323): The patch file was submitted and the jenkins were not processed. Retry > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch > > > To reduce the difficulty of deploying the Hadoop Submarine runtime > environment (DNS, Docker, GPU, network, graphics card, operating-system > kernel modifications, and other components), I developed these installation > scripts. They provide one-click installation and can also be used to > install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8870: -- Attachment: (was: YARN-8870.001.patch) > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch > > > To reduce the difficulty of deploying the Hadoop Submarine runtime > environment (DNS, Docker, GPU, network, graphics card, operating-system > kernel modifications, and other components), I developed these installation > scripts. They provide one-click installation and can also be used to > install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8870: -- Attachment: (was: YARN-8870.002.patch) > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch > > > To reduce the difficulty of deploying the Hadoop Submarine runtime > environment (DNS, Docker, GPU, network, graphics card, operating-system > kernel modifications, and other components), I developed these installation > scripts. They provide one-click installation and can also be used to > install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8870: -- Attachment: (was: YARN-8870.003.patch) > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch > > > To reduce the difficulty of deploying the Hadoop Submarine runtime > environment (DNS, Docker, GPU, network, graphics card, operating-system > kernel modifications, and other components), I developed these installation > scripts. They provide one-click installation and can also be used to > install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8870: -- Attachment: (was: YARN-8870.003.patch) > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch, YARN-8870.002.patch, > YARN-8870.003.patch > > > To reduce the difficulty of deploying the Hadoop Submarine runtime > environment (DNS, Docker, GPU, network, graphics card, operating-system > kernel modifications, and other components), I developed these installation > scripts. They provide one-click installation and can also be used to > install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
[jira] [Updated] (YARN-8870) Add submarine installation scripts
[ https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xun Liu updated YARN-8870: -- Attachment: YARN-8870.003.patch > Add submarine installation scripts > -- > > Key: YARN-8870 > URL: https://issues.apache.org/jira/browse/YARN-8870 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xun Liu >Assignee: Xun Liu >Priority: Major > Attachments: YARN-8870.001.patch, YARN-8870.002.patch, > YARN-8870.003.patch > > > To reduce the difficulty of deploying the Hadoop Submarine runtime > environment (DNS, Docker, GPU, network, graphics card, operating-system > kernel modifications, and other components), I developed these installation > scripts. They provide one-click installation and can also be used to > install, uninstall, start, and stop individual components step by step. > > Design document: > [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads
[ https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647563#comment-16647563 ] Hudson commented on YARN-3879: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15187 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15187/]) YARN-3879 [Storage implementation] Create HDFS backing storage (vrushali: rev bca928d3c7b88f39e9bc1784889596f0b00964d4) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java > [Storage implementation] Create HDFS backing storage implementation for ATS > reads > - > > Key: YARN-3879 > URL: https://issues.apache.org/jira/browse/YARN-3879 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Tsuyoshi Ozawa >Assignee: Abhishek Modi >Priority: Major > Labels: YARN-5355, YARN-7055 > Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.3, 3.1.2 > > Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, > YARN-3879.002.patch, YARN-3879.003.patch, YARN-3879.004.patch, > YARN-3879.005.patch, YARN-3879.006.patch > > > Reader version of YARN-3841