[jira] [Commented] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141133#comment-16141133 ] Li Lu commented on YARN-6999: - Patch LGTM. The patch is trivial for unit tests. Findbugs warning appears to be irrelevant. I'll wait for ~24 hrs before commit. > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -- > > Key: YARN-6999 > URL: https://issues.apache.org/jira/browse/YARN-6999 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.0.0-beta1 > Environment: All operating systems. >Reporter: Linlin Zhou >Assignee: Linlin Zhou >Priority: Minor > Labels: beginner > Fix For: 3.0.0-beta1 > > Attachments: yarn-6999.002.patch, yarn-6999.003.patch, yarn-6999.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > According to Setting up a Single Node Cluster > [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html], > we would still fail to run the MapReduce example job. Due to a security > fix, YARN uses the user's environment variables to initialize, and the user's environment > variables usually don't include MapReduce-related settings. So we need to > add the related config in etc/hadoop/mapred-site.xml manually. Currently the > log only reports the error: > Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster, without any suggestion on how to > solve it. I want to add a useful suggestion to the log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
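For context, the etc/hadoop/mapred-site.xml additions the description alludes to typically look like the following. This is a sketch based on the Hadoop 3.x single-node setup documentation, not taken from the patch; the value assumes MapReduce lives under ${HADOOP_HOME}, which may differ per install:

```xml
<!-- Tell the MR AppMaster and tasks where MapReduce is installed,
     since YARN no longer inherits these settings from its own environment. -->
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
```

Without these, the AM container's classpath lacks the MapReduce jars, which is exactly what produces the "Could not find or load main class ... MRAppMaster" error.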
[jira] [Commented] (YARN-7051) FifoIntraQueuePreemptionPlugin can get concurrent modification exception
[ https://issues.apache.org/jira/browse/YARN-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141115#comment-16141115 ] Sunil G commented on YARN-7051: --- Thanks [~eepayne] for the clarification. Adding to the same thought, {{getAllApplications}} in LeafQueue then also has to be under the readlock, correct? > FifoIntraQueuePreemptionPlugin can get concurrent modification exception > > > Key: YARN-7051 > URL: https://issues.apache.org/jira/browse/YARN-7051 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler, scheduler preemption, yarn >Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha3 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Critical > Attachments: YARN-7051.001.patch, YARN-7051.002.patch > > > {{FifoIntraQueuePreemptionPlugin#calculateUsedAMResourcesPerQueue}} has the > following code: > {code} > Collection runningApps = leafQueue.getApplications(); > Resource amUsed = Resources.createResource(0, 0); > for (FiCaSchedulerApp app : runningApps) { > {code} > {{runningApps}} is unmodifiable but not concurrent. This caused the > preemption monitor thread to crash in the RM in one of our clusters. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
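The failure mode in this issue ("unmodifiable but not concurrent") can be reproduced in isolation: an unmodifiable wrapper is only a view, so mutating the backing collection while iterating the view still fails fast. A minimal single-threaded sketch (the real crash involves a scheduler thread mutating the application list while the preemption monitor iterates it):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Returns true if iterating an unmodifiable *view* fails when the
    // backing list is mutated mid-iteration (the YARN-7051 failure mode).
    static boolean triggersCme() {
        List<String> backing = new ArrayList<>(Arrays.asList("app1", "app2", "app3"));
        // Unmodifiable wrapper: callers can't mutate it, but it is NOT a copy
        // and NOT a concurrent collection.
        Collection<String> runningApps = Collections.unmodifiableList(backing);
        try {
            for (String app : runningApps) {
                backing.add("app4"); // stands in for another thread adding an app
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(triggersCme()); // true
    }
}
```

This is why the discussion above turns to holding the LeafQueue readlock (or taking a copy) around the iteration rather than relying on the unmodifiable wrapper.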
[jira] [Resolved] (YARN-5244) Documentation required for DNS Server implementation
[ https://issues.apache.org/jira/browse/YARN-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha resolved YARN-5244. - Resolution: Fixed plugged into the site documentation and committed > Documentation required for DNS Server implementation > > > Key: YARN-5244 > URL: https://issues.apache.org/jira/browse/YARN-5244 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Maron >Assignee: Jonathan Maron > Attachments: dns overview.png, dns record creation.jpeg, dns record > removal.jpeg, yarn_dns_server.md > > > The DNS server requires documentation describing its functionality etc -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7079) to support nodemanager ports management
[ https://issues.apache.org/jira/browse/YARN-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141072#comment-16141072 ] 田娟娟 commented on YARN-7079: --- Yeah, available ports management in the NM is also enforced here. Available ports are checked against the requested ports before running containers. > to support nodemanager ports management > - > > Key: YARN-7079 > URL: https://issues.apache.org/jira/browse/YARN-7079 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: 田娟娟 > Attachments: YARN_7079.001.patch > > > Just like vcores and memory, ports are also important resource > information for job allocation. So we add ports management logic to YARN. > It can satisfy user jobs' port requests, and never allocates two jobs (with the > same port requirement) to one machine. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
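The allocation check described in this comment amounts to verifying that every port a job requests is still free on the candidate node before placing the container there. A minimal sketch of that predicate (the name canAllocate is hypothetical, not from the attached patch):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PortCheck {
    // A node can host the container only if every requested port is
    // currently available on that node.
    static boolean canAllocate(Set<Integer> availablePorts, Set<Integer> requestedPorts) {
        return availablePorts.containsAll(requestedPorts);
    }

    public static void main(String[] args) {
        Set<Integer> available = new HashSet<>(Arrays.asList(8080, 8081, 9000));
        System.out.println(canAllocate(available, new HashSet<>(Arrays.asList(8080))));       // true
        System.out.println(canAllocate(available, new HashSet<>(Arrays.asList(8080, 7000)))); // false
    }
}
```

Treating ports this way mirrors vcores/memory accounting, except ports are a set resource (specific identities matter) rather than a scalar quantity.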
[jira] [Updated] (YARN-7099) ResourceHandlerModule.parseConfiguredCGroupPath only works for privileged yarn users.
[ https://issues.apache.org/jira/browse/YARN-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-7099: - Attachment: YARN-7099.000.patch > ResourceHandlerModule.parseConfiguredCGroupPath only works for privileged > yarn users. > - > > Key: YARN-7099 > URL: https://issues.apache.org/jira/browse/YARN-7099 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-7099.000.patch > > > canWrite is failing below: > {code} > if (candidate.isDirectory() && candidate.canWrite()) { > pathSubsystemMappings.put(candidate.getAbsolutePath(), cgroupList); > } else { > LOG.warn("The following cgroup is not a directory or it is not" > + " writable" + candidate.getAbsolutePath()); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5244) Documentation required for DNS Server implementation
[ https://issues.apache.org/jira/browse/YARN-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha reassigned YARN-5244: --- Assignee: Jonathan Maron > Documentation required for DNS Server implementation > > > Key: YARN-5244 > URL: https://issues.apache.org/jira/browse/YARN-5244 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Maron >Assignee: Jonathan Maron > Attachments: dns overview.png, dns record creation.jpeg, dns record > removal.jpeg, yarn_dns_server.md > > > The DNS server requires documentation describing its functionality etc -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5244) Documentation required for DNS Server implementation
[ https://issues.apache.org/jira/browse/YARN-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5244: Target Version/s: yarn-native-services > Documentation required for DNS Server implementation > > > Key: YARN-5244 > URL: https://issues.apache.org/jira/browse/YARN-5244 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Maron >Assignee: Jonathan Maron > Attachments: dns overview.png, dns record creation.jpeg, dns record > removal.jpeg, yarn_dns_server.md > > > The DNS server requires documentation describing its functionality etc -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7072) Add a new log aggregation file format controller
[ https://issues.apache.org/jira/browse/YARN-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141034#comment-16141034 ] Wangda Tan commented on YARN-7072: -- Thanks [~xgong], Comments: Overall suggestion: - Please check appendable when LogAggregationIndexFileFormat is chosen. - Suggest putting all TFile controller related implementation in ...filecontroller.tfile, and all Indexed controller impl in ...filecontroller.ifile (or some better name) - IndexedFileAggregatedLogsBlock is a little bit lengthy, suggest moving it to a separate file and breaking down the render() method. 1) {{LogAggregationIndexedFileController#initializeWriter}}. IIUC, current logic is: a. Rolling aggregation appends to the remote file every time. b. There's a separate checksum file which records the last succeeded write location. It means the last write succeeded if the checksum file doesn't exist. c. When the checksum file does not exist, write the last succeeded watermark. Otherwise read the meta information out. Questions/comments: - For c. I think we should read meta information from the original file as well when the checksum file does not exist (when we are doing rolling aggregation and the last aggregation succeeded, since we delete the checksum file every time aggregation succeeds; see {{postWrite}}). - {{Path remoteLogFile}} should be final. - IIUC, the dummyBytes is a separator so we know what the last succeeded write is. If so, probably a simple "\n" is not enough. - Renames: fsDataOutputStream => checksumFileOutputStream, fsDataInputStream => checksumFileInputStream. 2) {{LogAggregationIndexedFileController#write}} - It's better to make the following warning log more specific: {code} if(fileLength < newLength) { LOG.warn("Aggregated logs truncated by approximately "+ (newLength-fileLength) +" bytes."); } {code} For example, we can report: "because the log file was modified during aggregation, it might be truncated by X bytes." 
- When an IOException is caught, it's better to log the full stack trace to the log file instead of only the message. {code} outputStreamState.getOutputStream().write( message.getBytes(Charset.forName("UTF-8"))); {code} 3) {{LogAggregationIndexedFileController#loadIndexedLogsMeta}}. - It looks like loadIndexedLogsMeta seeks twice; is it possible to read the last x MB (say, 64MB) of data directly? This assumes that in most cases the total size of the file meta is less than x MB, so we don't have to seek twice; seek operations can be expensive. A ByteArrayInputStream could be used to read from cached memory. 4) {{LogAggregationIndexedFileController#readAggregatedLogs}} - Why is the sort needed? Could that possibly make the sequence of file content stored in the log file different (e.g. in the serialized file we have container1_stdout, container3_stderr, container2_stdout), which could lead to unnecessary seek operations? - Output format related logic should be common and shared by all controller impls: {code} StringBuilder sb = new StringBuilder(); String endOfFile = "End of LogType:" + candidate.getFileName(); sb.append("\n" + endOfFile + "\n"); sb.append(StringUtils.repeat("*", endOfFile.length() + 50) + "\n\n"); {code} Otherwise we may have different output formats for different controller impls. Others: - It looks like {{getFilteredFiles}} could be getAllChecksumFiles, since suffix never accepts input other than CHECK_SUM_FILE_SUFFIX. - Is it possible that there are more than two checksum files? Could we check for that inside {{getFilteredFiles}} and throw an exception when we find it? - {{LogAggregationFileController#createPrintStream}} should use {{LogCLIHelper#createPrintStream}} instead. 5) {{LogAggregationIndexedFileController#readAggregatedLogsMeta}} Haven't reviewed the details of this method yet; however, I found it may have some overlap with readAggregatedLogs, and some data structures look very similar. TODO: will review this part later. 
> Add a new log aggregation file format controller > > > Key: YARN-7072 > URL: https://issues.apache.org/jira/browse/YARN-7072 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-7072-trunk.001.patch, YARN-7072.trunk.002.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
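Point 3 of the review (avoid seeking twice by reading the file tail once and parsing the meta in memory) can be sketched as follows. This is illustrative only, using plain java.io rather than Hadoop's FileSystem API, and the method names are hypothetical:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

public class TailRead {
    // Pure helper: the last n bytes of a buffer (or the whole buffer if shorter).
    static byte[] lastBytes(byte[] whole, int n) {
        int start = Math.max(0, whole.length - n);
        return Arrays.copyOfRange(whole, start, whole.length);
    }

    // One seek + one read of the file tail; the meta section can then be
    // parsed entirely in memory (e.g. via a ByteArrayInputStream over buf),
    // instead of seeking once for the index and again for the meta.
    static byte[] readTail(File f, int tailBytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            long start = Math.max(0, raf.length() - tailBytes);
            byte[] buf = new byte[(int) (raf.length() - start)];
            raf.seek(start); // the only seek
            raf.readFully(buf);
            return buf;
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("ifile", ".log");
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write("data...data...META".getBytes("UTF-8"));
        }
        System.out.println(new String(readTail(f, 4), "UTF-8")); // META
        f.delete();
    }
}
```

The trade-off is reading up to x MB that may not all be needed, in exchange for halving the number of (potentially expensive, e.g. HDFS) seek round trips.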
[jira] [Commented] (YARN-7049) FSAppAttempt preemption related fields have confusing names
[ https://issues.apache.org/jira/browse/YARN-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141025#comment-16141025 ] Hudson commented on YARN-7049: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12239 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12239/]) YARN-7049. FSAppAttempt preemption related fields have confusing names. (yufei: rev 9e2699ac2c99d8df85191dd7fbf9468b00f5b5aa) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java > FSAppAttempt preemption related fields have confusing names > --- > > Key: YARN-7049 > URL: https://issues.apache.org/jira/browse/YARN-7049 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.1 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-7049.001.patch, YARN-7049.002.patch > > > FSAppAttempt fields tracking containers/resources queued for preemption can > use better names -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7099) ResourceHandlerModule.parseConfiguredCGroupPath only works for privileged yarn users.
[ https://issues.apache.org/jira/browse/YARN-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-7099: - Description: canWrite is failing below: {code} if (candidate.isDirectory() && candidate.canWrite()) { pathSubsystemMappings.put(candidate.getAbsolutePath(), cgroupList); } else { LOG.warn("The following cgroup is not a directory or it is not" + " writable" + candidate.getAbsolutePath()); } {code} was: canWrite is failing {code} if (candidate.isDirectory() && candidate.canWrite()) { pathSubsystemMappings.put(candidate.getAbsolutePath(), cgroupList); } else { LOG.warn("The following cgroup is not a directory or it is not" + " writable" + candidate.getAbsolutePath()); } {code} > ResourceHandlerModule.parseConfiguredCGroupPath only works for privileged > yarn users. > - > > Key: YARN-7099 > URL: https://issues.apache.org/jira/browse/YARN-7099 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > > canWrite is failing below: > {code} > if (candidate.isDirectory() && candidate.canWrite()) { > pathSubsystemMappings.put(candidate.getAbsolutePath(), cgroupList); > } else { > LOG.warn("The following cgroup is not a directory or it is not" > + " writable" + candidate.getAbsolutePath()); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7099) ResourceHandlerModule.parseConfiguredCGroupPath only works for privileged yarn users.
Miklos Szegedi created YARN-7099: Summary: ResourceHandlerModule.parseConfiguredCGroupPath only works for privileged yarn users. Key: YARN-7099 URL: https://issues.apache.org/jira/browse/YARN-7099 Project: Hadoop YARN Issue Type: Bug Components: nodemanager Reporter: Miklos Szegedi Assignee: Miklos Szegedi Priority: Minor canWrite is failing {code} if (candidate.isDirectory() && candidate.canWrite()) { pathSubsystemMappings.put(candidate.getAbsolutePath(), cgroupList); } else { LOG.warn("The following cgroup is not a directory or it is not" + " writable" + candidate.getAbsolutePath()); } {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
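Incidentally, the quoted warning concatenates the path directly onto "writable", producing messages like "...not writable/sys/fs/cgroup/cpu". A small sketch of the same check with a separator added; this uses plain java.io.File outside the NM codebase, and the method name check is hypothetical:

```java
import java.io.File;

public class CgroupCheck {
    // Returns null when the candidate cgroup path is usable, otherwise a
    // warning message (with a separator before the path, unlike the original).
    static String check(File candidate) {
        if (candidate.isDirectory() && candidate.canWrite()) {
            return null;
        }
        return "The following cgroup is not a directory or it is not writable: "
            + candidate.getAbsolutePath();
    }

    public static void main(String[] args) {
        // The temp dir is a writable directory for the current user -> null.
        System.out.println(check(new File(System.getProperty("java.io.tmpdir"))));
    }
}
```

Note that canWrite() reflects the permissions of the JVM's own user, which is why this check passes only for privileged yarn users, as the issue title says.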
[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141015#comment-16141015 ] Yeliang Cang commented on YARN-7047: Thanks [~ajisakaa] for the review! > Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager > --- > > Key: YARN-7047 > URL: https://issues.apache.org/jira/browse/YARN-7047 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha4 >Reporter: Yeliang Cang >Assignee: Yeliang Cang > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-7047.001.patch, YARN-7047.002.patch, > YARN-7047.003.patch, YARN-7047.004.patch, YARN-7047-branch-2.001.patch, > YARN-7047-branch-2.002.patch, YARN-7047-branch-2.003.patch, > YARN-7047-branch-2.004.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7098) LocalizerRunner should immediately send heartbeat response LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING
Brook Zhou created YARN-7098: Summary: LocalizerRunner should immediately send heartbeat response LocalizerStatus.DIE when the Container transitions from LOCALIZING to KILLING Key: YARN-7098 URL: https://issues.apache.org/jira/browse/YARN-7098 Project: Hadoop YARN Issue Type: Bug Components: nodemanager Reporter: Brook Zhou Assignee: Brook Zhou Priority: Minor Currently, the following can happen: 1. ContainerLocalizer heartbeats to ResourceLocalizationService. 2. LocalizerTracker.processHeartbeat verifies that there is a LocalizerRunner for the localizerId (containerId). 3. Container receives kill event, goes from LOCALIZING -> KILLING. The LocalizerRunner for the localizerId is removed from LocalizerTracker. 4. Since check (2) passed, LocalizerRunner sends heartbeat response with LocalizerStatus.LIVE and the next file to download. What should happen here is that (4) sends a LocalizerStatus.DIE, since (3) happened before the heartbeat response in (4). This saves the container from potentially downloading an extra resource which will end up being deleted anyway. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
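The fix proposed in steps (1)-(4) amounts to re-checking the runner map immediately before building the heartbeat response, so a container killed between the initial check and the response gets DIE instead of LIVE. A hedged sketch; the map and enum below are simplified stand-ins, not the real NM LocalizerTracker/LocalizerStatus classes:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class HeartbeatDemo {
    enum LocalizerStatus { LIVE, DIE }

    // Stand-in for the LocalizerRunner registry keyed by localizerId (containerId).
    static final ConcurrentMap<String, Object> runners = new ConcurrentHashMap<>();

    static LocalizerStatus processHeartbeat(String localizerId) {
        // Re-check at response time: if the kill event (step 3) already removed
        // the runner, answer DIE rather than handing out the next resource.
        if (!runners.containsKey(localizerId)) {
            return LocalizerStatus.DIE;
        }
        return LocalizerStatus.LIVE; // still localizing: send next file to download
    }

    public static void main(String[] args) {
        runners.put("container_1", new Object());
        System.out.println(processHeartbeat("container_1")); // LIVE
        runners.remove("container_1"); // container LOCALIZING -> KILLING
        System.out.println(processHeartbeat("container_1")); // DIE
    }
}
```

As the description notes, answering DIE here only saves a wasted download; the resource would be deleted anyway, so this is an optimization rather than a correctness fix.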
[jira] [Commented] (YARN-7049) FSAppAttempt preemption related fields have confusing names
[ https://issues.apache.org/jira/browse/YARN-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141009#comment-16141009 ] Yufei Gu commented on YARN-7049: +1. Thanks for the patch, [~kasha]. Committed to trunk and branch2. > FSAppAttempt preemption related fields have confusing names > --- > > Key: YARN-7049 > URL: https://issues.apache.org/jira/browse/YARN-7049 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.1 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-7049.001.patch, YARN-7049.002.patch > > > FSAppAttempt fields tracking containers/resources queued for preemption can > use better names -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6612) Update fair scheduler policies to be aware of resource types
[ https://issues.apache.org/jira/browse/YARN-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140970#comment-16140970 ] Hadoop QA commented on YARN-6612: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} YARN-3926 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 43s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 55s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 15s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 31s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} YARN-3926 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 16s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 6 new + 212 unchanged - 11 fixed = 218 total (was 223) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 2 new + 347 unchanged - 0 fixed = 349 total (was 347) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 2s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}115m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands | | | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6612 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883616/YARN-6612.YARN-3926.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 22792133b039 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3926 / 38d04b8 | | Default Java
[jira] [Commented] (YARN-6550) Capture launch_container.sh logs
[ https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140940#comment-16140940 ] Hadoop QA commented on YARN-6550: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 51 new + 118 unchanged - 1 fixed = 169 total (was 119) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 41s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6550 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883437/YARN-6550.008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7ab9c6d8af8b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c2cb7ea | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17127/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17127/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17127/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17127/console | | Powered by | Apache Yet
[jira] [Commented] (YARN-6877) Create an abstract log reader for extendability
[ https://issues.apache.org/jira/browse/YARN-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140929#comment-16140929 ] Hadoop QA commented on YARN-6877: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 14s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 41s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 19 new + 137 unchanged - 10 fixed = 156 total (was 147) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 28s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 42s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 42s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 26s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 10s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 54s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | | org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationTFileController$TFileAggregatedLogsBlock.render(HtmlBlock$Block) might ignore java.lang.Exception At LogAggregationTFileController.java:At LogAggregationTFileController.java:[line 440] | | | Exception is caught when Exception is not thrown in org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationTFileController$TFileAggregatedLogsBlock.render(HtmlBlock$Block) At LogAggregationTFileController.java:is not thrown in org.apa
[jira] [Assigned] (YARN-7097) Federation: routing REST invocations transparently to multiple RMs (part 5 - getNode)
[ https://issues.apache.org/jira/browse/YARN-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola reassigned YARN-7097: -- Assignee: Giovanni Matteo Fumarola > Federation: routing REST invocations transparently to multiple RMs (part 5 - > getNode) > - > > Key: YARN-7097 > URL: https://issues.apache.org/jira/browse/YARN-7097 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7097) Federation: routing REST invocations transparently to multiple RMs (part 5 - getNode)
Giovanni Matteo Fumarola created YARN-7097: -- Summary: Federation: routing REST invocations transparently to multiple RMs (part 5 - getNode) Key: YARN-7097 URL: https://issues.apache.org/jira/browse/YARN-7097 Project: Hadoop YARN Issue Type: Sub-task Reporter: Giovanni Matteo Fumarola
[jira] [Created] (YARN-7096) Federation: routing REST invocations transparently to multiple RMs (part 4 - getMetrics)
Giovanni Matteo Fumarola created YARN-7096: -- Summary: Federation: routing REST invocations transparently to multiple RMs (part 4 - getMetrics) Key: YARN-7096 URL: https://issues.apache.org/jira/browse/YARN-7096 Project: Hadoop YARN Issue Type: Sub-task Reporter: Giovanni Matteo Fumarola
[jira] [Assigned] (YARN-7096) Federation: routing REST invocations transparently to multiple RMs (part 4 - getMetrics)
[ https://issues.apache.org/jira/browse/YARN-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola reassigned YARN-7096: -- Assignee: Giovanni Matteo Fumarola > Federation: routing REST invocations transparently to multiple RMs (part 4 - > getMetrics) > > > Key: YARN-7096 > URL: https://issues.apache.org/jira/browse/YARN-7096 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >
[jira] [Created] (YARN-7095) Federation: routing REST invocations transparently to multiple RMs (part 3 - getNodes)
Giovanni Matteo Fumarola created YARN-7095: -- Summary: Federation: routing REST invocations transparently to multiple RMs (part 3 - getNodes) Key: YARN-7095 URL: https://issues.apache.org/jira/browse/YARN-7095 Project: Hadoop YARN Issue Type: Sub-task Reporter: Giovanni Matteo Fumarola
[jira] [Assigned] (YARN-7095) Federation: routing REST invocations transparently to multiple RMs (part 3 - getNodes)
[ https://issues.apache.org/jira/browse/YARN-7095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola reassigned YARN-7095: -- Assignee: Giovanni Matteo Fumarola > Federation: routing REST invocations transparently to multiple RMs (part 3 - > getNodes) > -- > > Key: YARN-7095 > URL: https://issues.apache.org/jira/browse/YARN-7095 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >
[jira] [Commented] (YARN-7051) FifoIntraQueuePreemptionPlugin can get concurrent modification exception
[ https://issues.apache.org/jira/browse/YARN-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140892#comment-16140892 ] Hadoop QA commented on YARN-7051: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | 
{color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 48s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 73m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-7051 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883610/YARN-7051.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d6de4cf349de 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c2cb7ea | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17124/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17124/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17124/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > FifoIntraQueuePreemptionPlugin
[jira] [Commented] (YARN-6640) AM heartbeat stuck when responseId overflows MAX_INT
[ https://issues.apache.org/jira/browse/YARN-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140857#comment-16140857 ] Wangda Tan commented on YARN-6640: -- +1, thanks [~botong]. > AM heartbeat stuck when responseId overflows MAX_INT > - > > Key: YARN-6640 > URL: https://issues.apache.org/jira/browse/YARN-6640 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Blocker > Attachments: YARN-6640.v1.patch, YARN-6640.v2.patch > > > The current code in {{ApplicationMasterService}}: > if ((request.getResponseId() + 1) == lastResponse.getResponseId()) {/* old > heartbeat */ return lastResponse;} > else if (request.getResponseId() + 1 < lastResponse.getResponseId()) { throw > ... } > process the heartbeat... > When a heartbeat comes in, in the usual case we expect > request.getResponseId() == lastResponse.getResponseId(). The "if" handles the > duplicate heartbeat that is one step old, and the "else if" throws for > heartbeats more than two steps old; otherwise we accept the new > heartbeat and process it. > So the bug is: when lastResponse.getResponseId() == MAX_INT, the newest > heartbeat comes in with responseId == MAX_INT. However, responseId + 1 will be > MIN_INT, so we fall into the "else if" case and the RM throws. Then we > are stuck here…
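The overflow described above can be reproduced in isolation. A minimal sketch (hypothetical method names, not the actual {{ApplicationMasterService}} code) of the failing comparison, alongside a wraparound-tolerant alternative that compares the difference of the two ids instead:

```java
public class ResponseIdOverflow {
    // The comparison described in the issue: when lastResponseId is
    // Integer.MAX_VALUE and the newest heartbeat also carries MAX_VALUE,
    // requestId + 1 wraps to Integer.MIN_VALUE, so this check wrongly
    // classifies the newest heartbeat as ancient and the RM throws.
    static boolean looksTooOldNaive(int requestId, int lastResponseId) {
        return requestId + 1 < lastResponseId;
    }

    // A wraparound-tolerant check: the subtraction stays correct across
    // the MAX_INT -> MIN_INT boundary as long as the two ids are within
    // 2^31 steps of each other.
    static boolean looksTooOldWraparound(int requestId, int lastResponseId) {
        return lastResponseId - requestId > 1;
    }

    public static void main(String[] args) {
        int last = Integer.MAX_VALUE;
        int newest = Integer.MAX_VALUE; // expected: requestId == lastResponseId
        System.out.println(looksTooOldNaive(newest, last));      // the bug: rejected
        System.out.println(looksTooOldWraparound(newest, last)); // accepted
    }
}
```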
[jira] [Commented] (YARN-7094) Document that server-side graceful decom is currently not recommended
[ https://issues.apache.org/jira/browse/YARN-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140855#comment-16140855 ] Hadoop QA commented on YARN-7094: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 16s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-yarn-site in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-7094 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883605/YARN-7094.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 70f6d18dd2c0 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c2cb7ea | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17123/testRe
[jira] [Commented] (YARN-6964) Fair scheduler misuses Resources operations
[ https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140846#comment-16140846 ] Daniel Templeton commented on YARN-6964: Well, crap. Looks like I have some unit tests to fix. > Fair scheduler misuses Resources operations > --- > > Key: YARN-6964 > URL: https://issues.apache.org/jira/browse/YARN-6964 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.0.0-alpha4 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-6964.001.patch, YARN-6964.002.patch, > YARN-6964.003.patch, YARN-6964.004.patch, YARN-6964.005.patch, > YARN-6964.006.patch, YARN-6964.007.patch > > > There are several places where YARN uses the {{Resources}} class to do > comparisons of {{Resource}} instances incorrectly. This patch corrects those > mistakes.
[jira] [Commented] (YARN-7052) RM SchedulingMonitor gives no indication why the spawned thread crashed.
[ https://issues.apache.org/jira/browse/YARN-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140838#comment-16140838 ] Hadoop QA commented on YARN-7052: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | 
{color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 52s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands | | | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | | | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA | | | org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-7052 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883601/YARN-7052.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d5ea45e62c4a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c2cb7ea | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17122/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17122/artifact/patchprocess/patch-unit-hadoop-y
[jira] [Updated] (YARN-6612) Update fair scheduler policies to be aware of resource types
[ https://issues.apache.org/jira/browse/YARN-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-6612: --- Attachment: YARN-6612.YARN-3926.006.patch Turns out the unit test failures were from the bad index in the {{toString()}} method. > Update fair scheduler policies to be aware of resource types > > > Key: YARN-6612 > URL: https://issues.apache.org/jira/browse/YARN-6612 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: YARN-3926 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-6612.YARN-3926.001.patch, > YARN-6612.YARN-3926.002.patch, YARN-6612.YARN-3926.004.patch, > YARN-6612.YARN-3926.005.patch, YARN-6612.YARN-3926.006.patch > >
[jira] [Commented] (YARN-6964) Fair scheduler misuses Resources operations
[ https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140831#comment-16140831 ] Hadoop QA commented on YARN-6964: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 6s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 122 unchanged - 1 fixed = 124 total (was 123) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 40s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}106m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAsyncScheduling | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6964 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883557/YARN-6964.007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0fb02df7ffa5 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17121/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project
[jira] [Updated] (YARN-7072) Add a new log aggregation file format controller
[ https://issues.apache.org/jira/browse/YARN-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-7072: Attachment: YARN-7072.trunk.002.patch > Add a new log aggregation file format controller > > > Key: YARN-7072 > URL: https://issues.apache.org/jira/browse/YARN-7072 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-7072-trunk.001.patch, YARN-7072.trunk.002.patch > >
[jira] [Commented] (YARN-6612) Update fair scheduler policies to be aware of resource types
[ https://issues.apache.org/jira/browse/YARN-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140813#comment-16140813 ] Daniel Templeton commented on YARN-6612: Thanks, [~yufeigu]. bq. Would it be easier to understand if changing {{calculateShares()}} to {{calculateShareUsages()}}? Not really? Maybe I'm missing your point. bq. What if weight is 0? Interesting point! Not a new issue, though. I'll have a look to see why it's never bitten us so far and whether we should fix it here. bq. It doesn't need weight while calculating min share. Also not a new issue. That's the way it was calculated before. Yeah, it looks wrong to me, too. [~kasha], any insights? bq. Would it make more sense if we sort it by dominant min share which is minShare/cluster? Also not a new issue. I'm just trying to preserve the existing semantics. I'll address the other issues and the unit tests in a new patch shortly. > Update fair scheduler policies to be aware of resource types > > > Key: YARN-6612 > URL: https://issues.apache.org/jira/browse/YARN-6612 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: YARN-3926 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-6612.YARN-3926.001.patch, > YARN-6612.YARN-3926.002.patch, YARN-6612.YARN-3926.004.patch, > YARN-6612.YARN-3926.005.patch > >
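For readers unfamiliar with the "dominant min share, i.e. minShare/cluster" idea raised in the comment above, a minimal hypothetical sketch (illustrative names only, not the FairScheduler code): for each resource type, divide the queue's min share by the cluster total and take the largest ratio as the dominant share.

```java
public class DominantShare {
    // Hypothetical illustration of "minShare / cluster": compute the
    // per-resource ratio and return the maximum, i.e. the share of the
    // resource type the queue demands most of relative to the cluster.
    static double dominantShare(long[] minShare, long[] clusterTotal) {
        double max = 0.0;
        for (int i = 0; i < minShare.length; i++) {
            if (clusterTotal[i] > 0) { // guard against an empty cluster dimension
                max = Math.max(max, (double) minShare[i] / clusterTotal[i]);
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // A queue asking for 2048 MB / 4 vcores of a 10240 MB / 8 vcore
        // cluster: memory ratio 0.2, vcore ratio 0.5 -> vcores dominate.
        System.out.println(dominantShare(new long[]{2048, 4}, new long[]{10240, 8}));
    }
}
```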
[jira] [Updated] (YARN-7051) FifoIntraQueuePreemptionPlugin can get concurrent modification exception
[ https://issues.apache.org/jira/browse/YARN-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-7051: - Attachment: YARN-7051.002.patch bq. so this won't be changing while createTempAppForResCalculation is looping over the list. However, I did find a race condition that throws an NPE within {{createTempAppForResCalculation}}. {noformat} java.lang.NullPointerException at org.apache.hadoop.yarn.util.resource.Resources.clone(Resources.java:155) at org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoIntraQueuePreemptionPlugin.createTempAppForResCalculation(FifoIntraQueuePreemptionPlugin.java:403) at org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoIntraQueuePreemptionPlugin.computeAppsIdealAllocation(FifoIntraQueuePreemptionPlugin.java:140) at org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector.computeIntraQueuePreemptionDemand(IntraQueueCandidatesSelector.java:283) {noformat} The reason for this is that {{perUserAMUsed}} was populated with running apps prior to calling {{createTempAppForResCalculation}}, but then {{createTempAppForResCalculation}} loops through both running and pending apps. Attaching new patch that addresses this. 
> FifoIntraQueuePreemptionPlugin can get concurrent modification exception > > > Key: YARN-7051 > URL: https://issues.apache.org/jira/browse/YARN-7051 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler, scheduler preemption, yarn >Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha3 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Critical > Attachments: YARN-7051.001.patch, YARN-7051.002.patch > > > {{FifoIntraQueuePreemptionPlugin#calculateUsedAMResourcesPerQueue}} has the > following code: > {code} > Collection runningApps = leafQueue.getApplications(); > Resource amUsed = Resources.createResource(0, 0); > for (FiCaSchedulerApp app : runningApps) { > {code} > {{runningApps}} is unmodifiable but not concurrent. This caused the > preemption monitor thread to crash in the RM in one of our clusters. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
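The failure mode in this bug can be reproduced outside YARN: {{Collections.unmodifiableCollection}} only blocks writes *through the view*; it does not make iteration safe against writes to the backing list, which is exactly the "unmodifiable but not concurrent" distinction above. A minimal self-contained sketch (class and variable names are invented for illustration, not YARN code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.List;

public class UnmodifiableIterationDemo {
    /** Iterates a live unmodifiable view while the backing list changes. */
    static boolean iterateLiveView() {
        List<String> apps = new ArrayList<>(Arrays.asList("app1", "app2", "app3"));
        Collection<String> view = Collections.unmodifiableCollection(apps);
        try {
            for (String app : view) {
                apps.add("new-" + app); // simulates the scheduler adding an app mid-iteration
            }
            return true;
        } catch (ConcurrentModificationException e) {
            return false; // ArrayList's iterator is fail-fast, like runningApps above
        }
    }

    /** The safe pattern: iterate a snapshot copy instead of the live view. */
    static boolean iterateSnapshot() {
        List<String> apps = new ArrayList<>(Arrays.asList("app1", "app2", "app3"));
        Collection<String> snapshot = new ArrayList<>(apps); // copy, like a get-all-apps snapshot
        for (String app : snapshot) {
            apps.add("new-" + app); // mutation no longer affects the iteration
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("live view survives:  " + iterateLiveView());  // false
        System.out.println("snapshot survives:   " + iterateSnapshot()); // true
    }
}
```

This is why iterating the copy returned for running-plus-pending apps is safe while iterating the live collection from the ordering policy is not.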
[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor
[ https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140761#comment-16140761 ] Hadoop QA commented on YARN-6623: - (x) *-1 overall*
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 13 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 17s | trunk passed |
| +1 | compile | 9m 49s | trunk passed |
| +1 | checkstyle | 1m 5s | trunk passed |
| +1 | mvnsite | 4m 33s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn |
| -1 | findbugs | 0m 49s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 2m 13s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 18s | the patch passed |
| +1 | compile | 6m 6s | the patch passed |
| +1 | cc | 6m 6s | the patch passed |
| +1 | javac | 6m 6s | the patch passed |
| -0 | checkstyle | 0m 57s | hadoop-yarn-project/hadoop-yarn: The patch generated 23 new + 23 unchanged - 4 fixed = 46 total (was 27) |
| +1 | mvnsite | 3m 53s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn |
| +1 | findbugs | 0m 56s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) |
| +1 | javadoc | 2m 8s | the patch passed |
|| Other Tests ||
| -1 | unit | 80m 58s | hadoop-yarn in the patch failed. |
| -1 | unit | 14m 3s | hadoop-yarn-server-nodemanager in the patch failed. |
| +1 | asflicense | 0m 26s | The patch does not generate ASF License warnings. |
| | | 153m 26s | |
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| | TEST-cetest |
| | TEST-cetest |
|| Subsystem || Report/Notes ||
| D
[jira] [Commented] (YARN-7094) Document that server-side graceful decom is currently not recommended
[ https://issues.apache.org/jira/browse/YARN-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140759#comment-16140759 ] Junping Du commented on YARN-7094: -- Sure. I will look at it today. Thanks Robert. > Document that server-side graceful decom is currently not recommended > - > > Key: YARN-7094 > URL: https://issues.apache.org/jira/browse/YARN-7094 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Affects Versions: 3.0.0-beta1 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-7094.001.patch > > > Server-side NM graceful decom currently does not work correctly when an RM > failover occurs because we don't persist the info in the state store (see > YARN-5464). Given time constraints for Hadoop 3 beta 1, we've decided to > document this limitation and recommend client-side NM graceful decom in the > meantime if you need this functionality (see [this > comment|https://issues.apache.org/jira/browse/YARN-5464?focusedCommentId=16126119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16126119]). > Once YARN-5464 is done, we can undo this doc change. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6877) Create an abstract log reader for extendability
[ https://issues.apache.org/jira/browse/YARN-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-6877: Attachment: YARN-6877-trunk.002.patch > Create an abstract log reader for extendability > --- > > Key: YARN-6877 > URL: https://issues.apache.org/jira/browse/YARN-6877 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-6877-branch-2.001.patch, YARN-6877-trunk.001.patch, > YARN-6877-trunk.002.patch > > > Currently, TFile log reader is used to read aggregated log in YARN. We need > to add an abstract layer, and pick up the correct log reader based on the > configuration. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7051) FifoIntraQueuePreemptionPlugin can get concurrent modification exception
[ https://issues.apache.org/jira/browse/YARN-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140730#comment-16140730 ] Eric Payne edited comment on YARN-7051 at 8/24/17 9:28 PM: --- Hi [~sunilg]. Thanks for the review and the detailed reply. {quote} I think there is one more place we used getApplications w/o any {code} Collection apps = tq.leafQueue.getAllApplications(); {code} {quote} The call to {{leafQueue.getApplications()}} within {{calculateUsedAMResourcesPerQueue}} gets the actual collection of apps from the ordering policy, which can obviously change because the leaf queue is modifying it. However, the call to {{getAllApplications}} makes a copy of the list of running and pending apps, so this won't be changing while {{createTempAppForResCalculation}} is looping over the list. was (Author: eepayne): Hi [~sunilg]. Thanks for the review and the detailed reply. {quote} I think there is one more place we used getApplications w/o any {code} Collection apps = tq.leafQueue.getAllApplications(); {code} {quote} > FifoIntraQueuePreemptionPlugin can get concurrent modification exception > > > Key: YARN-7051 > URL: https://issues.apache.org/jira/browse/YARN-7051 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler, scheduler preemption, yarn >Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha3 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Critical > Attachments: YARN-7051.001.patch > > > {{FifoIntraQueuePreemptionPlugin#calculateUsedAMResourcesPerQueue}} has the > following code: > {code} > Collection runningApps = leafQueue.getApplications(); > Resource amUsed = Resources.createResource(0, 0); > for (FiCaSchedulerApp app : runningApps) { > {code} > {{runningApps}} is unmodifiable but not concurrent. This caused the > preemption monitor thread to crash in the RM in one of our clusters. 
[jira] [Commented] (YARN-7051) FifoIntraQueuePreemptionPlugin can get concurrent modification exception
[ https://issues.apache.org/jira/browse/YARN-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140730#comment-16140730 ] Eric Payne commented on YARN-7051: -- Hi [~sunilg]. Thanks for the review and the detailed reply. {quote} I think there is one more place we used getApplications w/o any {code} Collection apps = tq.leafQueue.getAllApplications(); {code} {quote} > FifoIntraQueuePreemptionPlugin can get concurrent modification exception > > > Key: YARN-7051 > URL: https://issues.apache.org/jira/browse/YARN-7051 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler, scheduler preemption, yarn >Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha3 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Critical > Attachments: YARN-7051.001.patch > > > {{FifoIntraQueuePreemptionPlugin#calculateUsedAMResourcesPerQueue}} has the > following code: > {code} > Collection runningApps = leafQueue.getApplications(); > Resource amUsed = Resources.createResource(0, 0); > for (FiCaSchedulerApp app : runningApps) { > {code} > {{runningApps}} is unmodifiable but not concurrent. This caused the > preemption monitor thread to crash in the RM in one of our clusters. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7057) FSAppAttempt#getResourceUsage doesn't need to consider resources queued for preemption
[ https://issues.apache.org/jira/browse/YARN-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140723#comment-16140723 ] Daniel Templeton commented on YARN-7057: Yep, capacity scheduler failure is unrelated. Looks like a generally sane change to me. Quick question, though: why the change to {{getResourceUsage()}}? That change would seem to have a pretty large impact. For example, also in {{FSAppAttempt}}:{code} Resource getPendingDemand() { return Resources.subtract(demand, getResourceUsage()); }{code} Are we sure that change isn't going to cause problems? > FSAppAttempt#getResourceUsage doesn't need to consider resources queued for > preemption > -- > > Key: YARN-7057 > URL: https://issues.apache.org/jira/browse/YARN-7057 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.9.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-7057.001.patch > > > FSAppAttempt#getResourceUsage excludes resources that are currently allocated > to the app but are about to be preempted. This inconsistency shows in the UI > and can affect scheduling of containers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
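The concern about {{getPendingDemand()}} can be made concrete with toy numbers (plain longs standing in for {{Resource}} objects; this illustrates the arithmetic only and is not YARN code): if reported usage excludes containers that are merely queued for preemption, the subtraction computes a larger pending demand than the app could actually still receive.

```java
public class PendingDemandSketch {
    // Everything in MB of memory, standing in for multi-dimensional Resources.
    static long pendingDemand(long demand, long usage) {
        return demand - usage; // mirrors Resources.subtract(demand, getResourceUsage())
    }

    public static void main(String[] args) {
        long demand = 8192;     // app wants 8 GB in total
        long allocated = 6144;  // 6 GB currently allocated
        long preempting = 2048; // 2 GB of that is queued for preemption

        // Usage reported with to-be-preempted resources excluded (old behavior):
        long usageExcluding = allocated - preempting; // 4096
        // Usage reported as-is (behavior after the proposed change):
        long usageIncluding = allocated;              // 6144

        System.out.println(pendingDemand(demand, usageExcluding)); // 4096
        System.out.println(pendingDemand(demand, usageIncluding)); // 2048
    }
}
```

Whether the smaller or larger answer is "correct" depends on whether callers like {{getPendingDemand()}} want demand relative to current allocation or to post-preemption allocation, which is exactly the question raised above.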
[jira] [Updated] (YARN-7094) Document that server-side graceful decom is currently not recommended
[ https://issues.apache.org/jira/browse/YARN-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-7094: Attachment: YARN-7094.001.patch The 001 patch adds a note to the CLI help text and the docs page. It also fixes some formatting issues and missing text on the docs page. [~djp], can you take a look? > Document that server-side graceful decom is currently not recommended > - > > Key: YARN-7094 > URL: https://issues.apache.org/jira/browse/YARN-7094 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Affects Versions: 3.0.0-beta1 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-7094.001.patch > > > Server-side NM graceful decom currently does not work correctly when an RM > failover occurs because we don't persist the info in the state store (see > YARN-5464). Given time constraints for Hadoop 3 beta 1, we've decided to > document this limitation and recommend client-side NM graceful decom in the > meantime if you need this functionality (see [this > comment|https://issues.apache.org/jira/browse/YARN-5464?focusedCommentId=16126119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16126119]). > Once YARN-5464 is done, we can undo this doc change. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7077) TestAMSimulator and TestNMSimulator fail
[ https://issues.apache.org/jira/browse/YARN-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140704#comment-16140704 ] Yufei Gu commented on YARN-7077: [~ajisakaa], my bad. I missed that part. What about setting monitor for CS in {{SLSRunner#startRM()}} instead of changing in each tests? > TestAMSimulator and TestNMSimulator fail > > > Key: YARN-7077 > URL: https://issues.apache.org/jira/browse/YARN-7077 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: YARN-7077.001.patch > > > TestAMSimulator and TestNMSimulator are failing: > {noformat} > org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Class > org.apache.hadoop.yarn.sls.scheduler.SLSFairScheduler not instance of > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler > at > org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.init(ProportionalCapacityPreemptionPolicy.java:159) > at > org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at > org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:744) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1140) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:301) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at > org.apache.hadoop.yarn.sls.appmaster.TestAMSimulator.setup(TestAMSimulator.java:77) > {noformat} -- This message was sent by Atlassian JIRA 
(v6.4.14#64029)
[jira] [Commented] (YARN-6876) Create an abstract log writer for extendability
[ https://issues.apache.org/jira/browse/YARN-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140690#comment-16140690 ] Hudson commented on YARN-6876: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12238 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12238/]) YARN-6876. Create an abstract log writer for extendability. Contributed (junping_du: rev c2cb7ea1ef6532020b69031dbd18b0f9b8369f0f) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestContainerLogsUtils.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogAggregationUtils.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/package-info.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogsBlock.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestAppLogAggregatorImpl.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/LogAggregationFileController.java * (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/LogAggregationFileControllerFactory.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestLogsCLI.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/LogAggregationFileControllerContext.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/LogAggregationTFileController.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileControllerFactory.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java > Create an abstract log writer for extendability > --- > > Key: YARN-6876 > URL: https://issues.apache.org/jira/browse/YARN-6876 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-6876-branch-2.001.patch, YARN-6876-trunk.001.patch, > YARN-6876-trunk.002.patch, YARN-6876-trunk.003.patch, > YARN-6876-trunk.004.patch, YARN-6876-trunk.005.patch, > YARN-6876-trunk.006.patch > > > Currently, TFile log writer is used to aggregate log in YARN. We need to add > an abstract layer, and pick up the correct log writer based on the > configuration. 
[jira] [Updated] (YARN-7052) RM SchedulingMonitor gives no indication why the spawned thread crashed.
[ https://issues.apache.org/jira/browse/YARN-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-7052: - Attachment: YARN-7052.001.patch > RM SchedulingMonitor gives no indication why the spawned thread crashed. > > > Key: YARN-7052 > URL: https://issues.apache.org/jira/browse/YARN-7052 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Critical > Attachments: YARN-7052.001.patch > > > In YARN-7051, we ran into a case where the preemption monitor thread hung > with no indication of why. > The preemption monitor is started by the {{SchedulingExecutorService}} from > {{SchedulingMonitor#serviceStart}}. Once an uncaught throwable happens, > nothing ever gets the result of the future, the thread running the preemption > monitor never dies, and it never gets rescheduled. > If {{HadoopExecutor}} were used, it would at least provide a > {{HadoopScheduledThreadPoolExecutor}} that logs the exception if one happens. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7052) RM SchedulingMonitor gives no indication why the spawned thread crashed.
[ https://issues.apache.org/jira/browse/YARN-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-7052: - Target Version/s: 2.8.2 Priority: Critical (was: Major) bq. I suggest that another solution would be to handle other throwables, log them, and either re-throw or cancel the thread. After an off-line discussion with [~jlowe], I think it would be better to catch throwables, log them, and skip the invocation. Preemption does not have persistent structures across invocations, plus it doesn't modify any existing leaf queue structures. Since preemption can be an important productivity feature for certain use cases, I am marking this critical for 2.8.2. > RM SchedulingMonitor gives no indication why the spawned thread crashed. > > > Key: YARN-7052 > URL: https://issues.apache.org/jira/browse/YARN-7052 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Critical > > In YARN-7051, we ran into a case where the preemption monitor thread hung > with no indication of why. > The preemption monitor is started by the {{SchedulingExecutorService}} from > {{SchedulingMonitor#serviceStart}}. Once an uncaught throwable happens, > nothing ever gets the result of the future, the thread running the preemption > monitor never dies, and it never gets rescheduled. > If {{HadoopExecutor}} were used, it would at least provide a > {{HadoopScheduledThreadPoolExecutor}} that logs the exception if one happens. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
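The executor behavior behind this bug is standard {{java.util.concurrent}} semantics: a task submitted with {{scheduleAtFixedRate}} that throws is silently cancelled, and the throwable is only reachable through the (never-read) {{Future}}. The self-contained sketch below (not YARN code; the "preemption pass" is simulated) demonstrates both the silent death and the catch-log-and-skip fix discussed above:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MonitorCrashDemo {
    static final AtomicInteger runs = new AtomicInteger();

    // Unguarded task: the first throw cancels the periodic schedule, and the
    // exception sits unseen in the Future -- the hang described in YARN-7051.
    static final Runnable unguarded = () -> {
        runs.incrementAndGet();
        throw new RuntimeException("preemption pass failed");
    };

    // Guarded task: catch Throwable, log, and skip this invocation so the
    // schedule survives -- the approach proposed in the comment above.
    static final Runnable guarded = () -> {
        try {
            runs.incrementAndGet();
            throw new RuntimeException("preemption pass failed");
        } catch (Throwable t) {
            // real code would LOG.error("monitor invocation failed", t)
        }
    };

    /** Runs 'task' every 10 ms for ~300 ms and reports how many times it ran. */
    static int countRuns(Runnable task) {
        runs.set(0);
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(task, 0, 10, TimeUnit.MILLISECONDS);
        try {
            Thread.sleep(300);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        ses.shutdownNow();
        return runs.get();
    }

    public static void main(String[] args) {
        System.out.println("unguarded runs: " + countRuns(unguarded)); // 1 -- dies silently
        System.out.println("guarded runs:   " + countRuns(guarded));   // many
    }
}
```

A {{HadoopScheduledThreadPoolExecutor}}-style wrapper that logs the failure achieves the visibility; the catch-and-skip additionally keeps the monitor rescheduled.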
[jira] [Updated] (YARN-5464) Server-Side NM Graceful Decommissioning with RM HA
[ https://issues.apache.org/jira/browse/YARN-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-5464: Description: Make sure to remove the note added by YARN-7094 about RM HA failover not working right. (was: Make sure to undo the changes made by YARN-7094 as part of this.) > Server-Side NM Graceful Decommissioning with RM HA > -- > > Key: YARN-5464 > URL: https://issues.apache.org/jira/browse/YARN-5464 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Reporter: Robert Kanter >Priority: Critical > Attachments: YARN-5464.wip.patch > > > Make sure to remove the note added by YARN-7094 about RM HA failover not > working right. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5464) Server-Side NM Graceful Decommissioning with RM HA
[ https://issues.apache.org/jira/browse/YARN-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-5464: Description: Make sure to undo the changes made by YARN-7094 as part of this. > Server-Side NM Graceful Decommissioning with RM HA > -- > > Key: YARN-5464 > URL: https://issues.apache.org/jira/browse/YARN-5464 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Reporter: Robert Kanter >Priority: Critical > Attachments: YARN-5464.wip.patch > > > Make sure to undo the changes made by YARN-7094 as part of this. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5464) Server-Side NM Graceful Decommissioning with RM HA
[ https://issues.apache.org/jira/browse/YARN-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140661#comment-16140661 ] Robert Kanter commented on YARN-5464: - Ok. I've made this no longer a blocker for 3b1 and created YARN-7094 as a blocker for 3b1 to document this limitation. > Server-Side NM Graceful Decommissioning with RM HA > -- > > Key: YARN-5464 > URL: https://issues.apache.org/jira/browse/YARN-5464 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Reporter: Robert Kanter >Priority: Critical > Attachments: YARN-5464.wip.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7094) Document that server-side graceful decom is currently not recommended
Robert Kanter created YARN-7094: --- Summary: Document that server-side graceful decom is currently not recommended Key: YARN-7094 URL: https://issues.apache.org/jira/browse/YARN-7094 Project: Hadoop YARN Issue Type: Sub-task Components: graceful Affects Versions: 3.0.0-beta1 Reporter: Robert Kanter Assignee: Robert Kanter Priority: Blocker Server-side NM graceful decom currently does not work correctly when an RM failover occurs because we don't persist the info in the state store (see YARN-5464). Given time constraints for Hadoop 3 beta 1, we've decided to document this limitation and recommend client-side NM graceful decom in the meantime if you need this functionality (see [this comment|https://issues.apache.org/jira/browse/YARN-5464?focusedCommentId=16126119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16126119]). Once YARN-5464 is done, we can undo this doc change. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5464) Server-Side NM Graceful Decommissioning with RM HA
[ https://issues.apache.org/jira/browse/YARN-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-5464: Priority: Critical (was: Blocker) > Server-Side NM Graceful Decommissioning with RM HA > -- > > Key: YARN-5464 > URL: https://issues.apache.org/jira/browse/YARN-5464 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Reporter: Robert Kanter >Priority: Critical > Attachments: YARN-5464.wip.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6964) Fair scheduler misuses Resources operations
[ https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140640#comment-16140640 ] Daniel Templeton commented on YARN-6964: I can't tell if the capacity scheduler issues are related. The failures look like the kind I would expect, but I don't see where I made any functional changes to the parts that impact capacity scheduler. I retriggered the build; let's see if they happen again. > Fair scheduler misuses Resources operations > --- > > Key: YARN-6964 > URL: https://issues.apache.org/jira/browse/YARN-6964 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.0.0-alpha4 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-6964.001.patch, YARN-6964.002.patch, > YARN-6964.003.patch, YARN-6964.004.patch, YARN-6964.005.patch, > YARN-6964.006.patch, YARN-6964.007.patch > > > There are several places where YARN uses the {{Resources}} class to do > comparisons of {{Resource}} instances incorrectly. This patch corrects those > mistakes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
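The class of mistake this issue's summary describes can be illustrated without the patch: comparing multi-dimensional resources with a single-dimension (or scalar) ordering answers a different question than asking whether one resource fits inside another componentwise. The sketch below uses a toy two-field resource, not YARN's actual {{Resource}}/{{Resources}} API:

```java
public class ResourceCompareSketch {
    // Componentwise check: does the ask fit inside what's available in
    // EVERY dimension? (memory in MB, plus vcores)
    static boolean fitsIn(long askMem, int askCores, long availMem, int availCores) {
        return askMem <= availMem && askCores <= availCores;
    }

    // Memory-only shortcut -- the kind of comparison that looks right until
    // the other dimension is the constrained one.
    static boolean memoryOnly(long askMem, long availMem) {
        return askMem <= availMem;
    }

    public static void main(String[] args) {
        // Ask: small memory, many cores. Available: ample memory, few cores.
        System.out.println(memoryOnly(1024, 4096));     // true  -- looks schedulable
        System.out.println(fitsIn(1024, 8, 4096, 4));   // false -- vcores don't fit
    }
}
```

Which comparison is correct depends on the call site, which is presumably why the patch audits each {{Resources}} operation individually.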
[jira] [Updated] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels
[ https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-2497: --- Attachment: YARN-2499.WIP01.patch I've rebased [~Tao Jie]'s original patch (more or less). There are still a few other things to address, though. Here's the current limitations: * No support for relaxed (non-exclusive) partitions. * All headroom calculations et al still use the full cluster resources * The current unit tests are wholly insufficient I'll be working on the latter two limitations over the next couple of weeks. > Changes for fair scheduler to support allocate resource respect labels > -- > > Key: YARN-2497 > URL: https://issues.apache.org/jira/browse/YARN-2497 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Reporter: Wangda Tan >Assignee: Daniel Templeton > Attachments: YARN-2499.WIP01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels
[ https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton reassigned YARN-2497: -- Assignee: Daniel Templeton > Changes for fair scheduler to support allocate resource respect labels > -- > > Key: YARN-2497 > URL: https://issues.apache.org/jira/browse/YARN-2497 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Reporter: Wangda Tan >Assignee: Daniel Templeton >
[jira] [Commented] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140615#comment-16140615 ] Hadoop QA commented on YARN-6999: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 45s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 17s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6999 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883584/yarn-6999.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux aa38e52e3f03 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17120/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17120/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17120/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -
[jira] [Updated] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Linlin Zhou updated YARN-6999: -- Attachment: yarn-6999.003.patch > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -- > > Key: YARN-6999 > URL: https://issues.apache.org/jira/browse/YARN-6999 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.0.0-beta1 > Environment: All operating systems. >Reporter: Linlin Zhou >Assignee: Linlin Zhou >Priority: Minor > Labels: beginner > Fix For: 3.0.0-beta1 > > Attachments: yarn-6999.002.patch, yarn-6999.003.patch, yarn-6999.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > According to Setting up a Single Node Cluster > [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html], > we would still fail to run the MapReduce job example. Due to a security > fix, YARN uses the user's environment variables to initialize, and the user's > environment usually doesn't include MapReduce-related settings. So we need to > add the related config in etc/hadoop/mapred-site.xml manually. Currently the > log only reports an Error: > Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster, without any suggestion on how to > solve it. I want to add a useful suggestion to the log.
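For context, the manual fix the issue description refers to is pointing the MapReduce AM at its framework home in etc/hadoop/mapred-site.xml. A sketch of what that configuration typically looks like, following the Hadoop 3 single-cluster docs linked above (HADOOP_HOME must resolve to your install directory; exact values depend on your layout):

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN. -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- Tell the AM and tasks where the MapReduce framework lives, so the
       container JVM can find org.apache.hadoop.mapreduce.v2.app.MRAppMaster. -->
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
</configuration>
```

Since the NM inherits only the variables listed in these properties after the security fix, omitting them leaves the AM's classpath without the MapReduce jars, which is exactly the failure mode this JIRA wants the log to explain.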
[jira] [Commented] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140543#comment-16140543 ] Hadoop QA commented on YARN-6999: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 32s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6999 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883576/yarn-6999.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9bacd46d8dc1 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17118/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/17118/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17118/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17118/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT ht
[jira] [Commented] (YARN-7087) NM failed to perform log aggregation due to absent container
[ https://issues.apache.org/jira/browse/YARN-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140536#comment-16140536 ] Hadoop QA commented on YARN-7087: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 0 new + 292 unchanged - 3 fixed = 292 total (was 295) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 45s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-7087 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883573/YARN-7087.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 75c49998b053 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17117/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17117/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17117/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > NM failed to perform log aggregation due to absent container > ---
[jira] [Updated] (YARN-6623) Add support to turn off launching privileged containers in the container-executor
[ https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev updated YARN-6623: Attachment: YARN-6623.006.patch Thank you for the review [~ebadger]! bq. Should we fix the no newline at the end of file warnings? The apply tool complains about them. Fixed. {noformat} DockerCommand:getCommandArguments() public Map> getCommandArguments() { return Collections.unmodifiableMap(commandArguments); } This will return the command as well as the arguments. Unless we are considering the /usr/docker to be the actual command and inspect to be one of the arguments. If that’s what we’re expecting to happen here, then the name is a little bit misleading. This might be more of a problem with how commandArguments is named than how this function is named. {noformat} Renamed function to getCommandWithArguments. Is that ok? {noformat} container-executor.c:construct_docker_command() +char *construct_docker_command(const char *command_file) { + int ret = 0; + char *buffer = (char *) malloc(EXECUTOR_PATH_MAX * sizeof(char)); This should use _SC_ARG_MAX as we did in YARN-6988 size_t command_size = MIN(sysconf(_SC_ARG_MAX), 128*1024); Also, why not use calloc() instead of malloc() and then memset()? container-executor.c:run_docker() {noformat} Fixed; changed to use the alloc_and_clear function. {noformat} + docker_command = construct_docker_command(command_file); + docker_binary = get_docker_binary(&CFG); I don’t see these getting freed. Am I missing this invocation somewhere? container-executor.c:run_docker() {noformat} We call the execvp function, so the running program will be replaced by the docker invocation. {noformat} + memset(docker_command_with_binary, 0, EXECUTOR_PATH_MAX); Is this necessary? We allocate the memory with calloc() which already clears all of the memory upon allocation. {noformat} Yep. Fixed. 
{noformat} { container-executor.h // get the executable's filename char* get_executable(char *argv0); Do we need this declaration (in container-executor.h) since we have moved that declaration into get_executable.h? We should also add get_executable.h in the appropriate places. Looks like main.c and test-container-executor.c both call get_executable. {noformat} You're correct; fixed. {noformat} main.c:assert_valid_setup() -fprintf(ERRORFILE,"realpath of executable: %s\n", - errno != 0 ? strerror(errno) : "unknown"); -flush_and_close_log_files(); -exit(-1); +fprintf(ERRORFILE, "realpath of executable: %s\n", +errno != 0 ? strerror(errno) : "unknown"); +exit(INVALID_CONFIG_FILE); Why don’t we want to flush the log files anymore? {noformat} Fixed. {noformat} util.c:alloc_and_clear_memory() +void* alloc_and_clear_memory(size_t num, size_t size) { + void *ret = calloc(num, size); + if (ret == NULL) { +exit(OUT_OF_MEMORY); + } + return ret; +} Should we inline this? Also, if we’re going to use this function, then all calloc calls should be replaced with this (like the ones I mentioned above) util.h {noformat} Fixed (made the function inline and replaced calloc invocations with alloc_and_clear). {noformat} // DOCKER_CONTAINER_NAME_INVALID = 41, Should add (NOT USED) to follow convention docker-util.c:read_and_verify_command_file() {noformat} Fixed. {noformat} if (command == NULL || (strcmp(command, docker_command) != 0)) { ret = INCORRECT_COMMAND; } Is command guaranteed to be null-terminated? It comes from the configuration file, which is a Java creation and I don’t think Java null-terminates. This comment is probably relevant for quite a few other places that are doing string operations. We need to be very safe about this or else we risk possibly overrunning into random regions of the heap. A safe alternative would be to use the “n” version of all the string operations. This patch uses a mixed bag of the regular versions and their accompanying “n” versions. 
I don’t quite understand the reasoning behind the usage of each. If we guarantee that the string is null terminated (and always null terminated) then we don’t need the “n” versions. But even if we guarantee that the input string is null terminated, it may accidentally have the null character chopped off by an off by 1 error in a strdup or something like that. So my preference here would be to use the “n” versions of all of the string functions. Thoughts? {noformat} command is guaranteed to be null terminated by the configuration functions. If we use the 'n' versions of the functions, we end up doing a 'begins with' match instead of an exact match, which could cause problems (e.g. "inspect" would match "inspectcommand") {noformat} docker-util.c:read_and_verify_command_file() + if (current_len + string_len < bufflen - 1) { +strncpy(buff + current_len, string, bufflen - current_len); +buff[current_len + s
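The exact-match point in the reply above can be sketched as follows. This is an illustrative helper, not code from the patch, and it assumes the inputs are NUL-terminated as the configuration functions guarantee:

```c
#include <string.h>

/* A length-bounded strncmp against the expected command only checks a
 * prefix, so a command file containing "inspectcommand" would wrongly
 * pass a check for "inspect".  strcmp on a NUL-terminated string gives
 * the exact match that command verification needs. */
static int is_prefix_match(const char *command, const char *expected) {
    /* The "begins with" behavior the comment warns about. */
    return command != NULL && strncmp(command, expected, strlen(expected)) == 0;
}

static int is_exact_command(const char *command, const char *expected) {
    return command != NULL && strcmp(command, expected) == 0;
}
```

This is why the reply mixes the two families: the "n" variants guard copies into fixed-size buffers, while exact command verification still wants strcmp.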
[jira] [Commented] (YARN-7079) to support nodemanager ports management
[ https://issues.apache.org/jira/browse/YARN-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140521#comment-16140521 ] Wangda Tan commented on YARN-7079: -- [~tianjuan], Thanks for working on this feature. I took a very quick scan of the uploaded patch. Some questions/comments: 1) As Devaraj said, it's always helpful to have a design doc before working on such a big patch. 2) I found most of the code changes are adding a new resource type (range) to the resource object, which should be based on YARN-3926 for better maintenance. I haven't clearly considered how to support a value range in the new ResourceInformation object yet; more discussion is needed here. 3) Beyond managing allocated/available ports in the RM, I think we need to enforce this in the NM as well, correct? Otherwise an app can request ports \[1000-1008\], and once it is launched, it could use more ports than requested. 4) Is it possible that this feature can be replaced by containers + special network settings with which we can allocate different IPs to different containers? With that we don't need to worry about the port management problem at all. > to support nodemanager ports management > - > > Key: YARN-7079 > URL: https://issues.apache.org/jira/browse/YARN-7079 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: 田娟娟 > Attachments: YARN_7079.001.patch > > > Just like vcores and memory, ports are also important resource > information for job allocation. So we add ports management logic to YARN. > It can meet user jobs' port requests, and never allocates two jobs (with the > same port requirement) to one machine.
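The enforcement question in point 3) reduces to tracking allocated port ranges per node and rejecting requests that collide with them. The core test is a simple interval-overlap check on inclusive ranges; this is an illustrative sketch, not code from the patch:

```c
#include <stdbool.h>

/* Two inclusive port ranges conflict iff they overlap.  A scheduler (or an
 * NM-side enforcer) would run this against every range already allocated
 * on the node before granting a request like [1000-1008]. */
static bool port_ranges_overlap(int req_start, int req_end,
                                int alloc_start, int alloc_end) {
    return req_start <= alloc_end && alloc_start <= req_end;
}
```

RM-side checks like this only keep the bookkeeping consistent; as the comment notes, without NM enforcement (or per-container network isolation) a launched container can still bind ports it never requested.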
[jira] [Commented] (YARN-6964) Fair scheduler misuses Resources operations
[ https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140499#comment-16140499 ] Hadoop QA commented on YARN-6964: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 7s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 123 unchanged - 1 fixed = 125 total (was 124) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}103m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAsyncScheduling | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6964 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883557/YARN-6964.007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 855454013c05 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17114/artifact/patchprocess/diff-checkstyle-hadoop-yarn-proje
[jira] [Updated] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Linlin Zhou updated YARN-6999: -- Attachment: yarn-6999.002.patch > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -- > > Key: YARN-6999 > URL: https://issues.apache.org/jira/browse/YARN-6999 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.0.0-beta1 > Environment: All operating systems. >Reporter: Linlin Zhou >Assignee: Linlin Zhou >Priority: Minor > Labels: beginner > Fix For: 3.0.0-beta1 > > Attachments: yarn-6999.002.patch, yarn-6999.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > According to Setting up a Single Node Cluster > [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html], > we would still fail to run the MapReduce job example. Due to a security > fix, YARN uses the user's environment variables to initialize, and the user's > environment usually doesn't include MapReduce-related settings. So we need to > add the related config in etc/hadoop/mapred-site.xml manually. Currently the > log only reports an Error: > Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster, without any suggestion on how to > solve it. I want to add a useful suggestion to the log.
[jira] [Commented] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140488#comment-16140488 ] Hadoop QA commented on YARN-6999: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 2s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 6 new + 42 unchanged - 0 fixed = 48 total (was 42) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 39s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 43m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6999 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883567/yarn-6999.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9d498ee3a4a5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17116/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17116/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | whitespace | https://builds.apache.org/job/Pre
[jira] [Updated] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Linlin Zhou updated YARN-6999: -- Attachment: (was: yarn-6999.002.patch) > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -- > > Key: YARN-6999 > URL: https://issues.apache.org/jira/browse/YARN-6999 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.0.0-beta1 > Environment: All operating systems. >Reporter: Linlin Zhou >Assignee: Linlin Zhou >Priority: Minor > Labels: beginner > Fix For: 3.0.0-beta1 > > Attachments: yarn-6999.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > According Setting up a Single Node Cluster > [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html], > we would still failed to run the MapReduce job example. Due to a security > fix, yarn use user's environment variables to init, and user's environment > variable usually doesn't include MapReduce related settings. So we need to > add the related config in etc/hadoop/mapred-site.xml manually. Currently the > log only tells there is an Error: > Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster, without suggestion on how to > solve it. I want to add the useful suggestion in log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140475#comment-16140475 ] Akira Ajisaka commented on YARN-7047: - +1, checking this in. > Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager > --- > > Key: YARN-7047 > URL: https://issues.apache.org/jira/browse/YARN-7047 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha4 >Reporter: Yeliang Cang >Assignee: Yeliang Cang > Fix For: 3.0.0-beta1 > > Attachments: YARN-7047.001.patch, YARN-7047.002.patch, > YARN-7047.003.patch, YARN-7047.004.patch, YARN-7047-branch-2.001.patch, > YARN-7047-branch-2.002.patch, YARN-7047-branch-2.003.patch, > YARN-7047-branch-2.004.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7087) NM failed to perform log aggregation due to absent container
[ https://issues.apache.org/jira/browse/YARN-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-7087: - Attachment: YARN-7087.002.patch Updating the patch to fix the checkstyle issues. The TestContainerManager timeout does not appear to be related. It passes for me locally with the patch applied. > NM failed to perform log aggregation due to absent container > > > Key: YARN-7087 > URL: https://issues.apache.org/jira/browse/YARN-7087 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation >Affects Versions: 2.8.1 >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Critical > Attachments: YARN-7087.001.patch, YARN-7087.002.patch > > > Saw a case where the NM failed to aggregate the logs for a container because > it claimed it was absent: > {noformat} > 2017-08-23 18:35:38,283 [AsyncDispatcher event handler] WARN > logaggregation.LogAggregationService: Log aggregation cannot be started for > container_e07_1503326514161_502342_01_01, as its an absent container > {noformat} > Containers should not be allowed to disappear if they're not done being fully > processed by the NM. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)
[ https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140445#comment-16140445 ] Giovanni Matteo Fumarola edited comment on YARN-7010 at 8/24/17 6:16 PM: - The failed test is not related to the patch (YARN-7044). was (Author: giovanni.fumarola): The failed test is not related to the patch. > Federation: routing REST invocations transparently to multiple RMs (part 2 - > getApps) > - > > Key: YARN-7010 > URL: https://issues.apache.org/jira/browse/YARN-7010 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, > YARN-7010.v2.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)
[ https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140445#comment-16140445 ] Giovanni Matteo Fumarola commented on YARN-7010: The failed test is not related to the patch. > Federation: routing REST invocations transparently to multiple RMs (part 2 - > getApps) > - > > Key: YARN-7010 > URL: https://issues.apache.org/jira/browse/YARN-7010 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, > YARN-7010.v2.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140395#comment-16140395 ] Linlin Zhou commented on YARN-6999: --- Thanks for the advice [~gtCarrera]. I have renamed the patch and fixed the whitespace problem. Is the checkstyle concern caused by using tabs instead of spaces? If so, it is fixed in the new patch. > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -- > > Key: YARN-6999 > URL: https://issues.apache.org/jira/browse/YARN-6999 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.0.0-beta1 > Environment: All operating systems. >Reporter: Linlin Zhou >Assignee: Linlin Zhou >Priority: Minor > Labels: beginner > Fix For: 3.0.0-beta1 > > Attachments: yarn-6999.002.patch, yarn-6999.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > According Setting up a Single Node Cluster > [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html], > we would still failed to run the MapReduce job example. Due to a security > fix, yarn use user's environment variables to init, and user's environment > variable usually doesn't include MapReduce related settings. So we need to > add the related config in etc/hadoop/mapred-site.xml manually. Currently the > log only tells there is an Error: > Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster, without suggestion on how to > solve it. I want to add the useful suggestion in log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Linlin Zhou updated YARN-6999: -- Attachment: yarn-6999.002.patch > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -- > > Key: YARN-6999 > URL: https://issues.apache.org/jira/browse/YARN-6999 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.0.0-beta1 > Environment: All operating systems. >Reporter: Linlin Zhou >Assignee: Linlin Zhou >Priority: Minor > Labels: beginner > Fix For: 3.0.0-beta1 > > Attachments: yarn-6999.002.patch, yarn-6999.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > According Setting up a Single Node Cluster > [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html], > we would still failed to run the MapReduce job example. Due to a security > fix, yarn use user's environment variables to init, and user's environment > variable usually doesn't include MapReduce related settings. So we need to > add the related config in etc/hadoop/mapred-site.xml manually. Currently the > log only tells there is an Error: > Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster, without suggestion on how to > solve it. I want to add the useful suggestion in log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7090) testRMRestartAfterNodeLabelDisabled get failed when CapacityScheduler is configured
[ https://issues.apache.org/jira/browse/YARN-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140377#comment-16140377 ] Wangda Tan commented on YARN-7090: -- Thanks [~yeshavora] for reporting this issue and thanks [~djp]'s review and commit! > testRMRestartAfterNodeLabelDisabled get failed when CapacityScheduler is > configured > --- > > Key: YARN-7090 > URL: https://issues.apache.org/jira/browse/YARN-7090 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Reporter: Yesha Vora >Assignee: Wangda Tan > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-7090.001.patch, YARN-7090.002.patch > > > testRMRestartAfterNodeLabelDisabled[1] UT fails with below error. > {code} > Error Message > expected:<[x]> but was:<[]> > Stacktrace > org.junit.ComparisonFailure: expected:<[x]> but was:<[]> > at org.junit.Assert.assertEquals(Assert.java:115) > at org.junit.Assert.assertEquals(Assert.java:144) > at > org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartAfterNodeLabelDisabled(TestRMRestart.java:2408) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7093) Improve log message in ResourceUtils
[ https://issues.apache.org/jira/browse/YARN-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140374#comment-16140374 ] Wangda Tan commented on YARN-7093: -- +1, committing, thanks [~sunilg] and review from [~templedf] > Improve log message in ResourceUtils > > > Key: YARN-7093 > URL: https://issues.apache.org/jira/browse/YARN-7093 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Sunil G >Assignee: Sunil G >Priority: Trivial > Attachments: YARN-7093.YARN-3926.001.patch > > > Improve log message ResourceUtils class. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Linlin Zhou updated YARN-6999: -- Attachment: (was: yarn-6999.patch.002) > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -- > > Key: YARN-6999 > URL: https://issues.apache.org/jira/browse/YARN-6999 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.0.0-beta1 > Environment: All operating systems. >Reporter: Linlin Zhou >Assignee: Linlin Zhou >Priority: Minor > Labels: beginner > Fix For: 3.0.0-beta1 > > Attachments: yarn-6999.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > According Setting up a Single Node Cluster > [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html], > we would still failed to run the MapReduce job example. Due to a security > fix, yarn use user's environment variables to init, and user's environment > variable usually doesn't include MapReduce related settings. So we need to > add the related config in etc/hadoop/mapred-site.xml manually. Currently the > log only tells there is an Error: > Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster, without suggestion on how to > solve it. I want to add the useful suggestion in log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6999) Add log about how to solve Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
[ https://issues.apache.org/jira/browse/YARN-6999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140353#comment-16140353 ] Li Lu commented on YARN-6999: - This looks much better, thanks for the work [~littlestone00]! Could you please rename the patch to .patch so that we can rerun Jenkins again? Also, the concerns raised by checkstyle appear to be valid, could you please fix those as well? The warning from findbugs appears to be irrelevant, so let's focus on checkstyle and whitespace first. > Add log about how to solve Error: Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster > -- > > Key: YARN-6999 > URL: https://issues.apache.org/jira/browse/YARN-6999 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.0.0-beta1 > Environment: All operating systems. >Reporter: Linlin Zhou >Assignee: Linlin Zhou >Priority: Minor > Labels: beginner > Fix For: 3.0.0-beta1 > > Attachments: yarn-6999.patch, yarn-6999.patch.002 > > Original Estimate: 1h > Remaining Estimate: 1h > > According Setting up a Single Node Cluster > [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html], > we would still failed to run the MapReduce job example. Due to a security > fix, yarn use user's environment variables to init, and user's environment > variable usually doesn't include MapReduce related settings. So we need to > add the related config in etc/hadoop/mapred-site.xml manually. Currently the > log only tells there is an Error: > Could not find or load main class > org.apache.hadoop.mapreduce.v2.app.MRAppMaster, without suggestion on how to > solve it. I want to add the useful suggestion in log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7093) Improve log message in ResourceUtils
[ https://issues.apache.org/jira/browse/YARN-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140341#comment-16140341 ] Hadoop QA commented on YARN-7093: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} YARN-3926 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 37s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} YARN-3926 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-7093 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883555/YARN-7093.YARN-3926.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 80a53bcaa9ff 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3926 / 2144629 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17115/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17115/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Improve log message in ResourceUtils > > > Key: YARN-7093 > URL: https://issues.apache.org/jira/browse/YARN-7093 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Sunil G >Assignee: Sunil G >Priority: Trivial > Attachments: YARN-7093.YARN-3926.001.patch > > > Improve log message ResourceUti
[jira] [Updated] (YARN-6964) Fair scheduler misuses Resources operations
[ https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-6964: --- Attachment: YARN-6964.007.patch OK, added unit tests. > Fair scheduler misuses Resources operations > --- > > Key: YARN-6964 > URL: https://issues.apache.org/jira/browse/YARN-6964 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.0.0-alpha4 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-6964.001.patch, YARN-6964.002.patch, > YARN-6964.003.patch, YARN-6964.004.patch, YARN-6964.005.patch, > YARN-6964.006.patch, YARN-6964.007.patch > > > There are several places where YARN uses the {{Resources}} class to do > comparisons of {{Resource}} instances incorrectly. This patch corrects those > mistakes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
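The YARN-6964 description above is terse about what "misuses Resources operations" means. As a simplified illustration (the class below is a hypothetical stand-in, not the actual org.apache.hadoop.yarn Resources API), a component-wise "fits in" check is not interchangeable with a single-dimension comparison once a resource has more than one component:

```java
// Illustrative sketch only: a two-dimensional resource where comparing
// just one dimension gives a different answer than comparing all of them.
final class Res {
    final long memoryMb;
    final int vcores;

    Res(long memoryMb, int vcores) {
        this.memoryMb = memoryMb;
        this.vcores = vcores;
    }

    // Component-wise "fits in": true only if EVERY dimension fits.
    static boolean fitsIn(Res smaller, Res bigger) {
        return smaller.memoryMb <= bigger.memoryMb
            && smaller.vcores <= bigger.vcores;
    }

    // A memory-only comparison -- the kind of shortcut that looks right
    // for one dimension but silently ignores vcores.
    static boolean lessThanMemoryOnly(Res a, Res b) {
        return a.memoryMb < b.memoryMb;
    }
}
```

For a = (1024 MB, 8 vcores) and b = (2048 MB, 4 vcores), the memory-only check says a is "smaller", yet a does not fit in b because it needs more vcores; picking the wrong comparison for a scheduling decision is the general shape of bug this JIRA targets.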
[jira] [Commented] (YARN-7087) NM failed to perform log aggregation due to absent container
[ https://issues.apache.org/jira/browse/YARN-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140302#comment-16140302 ] Hadoop QA commented on YARN-7087: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 46s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 4 new + 291 unchanged - 3 fixed = 295 total (was 294) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 31s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-7087 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883550/YARN-7087.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1fd835e784dc 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17113/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17113/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17113/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/1711
[jira] [Commented] (YARN-7093) Improve log message in ResourceUtils
[ https://issues.apache.org/jira/browse/YARN-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140299#comment-16140299 ] Daniel Templeton commented on YARN-7093: +1. I also made that change in my patch for YARN-6612. > Improve log message in ResourceUtils > > > Key: YARN-7093 > URL: https://issues.apache.org/jira/browse/YARN-7093 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Sunil G >Assignee: Sunil G >Priority: Trivial > Attachments: YARN-7093.YARN-3926.001.patch > > > Improve log message ResourceUtils class. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7086) Release all containers aynchronously
[ https://issues.apache.org/jira/browse/YARN-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140295#comment-16140295 ] Arun Suresh edited comment on YARN-7086 at 8/24/17 4:51 PM: Thanks for chiming in folks. And yes, I agree with [~jlowe] too. To move forward, and if everyone is fine with the approach, I will post a patch that does the following: * Introduce a *RELEASE_CONTAINERS* scheduler event : will refactor the existing RELEASE_CONTAINER event to take multiple containers. * Will expose an async release method in the AbstractYarnScheduler that takes a list of containers, will split the list into some (configured ?) max containers released at a time, and will send an event for each sub-list. * Route all calls to release containers from both the scheduler to the new API. Currently, the problematic ones are during app attempt complete, node removed and the scheduler's handling of AM's explicit release containers. was (Author: asuresh): Thanks for chiming in folks. And yes, I agree with [~jlowe] too. To move forward, and if everyone if fine with the approach, I will post a patch that does the following: * Introduce a *RELEASE_CONTAINERS* scheduler event : will refactor the existing RELEASE_CONTAINER event to take multiple containers. * Will expose and aysnc release method in the AbstractYarnScheduler to takes a list of containers, will split the list into some (configured ?) max containers released at a time, and will send an event for each the sub-list. * Route all calls to release containers from both the scheduler to the new API. Currently, the problematic ones are during app attempt complete, node removed and the schedulers's handling of AM's explicit release containers. 
> Release all containers asynchronously > > > Key: YARN-7086 > URL: https://issues.apache.org/jira/browse/YARN-7086 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Arun Suresh >Assignee: Arun Suresh > > We have noticed in production two situations that can cause deadlocks and > cause scheduling of new containers to come to a halt, especially with regard > to applications that have a lot of live containers: > # When these applications release these containers in bulk. > # When these applications terminate abruptly due to some failure, the > scheduler releases all its live containers in a loop. > To handle the issues mentioned above, we have a patch in production to make > sure ALL container releases happen asynchronously - and it has served us well. > Opening this JIRA to gather feedback on whether this is a good idea generally (cc > [~leftnoteasy], [~jlowe], [~curino], [~kasha], [~subru], [~roniburd]) > BTW, in YARN-6251, we already have an asyncReleaseContainer() in the > AbstractYarnScheduler and a corresponding scheduler event, which is currently > used specifically for the container-update code paths (where the scheduler > releases temp containers which it creates for the update)
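The batched-release approach proposed in the comment above can be sketched as follows. This is an illustrative sketch only: `BatchedRelease`, `MAX_PER_EVENT`, and `splitIntoBatches` are hypothetical names, not the actual AbstractYarnScheduler API from the patch.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the proposal above: split a bulk container
// release into bounded sub-lists, firing one RELEASE_CONTAINERS-style
// event per sub-list instead of one event per container.
public class BatchedRelease {
    // Assumed configurable cap on containers released per event.
    static final int MAX_PER_EVENT = 100;

    // Splits the full list of container IDs into sub-lists of at most
    // MAX_PER_EVENT entries; the caller would dispatch one scheduler
    // event per returned sub-list.
    static List<List<String>> splitIntoBatches(List<String> containerIds) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < containerIds.size(); i += MAX_PER_EVENT) {
            int end = Math.min(i + MAX_PER_EVENT, containerIds.size());
            // Copy the subList view so each batch is independent.
            batches.add(new ArrayList<>(containerIds.subList(i, end)));
        }
        return batches;
    }
}
```

Bounding the batch size is what keeps a single app releasing thousands of containers from monopolizing the scheduler's event-handling thread.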
[jira] [Updated] (YARN-7093) Improve log message in ResourceUtils
[ https://issues.apache.org/jira/browse/YARN-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7093: -- Attachment: YARN-7093.YARN-3926.001.patch Trivial log correction. Removed an exception from info. [~leftnoteasy], please take a look. > Improve log message in ResourceUtils > > > Key: YARN-7093 > URL: https://issues.apache.org/jira/browse/YARN-7093 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Sunil G >Assignee: Sunil G >Priority: Trivial > Attachments: YARN-7093.YARN-3926.001.patch > > > Improve log message ResourceUtils class. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7086) Release all containers asynchronously
[ https://issues.apache.org/jira/browse/YARN-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140295#comment-16140295 ] Arun Suresh commented on YARN-7086: --- Thanks for chiming in, folks. And yes, I agree with [~jlowe] too. To move forward, and if everyone is fine with the approach, I will post a patch that does the following: * Introduce a *RELEASE_CONTAINERS* scheduler event: will refactor the existing RELEASE_CONTAINER event to take multiple containers. * Will expose an async release method in the AbstractYarnScheduler that takes a list of containers, splits the list into some (configurable?) maximum number of containers released at a time, and sends an event for each sub-list. * Route all calls that release containers from the scheduler to the new API. Currently, the problematic ones are during app attempt completion, node removal, and the scheduler's handling of the AM's explicit container releases. > Release all containers asynchronously > > > Key: YARN-7086 > URL: https://issues.apache.org/jira/browse/YARN-7086 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Arun Suresh >Assignee: Arun Suresh > > We have noticed in production two situations that can cause deadlocks and > cause scheduling of new containers to come to a halt, especially with regard > to applications that have a lot of live containers: > # When these applications release these containers in bulk. > # When these applications terminate abruptly due to some failure, the > scheduler releases all its live containers in a loop. > To handle the issues mentioned above, we have a patch in production to make > sure ALL container releases happen asynchronously - and it has served us well.
> Opening this JIRA to gather feedback on whether this is a good idea generally (cc > [~leftnoteasy], [~jlowe], [~curino], [~kasha], [~subru], [~roniburd]) > BTW, in YARN-6251, we already have an asyncReleaseContainer() in the > AbstractYarnScheduler and a corresponding scheduler event, which is currently > used specifically for the container-update code paths (where the scheduler > releases temp containers which it creates for the update)
[jira] [Created] (YARN-7093) Improve log message in ResourceUtils
Sunil G created YARN-7093: - Summary: Improve log message in ResourceUtils Key: YARN-7093 URL: https://issues.apache.org/jira/browse/YARN-7093 Project: Hadoop YARN Issue Type: Sub-task Reporter: Sunil G Assignee: Sunil G Priority: Trivial Improve log message ResourceUtils class. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7087) NM failed to perform log aggregation due to absent container
[ https://issues.apache.org/jira/browse/YARN-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-7087: - Attachment: YARN-7087.001.patch Attaching a patch that adds the container type to the log aggregation container finished event, which eliminates the need for AppLogAggregatorImpl to look up the container in the context and potentially not find it. This appears to be occurring quite often on our clusters in cases where an application is killed, so it would be great to fix this for 2.8.2. > NM failed to perform log aggregation due to absent container > > > Key: YARN-7087 > URL: https://issues.apache.org/jira/browse/YARN-7087 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation >Affects Versions: 2.8.1 >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Critical > Attachments: YARN-7087.001.patch > > > Saw a case where the NM failed to aggregate the logs for a container because > it claimed it was absent: > {noformat} > 2017-08-23 18:35:38,283 [AsyncDispatcher event handler] WARN > logaggregation.LogAggregationService: Log aggregation cannot be started for > container_e07_1503326514161_502342_01_01, as its an absent container > {noformat} > Containers should not be allowed to disappear if they're not done being fully > processed by the NM.
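The shape of the fix described above — carrying the container type on the finished event itself so the aggregator never has to look the container up in the NM context — can be sketched like this. The class and field names are hypothetical, not the actual patch contents.

```java
// Illustrative sketch: the log-aggregation "container finished" event
// carries the container type directly, so AppLogAggregatorImpl-style
// code no longer needs a context lookup that can fail once the
// container has already been removed ("absent").
public class ContainerFinishedEventSketch {
    public enum ContainerType { APPLICATION_MASTER, TASK }

    private final String containerId;
    private final int exitCode;
    private final ContainerType containerType; // carried on the event

    public ContainerFinishedEventSketch(String containerId, int exitCode,
                                        ContainerType containerType) {
        this.containerId = containerId;
        this.exitCode = exitCode;
        this.containerType = containerType;
    }

    public String getContainerId() { return containerId; }

    public int getExitCode() { return exitCode; }

    public ContainerType getContainerType() {
        return containerType; // no NM context lookup required
    }
}
```

The design choice is the general one of making events self-describing: anything the consumer needs after the source object may have been cleaned up should travel with the event.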
[jira] [Commented] (YARN-7019) Ability for applications to notify YARN about container reuse
[ https://issues.apache.org/jira/browse/YARN-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140216#comment-16140216 ] Sunil G commented on YARN-7019: --- So containers which are scored highest/lowest (as per choice) will be preempted first compared to other containers of the same app, barring the AM container (because we save all AM containers across apps, and preempt them only if no other containers are left). To emphasize once more, this scoring of containers will be done within an app, correct? It is also possible that a container may be selected for preemption and is in the scheduler's wait-to-preempt list (15 sec by default). If the AM re-iterates that this container is back to high priority, I think we might not need to revert its preemption state. > Ability for applications to notify YARN about container reuse > - > > Key: YARN-7019 > URL: https://issues.apache.org/jira/browse/YARN-7019 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Jason Lowe > > During preemption calculations YARN can try to reduce the amount of work lost > by considering how long a container has been running. However when an > application framework like Tez reuses a container across multiple tasks it > changes the work lost calculation since the container has essentially > checkpointed between task assignments. It would be nice if applications > could inform YARN when a container has been reused/checkpointed and therefore > is a better candidate for preemption wrt. lost work than other, younger > containers.
[jira] [Commented] (YARN-7019) Ability for applications to notify YARN about container reuse
[ https://issues.apache.org/jira/browse/YARN-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140193#comment-16140193 ] Jason Lowe commented on YARN-7019: -- bq. I think the challenge will be to multiply an app or framework reported score to scale that to a fair share or priority for the application / user / queue so that apps cannot lie about these scores and rig scheduling decisions. Right, which is one reason I originally raised this as a reuse notification rather than a general "preemption priority" concept. Priorities are prone to escalation wars and abuse. Notifying about reuse arguably only hurts the application offering up that information, since it makes the container more attractive for preemption. In light of that, one way to go about this is to assume no information about a container means the container is super important to the application. Any voluntary information from the application can only lower its importance relative to this default score. So applications that do not participate (i.e.: every single one we have today) will continue to be scored like we do today, where every container is precious to the application. Apps that have reuse or relatively idle containers can update YARN with that information to help YARN make better preemption decisions, and if a container suddenly goes from idle to critical then the score can be adjusted back to the default, "my precious" status. > Ability for applications to notify YARN about container reuse > - > > Key: YARN-7019 > URL: https://issues.apache.org/jira/browse/YARN-7019 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Jason Lowe > > During preemption calculations YARN can try to reduce the amount of work lost > by considering how long a container has been running. 
However when an > application framework like Tez reuses a container across multiple tasks it > changes the work lost calculation since the container has essentially > checkpointed between task assignments. It would be nice if applications > could inform YARN when a container has been reused/checkpointed and therefore > is a better candidate for preemption wrt. lost work than other, younger > containers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
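The default-to-precious scoring idea in the comment above could be sketched roughly as follows. This is purely illustrative; none of these class or method names exist in YARN.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: with no information from the application, a
// container gets the maximum ("my precious") importance score.
// Voluntary updates can only lower a container's score (e.g. after a
// reuse/checkpoint), so an app cannot inflate its containers'
// importance to rig preemption decisions.
public class PreemptionScoreSketch {
    static final int DEFAULT_SCORE = 100;

    private final Map<String, Integer> scores = new HashMap<>();

    int getScore(String containerId) {
        // Unknown containers are treated as maximally important.
        return scores.getOrDefault(containerId, DEFAULT_SCORE);
    }

    void reportScore(String containerId, int score) {
        // Clamp: apps may mark containers as less important, never more.
        scores.put(containerId, Math.min(score, DEFAULT_SCORE));
    }

    // Going from idle back to critical just restores the default.
    void resetToDefault(String containerId) {
        scores.remove(containerId);
    }
}
```

A preemption policy built on this would then prefer evicting the lowest-scored containers first, which matches the incentive argument above: volunteering reuse information only ever makes your own containers more attractive preemption targets.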
[jira] [Commented] (YARN-7074) Fix NM state store update comment
[ https://issues.apache.org/jira/browse/YARN-7074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140175#comment-16140175 ] Botong Huang commented on YARN-7074: Cool, thanks [~bibinchundatt] and [~kasha]! > Fix NM state store update comment > - > > Key: YARN-7074 > URL: https://issues.apache.org/jira/browse/YARN-7074 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-7074.v1.patch > > > A follow up of YARN-6798 to fix a typo. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7019) Ability for applications to notify YARN about container reuse
[ https://issues.apache.org/jira/browse/YARN-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140140#comment-16140140 ] Sunil G commented on YARN-7019: --- bq.It would be nice if the NM could be proactively told during container execution when the cost of preemption changes so it can make better decisions on its own when pressed for time. Thanks [~jlowe]. Yes, it makes sense as NM could also preempt containers. I think a general framework seems better option here as suggested. > Ability for applications to notify YARN about container reuse > - > > Key: YARN-7019 > URL: https://issues.apache.org/jira/browse/YARN-7019 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Jason Lowe > > During preemption calculations YARN can try to reduce the amount of work lost > by considering how long a container has been running. However when an > application framework like Tez reuses a container across multiple tasks it > changes the work lost calculation since the container has essentially > checkpointed between task assignments. It would be nice if applications > could inform YARN when a container has been reused/checkpointed and therefore > is a better candidate for preemption wrt. lost work than other, younger > containers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7079) to support nodemanager ports management
[ https://issues.apache.org/jira/browse/YARN-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140127#comment-16140127 ] Hadoop QA commented on YARN-7079: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 30s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 51s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 44s{color} | {color:red} root generated 13 new + 1291 unchanged - 0 fixed = 1304 total (was 1291) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 12s{color} | {color:orange} root: The patch generated 171 new + 572 unchanged - 0 fixed = 743 total (was 572) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 66 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 32s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 18s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 7s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 4 new + 123 unchanged - 0 fixed = 127 total (was 123) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 33s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 38s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 34s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 48s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 36s{color} | {color:green} hadoop-yarn-server-nodemanager in
[jira] [Commented] (YARN-7091) Rename application to service in yarn-native-services
[ https://issues.apache.org/jira/browse/YARN-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140119#comment-16140119 ] Billie Rinaldi commented on YARN-7091: -- [~jianhe], I am very happy about the effects of these changes. I think standardizing on the name "service" in yarn-native-services will really clarify our terminology. A few comments on patch 01: * regarding hadoop-yarn-services-core, are other modules planned under hadoop-yarn-services? If not, we could combine these into a single module * change YARN_SERVICESAPI_OPTS in yarn-env.sh to YARN_APISERVER_OPTS * change deleteApplication / updateApplication log statements to delete service / update service in ApplicationApiService. Also, the variable name updateAppData should be changed (I am not as concerned about the variable appName, since it’s also the YARN app name) * rename ApplicationApiService and ApplicationApiWebApp to ApiServer and ApiWebApp or ApiServerWebApp * in the yaml, check for usages of “an service” -- it should be “a service.” For “Get an service details” just remove “an.” Check for uses of “app” -- “app-component” should be “component,” other usages of “app” should change to “service” (but look out for “an app”) -- except webapp which is fine. The API records java classes have the same issues (since they are generated from the yaml) * (unrelated) in the yaml, we should mention the format reqs for component name and service name * rename ServiceApiUtil methods with “application” in the name (loadApplication, validateAndResolveApplication, etc.) 
* ActionListArgs and SliderActions have “an service” * there are some git artifacts in the patch - ContainerState.java~HEAD, ContainerState.java~HEAD_0, and a couple other versions of ContainerState > Rename application to service in yarn-native-services > - > > Key: YARN-7091 > URL: https://issues.apache.org/jira/browse/YARN-7091 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-7091.yarn-native-services.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7091) Rename application to service in yarn-native-services
[ https://issues.apache.org/jira/browse/YARN-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-7091: - Summary: Rename application to service in yarn-native-services (was: Some rename changes in yarn-native-services) > Rename application to service in yarn-native-services > - > > Key: YARN-7091 > URL: https://issues.apache.org/jira/browse/YARN-7091 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-7091.yarn-native-services.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7074) Fix NM state store update comment
[ https://issues.apache.org/jira/browse/YARN-7074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139921#comment-16139921 ] Hudson commented on YARN-7074: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12236 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12236/]) YARN-7074. Fix NM state store update comment. Contributed by Botong (bibinchundatt: rev de0cba700bcf4276726c0aa9df8d738787debc17) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMLeveldbStateStoreService.java > Fix NM state store update comment > - > > Key: YARN-7074 > URL: https://issues.apache.org/jira/browse/YARN-7074 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-7074.v1.patch > > > A follow up of YARN-6798 to fix a typo. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7091) Some rename changes in yarn-native-services
[ https://issues.apache.org/jira/browse/YARN-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139918#comment-16139918 ] Hadoop QA commented on YARN-7091: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 48 new or modified test files. {color} | || || || || {color:brown} yarn-native-services Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 35s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 43s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 30s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 16s{color} | {color:green} yarn-native-services passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider . 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 44s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 2 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 59s{color} | {color:green} yarn-native-services passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 19s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 46s{color} | {color:orange} root: The patch generated 140 new + 1124 unchanged - 152 fixed = 1264 total (was 1276) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 25s{color} | {color:green} The patch generated 0 new + 0 unchanged - 13 fixed = 0 total (was 13) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 11s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 93 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 13s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 45s{color} | {color:green} root generated 0 new + 11050 unchanged - 242 fixed = 11050 total (was 11292) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color}
[jira] [Commented] (YARN-7070) some of local cache files for yarn can't be deleted
[ https://issues.apache.org/jira/browse/YARN-7070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139762#comment-16139762 ] Changyao Ye commented on YARN-7070: --- [~jlowe] [~shaneku...@gmail.com] Thank you guys so much. I'll try applying the patch from YARN-6846 first and let you know the problem solved or not. > some of local cache files for yarn can't be deleted > --- > > Key: YARN-7070 > URL: https://issues.apache.org/jira/browse/YARN-7070 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.8.1 > Environment: Hadoop 2.8.1 >Reporter: Changyao Ye > Attachments: application_1501810184023_55949.log > > > We have found some of cache files(in > /tmp/hadoop-yarn/nm-local-dir/usercache/hdfs/appcache) for yarn on > nodemanager cannot be deleted properly. The directories are like > following(blockmgr***) > = > # ls -ltr application_1501810184023_55949 > total 120 > drwx--x--- 2 hdfs yarn 4096 Aug 22 04:29 filecache > drwxr-s--- 2 hdfs yarn 4096 Aug 22 04:56 > blockmgr-881fab2c-fba4-4bb1-8dd9-5ab35a512df7 > drwxr-s--- 10 hdfs yarn 4096 Aug 22 04:56 > blockmgr-bf8a19f5-e9ae-4269-a0ef-b27d0f9c17e7 > drwxr-s--- 11 hdfs yarn 4096 Aug 22 04:58 > blockmgr-f3437e8d-9595-4898-8bda-92ebff3ada1d > drwxr-s--- 18 hdfs yarn 4096 Aug 22 05:01 > blockmgr-930c0cd8-1d31-4cdb-a244-f6ad4bf74bff > drwxr-s--- 12 hdfs yarn 4096 Aug 22 05:13 > blockmgr-83fc0702-ac40-4743-812a-7d488e92004e > drwxr-s--- 9 hdfs yarn 4096 Aug 22 05:13 > blockmgr-f6cfe045-12c3-41d6-b77e-aa5200daeb6a > drwxr-s--- 12 hdfs yarn 4096 Aug 22 05:13 > blockmgr-53dcb4ea-ba5d-4b8b-859b-805b9303a149 > drwxr-s--- 10 hdfs yarn 4096 Aug 22 05:13 > blockmgr-0c0c4bb9-ef5e-4ca1-8d23-ce5cd58d0a75 > drwxr-s--- 9 hdfs yarn 4096 Aug 22 05:13 > blockmgr-557d0f39-67d2-491a-9307-12fc1d724380 > drwxr-s--- 10 hdfs yarn 4096 Aug 22 05:13 > blockmgr-fbc87680-4df7-498e-bf6d-456a5aea4fc9 > drwxr-s--- 10 hdfs yarn 4096 Aug 22 05:13 > 
blockmgr-53ee8251-fac1-4f62-82c2-5e970f0d86ec > drwxr-s--- 9 hdfs yarn 4096 Aug 22 05:14 > blockmgr-5a8bc187-abcf-482d-9da5-e8c4647d4731 > drwxr-s--- 10 hdfs yarn 4096 Aug 22 05:14 > blockmgr-251c3a99-cd85-442a-8945-52c344c0d861 > drwxr-s--- 13 hdfs yarn 4096 Aug 22 05:14 > blockmgr-c352c1ad-15dc-456b-8b62-5b83b9950494 > drwxr-s--- 12 hdfs yarn 4096 Aug 22 05:15 > blockmgr-b4f01347-4b51-4b35-8146-2aa840084c2b > drwxr-s--- 14 hdfs yarn 4096 Aug 22 05:15 > blockmgr-0095d26c-c134-48b4-82a6-e8ae02f0189c > drwxr-s--- 13 hdfs yarn 4096 Aug 22 05:15 > blockmgr-28a31574-61ae-459f-be3a-8608892246d7 > drwxr-s--- 16 hdfs yarn 4096 Aug 22 05:15 > blockmgr-c0cd0df9-b355-4209-b6aa-b549a1fa36eb > drwxr-s--- 11 hdfs yarn 4096 Aug 22 05:15 > blockmgr-a2730abb-9517-461e-bedf-d9a2dcef373f > drwxr-s--- 14 hdfs yarn 4096 Aug 22 05:15 > blockmgr-91dd2e1a-6bc2-4429-8b71-2f4240987159 > drwxr-s--- 12 hdfs yarn 4096 Aug 22 05:15 > blockmgr-f4e3a586-8817-45ea-a197-9fdbb3d91946 > drwxr-s--- 15 hdfs yarn 4096 Aug 22 05:15 > blockmgr-ba2c605e-89d8-4f7c-b42c-6ed4ba6bf4ea > drwxr-s--- 16 hdfs yarn 4096 Aug 22 05:15 > blockmgr-2ae72383-5f72-4002-84a7-e6335b8c2b6c > drwxr-s--- 13 hdfs yarn 4096 Aug 22 05:15 > blockmgr-6c5e260f-d3c7-4af6-91c1-168c73343f2d > drwxr-s--- 16 hdfs yarn 4096 Aug 22 05:15 > blockmgr-2e9923b1-281c-4a9d-8069-6c5430bd5fc3 > drwxr-s--- 18 hdfs yarn 4096 Aug 22 05:15 > blockmgr-cc3f1406-d8a2-4bf5-a276-8f7aed75c513 > drwxr-s--- 11 hdfs yarn 4096 Aug 22 05:15 > blockmgr-975bcce0-84b2-4590-880b-bf182d76e319 > drwxr-s--- 11 hdfs yarn 4096 Aug 22 05:15 > blockmgr-ce82cb63-5998-4227-b85e-77f1c633db43 > drwxr-s--- 11 hdfs yarn 4096 Aug 22 05:15 > blockmgr-592af4aa-3c89-4081-8746-29b99f2220b1 > = > We also applied patches YARN-4594, YARN-4731, but nothing changed. > YARN-4594 https://issues.apache.org/jira/browse/YARN-4594 > YARN-4731 https://issues.apache.org/jira/browse/YARN-4731 > Any advice will be greatly appreciated. 
[jira] [Updated] (YARN-7091) Some rename changes in yarn-native-services
[ https://issues.apache.org/jira/browse/YARN-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-7091: -- Attachment: YARN-7091.yarn-native-services.01.patch > Some rename changes in yarn-native-services > --- > > Key: YARN-7091 > URL: https://issues.apache.org/jira/browse/YARN-7091 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-7091.yarn-native-services.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org