[jira] [Created] (YARN-6965) Duplicate instantiation in FairSchedulerQueueInfo
Masahiro Tanaka created YARN-6965:
-------------------------------------

             Summary: Duplicate instantiation in FairSchedulerQueueInfo
                 Key: YARN-6965
                 URL: https://issues.apache.org/jira/browse/YARN-6965
             Project: Hadoop YARN
          Issue Type: Bug
          Components: fairscheduler
    Affects Versions: 3.0.0-alpha3
            Reporter: Masahiro Tanaka
            Priority: Minor

There is a duplicate instantiation in FairSchedulerQueueInfo.java:
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java#L102-L105

I think this is not a big issue, but we should fix it in order to avoid confusion.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
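The duplicate-instantiation pattern this report describes can be illustrated with a minimal sketch (the class and field names here are hypothetical stand-ins, not the actual FairSchedulerQueueInfo code): the first assignment is a dead store that the second immediately overwrites, so the fix is simply to delete one of them.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a duplicate-instantiation bug (illustrative names only).
public class QueueInfoSketch {
    private List<String> childQueues;

    public QueueInfoSketch() {
        childQueues = new ArrayList<>(); // dead store: overwritten on the next line
        childQueues = new ArrayList<>(); // only this instantiation takes effect
    }

    public List<String> getChildQueues() {
        return childQueues;
    }
}
```

Removing the first assignment changes no behavior, which matches the report's assessment that the issue is minor but confusing.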
[jira] [Created] (YARN-6964) Fair scheduler misuses Resources operations
Daniel Templeton created YARN-6964:
--------------------------------------

             Summary: Fair scheduler misuses Resources operations
                 Key: YARN-6964
                 URL: https://issues.apache.org/jira/browse/YARN-6964
             Project: Hadoop YARN
          Issue Type: Bug
          Components: fairscheduler
    Affects Versions: 3.0.0-alpha4
            Reporter: Daniel Templeton
            Assignee: Daniel Templeton

There are several places where YARN uses the {{Resources}} class to do comparisons of {{Resource}} instances incorrectly. This patch corrects those mistakes.
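The report does not name the specific call sites, but a common way such comparisons go wrong is checking only one resource dimension where a component-wise check is needed. The sketch below is a self-contained stand-in for Hadoop's multi-dimensional Resource/Resources types (not their actual API):

```java
// Illustrative stand-in for a multi-dimensional Resource (memory + vcores).
class Res {
    final long memoryMb;
    final int vcores;

    Res(long memoryMb, int vcores) {
        this.memoryMb = memoryMb;
        this.vcores = vcores;
    }

    // Wrong for capacity checks: considers memory only, ignoring vcores.
    static boolean lessThanMemoryOnly(Res a, Res b) {
        return a.memoryMb < b.memoryMb;
    }

    // Component-wise check: every dimension of `ask` must fit in `avail`.
    static boolean fitsIn(Res ask, Res avail) {
        return ask.memoryMb <= avail.memoryMb && ask.vcores <= avail.vcores;
    }
}
```

An ask of 1 GB / 8 vcores "fits" under the memory-only comparison against 4 GB / 4 vcores of availability, but correctly fails the component-wise check.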
[jira] [Resolved] (YARN-1038) LocalizationProtocolPBClientImpl RPC failing
[ https://issues-test.apache.org/jira/browse/YARN-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du resolved YARN-1038.
------------------------------
    Resolution: Cannot Reproduce

I don't think the trunk branch has this problem now; resolving as Cannot Reproduce.

> LocalizationProtocolPBClientImpl RPC failing
> --------------------------------------------
>
>                 Key: YARN-1038
>                 URL: https://issues-test.apache.org/jira/browse/YARN-1038
>             Project: Hadoop YARN
>          Issue Type: Bug
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Alejandro Abdelnur
>            Priority: Blocker
>
> Trying to run an MR job in trunk is failing with:
> {code}
> 2013-08-06 22:24:21,498 WARN org.apache.hadoop.ipc.Client: interrupted waiting to send rpc request to server
> java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>         at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1019)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1372)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1352)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy25.heartbeat(Unknown Source)
>         at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:250)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:164)
>         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:107)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:977)
> {code}
Re: [RESULT] [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Thanks Konstantin, great work!

On Sat, Aug 5, 2017 at 1:36 PM Konstantin Shvachko wrote:
> My formal vote
> +1 (binding)
>
> I am glad to summarize that with 7 binding and 13 non-binding +1s and no
> -1s, the vote for Apache Release 2.7.4 passes.
> Thank you everybody for contributing to the release, testing it, and voting.
>
> Binding +1s (7)
> Zhe Zhang
> Jason Lowe
> Eric Payne
> Sunil G
> Akira Ajisaka
> Chris Douglas
> Konstantin Shvachko
>
> Non-binding +1s (13)
> John Zhuge
> Surendra Lilhore
> Masatake Iwasaki
> Hanisha Koneru
> Chen Liang
> Fnu Ajay Kumar
> Brahma Reddy Battula
> Edwina Lu
> Ye Zhou
> Eric Badger
> Mingliang Liu
> Kuhu Shukla
> Erik Krogen
>
> Thanks,
> --Konstantin
>
> On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko wrote:
> > Hi everybody,
> >
> > Here is the next release of the Apache Hadoop 2.7 line. The previous stable
> > release, 2.7.3, has been available since 25 August, 2016.
> > Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
> > critical bug fixes and major optimizations. See more details in the Release
> > Note: http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
> >
> > The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
> >
> > Please give it a try and vote on this thread. The vote will run for 5 days,
> > ending 08/04/2017.
> >
> > Please note that my up-to-date public key is available from:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > Please don't forget to refresh the page if you've been there recently.
> > There are other places on Apache sites which may contain my outdated key.
> >
> > Thanks,
> > --Konstantin

--
Zhe Zhang
Apache Hadoop Committer
http://zhe-thoughts.github.io/about/ | @oldcap
[jira] [Created] (YARN-6963) Prevent other containers from starting when a container is re-initializing

Arun Suresh created YARN-6963:
---------------------------------

             Summary: Prevent other containers from starting when a container is re-initializing
                 Key: YARN-6963
                 URL: https://issues.apache.org/jira/browse/YARN-6963
             Project: Hadoop YARN
          Issue Type: Bug
            Reporter: Arun Suresh
            Assignee: Arun Suresh

Further to discussions in YARN-6920: container re-initialization leads to a momentary relinquishing of NM resources when the container is brought down, followed by re-claiming of the same resources when it is re-launched. If there are Opportunistic containers in the queue, this can lead to unnecessary churn if one of those opportunistic containers is started and immediately killed. This JIRA tracks the changes required to prevent the above by ensuring that the resources for a container are 'locked' for the duration of the container's lifetime, including the time it takes for a re-initialization.
[jira] [Created] (YARN-6962) Federation interceptor should support full allocate request/response api
Botong Huang created YARN-6962:
----------------------------------

             Summary: Federation interceptor should support full allocate request/response api
                 Key: YARN-6962
                 URL: https://issues.apache.org/jira/browse/YARN-6962
             Project: Hadoop YARN
          Issue Type: Bug
            Reporter: Botong Huang
            Assignee: Botong Huang
            Priority: Minor
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/485/

[Aug 6, 2017 7:19:23 PM] (stevel) HADOOP-14722. Azure: BlockBlobInputStream position incorrect after seek.

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs : module:hadoop-hdfs-project/hadoop-hdfs-client
       Possible exposure of partially initialized object in org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At DFSClient.java:[line 2906]
       org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) makes inefficient use of keySet iterator instead of entrySet iterator At SlowDiskReports.java:[line 105]

    FindBugs : module:hadoop-hdfs-project/hadoop-hdfs
       Possible null pointer dereference in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to return value of called method At JournalNode.java:[line 302]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) unconditionally sets the field force At HdfsServerConstants.java:[line 217]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List) due to return value of called method At DataStorage.java:[line 1339]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long) due to return value of called method At NNStorageRetentionManager.java:[line 258]
       Useless condition: argv.length >= 1 at this point At DFSAdmin.java:[line 2100]
       Useless condition: numBlocks == -1 at this point At ImageLoaderCurrent.java:[line 727]

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
       Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 642]
       org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 719]
       Hard coded reference to an absolute pathname in org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext) At DockerLinuxContainerRuntime.java:[line 490]
       org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus() makes inefficient use of keySet iterator
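Several of the findbugs findings above are the "inefficient use of keySet iterator instead of entrySet iterator" pattern. The sketch below uses generic names (not the flagged Hadoop methods) to show the difference: iterating keySet() forces a second hash lookup per key, while entrySet() yields key and value together.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the findbugs WMI_WRONG_MAP_ITERATOR pattern (illustrative only).
class MapIterationSketch {
    // Flagged style: each get(key) is an extra lookup into the map.
    static long sumViaKeySet(Map<String, Long> m) {
        long sum = 0;
        for (String key : m.keySet()) {
            sum += m.get(key); // second lookup per entry
        }
        return sum;
    }

    // Preferred style: the entry already carries the value.
    static long sumViaEntrySet(Map<String, Long> m) {
        long sum = 0;
        for (Map.Entry<String, Long> e : m.entrySet()) {
            sum += e.getValue(); // no extra lookup
        }
        return sum;
    }
}
```

Both methods return the same result; only the number of map lookups differs, which is why findbugs flags it as inefficiency rather than a correctness bug.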
[jira] [Created] (YARN-6961) Remove commons-logging dependency from hadoop-yarn-server-applicationhistoryservice module
Akira Ajisaka created YARN-6961:
-----------------------------------

             Summary: Remove commons-logging dependency from hadoop-yarn-server-applicationhistoryservice module
                 Key: YARN-6961
                 URL: https://issues.apache.org/jira/browse/YARN-6961
             Project: Hadoop YARN
          Issue Type: Bug
          Components: build
            Reporter: Akira Ajisaka
            Priority: Minor

YARN-6873 removed the usage of commons-logging APIs, so the dependency can be removed.
[jira] [Created] (YARN-6960) definition of active queue allows idle long-running apps to distort fair shares
Steven Rand created YARN-6960:
---------------------------------

             Summary: definition of active queue allows idle long-running apps to distort fair shares
                 Key: YARN-6960
                 URL: https://issues.apache.org/jira/browse/YARN-6960
             Project: Hadoop YARN
          Issue Type: Bug
          Components: fairscheduler
    Affects Versions: 3.0.0-alpha4, 2.8.1
            Reporter: Steven Rand
            Assignee: Steven Rand

YARN-2026 introduced the notion of only considering active queues when computing the fair share of each queue. The definition of an active queue is a queue with at least one runnable app:

{code}
  public boolean isActive() {
    return getNumRunnableApps() > 0;
  }
{code}

One case that this definition of activity doesn't account for is that of long-running applications that scale dynamically. Such an application might request many containers when jobs are running, but scale down to very few containers, or only the AM container, when no jobs are running. Even when such an application has scaled down to a negligible amount of demand and utilization, the queue that it's in is still considered to be active, which defeats the purpose of YARN-2026.

For example, consider this scenario:

1. We have queues {{root.a}}, {{root.b}}, {{root.c}}, and {{root.d}}, all of which have the same weight.
2. Queues {{root.a}} and {{root.b}} contain long-running applications that currently have only one container each (the AM).
3. An application in queue {{root.c}} starts, and uses the whole cluster except for the small amount in use by {{root.a}} and {{root.b}}.
4. An application in {{root.d}} starts, and has a high enough demand to be able to use half of the cluster.

Because all four queues are active, the app in {{root.d}} can only preempt the app in {{root.c}} up to roughly 25% of the cluster's resources, while the app in {{root.c}} keeps about 75%. Ideally in this example, the app in {{root.d}} would be able to preempt the app in {{root.c}} up to 50% of the cluster, which would be possible if the idle apps in {{root.a}} and {{root.b}} didn't cause those queues to be considered active.

One way to address this is to update the definition of an active queue to be a queue containing one or more non-AM containers. This way, if all apps in a queue scale down to only the AM, other queues' fair shares aren't affected. The benefit of this approach is that it's quite simple. The downside is that it doesn't account for apps that are idle and using almost no resources, but still have at least one non-AM container.

There are a couple of other options that seem plausible to me, but they're much more complicated, and it seems to me that this proposal makes good progress while adding minimal extra complexity. Does this seem like a reasonable change? I'm certainly open to better ideas as well.

Thanks,
Steve
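The proposed redefinition can be sketched minimally (the field and method names below are hypothetical stand-ins, not the actual FSQueue API): a queue whose only app has shrunk to just its AM container is active under the current definition but inactive under the proposed one.

```java
// Illustrative sketch of the two activity definitions (hypothetical names).
class QueueActivitySketch {
    final int runnableApps;
    final int totalContainers; // includes AM containers
    final int amContainers;

    QueueActivitySketch(int runnableApps, int totalContainers, int amContainers) {
        this.runnableApps = runnableApps;
        this.totalContainers = totalContainers;
        this.amContainers = amContainers;
    }

    // Current definition: any runnable app makes the queue active.
    boolean isActiveCurrent() {
        return runnableApps > 0;
    }

    // Proposed definition: require at least one non-AM container.
    boolean isActiveProposed() {
        return totalContainers - amContainers > 0;
    }
}
```

In the four-queue scenario above, {{root.a}} and {{root.b}} would be inactive under the proposed definition, so the fair shares of {{root.c}} and {{root.d}} would each converge toward roughly 50%.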
[jira] [Resolved] (YARN-6915) Support slf4j API for YARN-2901
[ https://issues.apache.org/jira/browse/YARN-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka resolved YARN-6915.
---------------------------------
    Resolution: Duplicate

Closed.

> Support slf4j API for YARN-2901
> -------------------------------
>
>                 Key: YARN-6915
>                 URL: https://issues.apache.org/jira/browse/YARN-6915
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Akira Ajisaka
>
> YARN-6873 will move logging APIs to slf4j; however, it will break YARN-2901. We need to support the slf4j API for YARN-2901.
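The commons-logging to slf4j migration that these JIRAs track mostly swaps Log/LogFactory for Logger/LoggerFactory and replaces string concatenation with {} placeholders. The before/after lines appear as comments below; the runnable part is only a tiny stand-in formatter used to demonstrate the placeholder style (it is not slf4j's actual implementation):

```java
// Sketch of the commons-logging -> slf4j migration discussed above.
class LoggingMigrationSketch {
    // Before (commons-logging):
    //   private static final Log LOG = LogFactory.getLog(Foo.class);
    //   LOG.debug("started container " + id);  // concatenates even when debug is off
    //
    // After (slf4j):
    //   private static final Logger LOG = LoggerFactory.getLogger(Foo.class);
    //   LOG.debug("started container {}", id); // message built only if debug is enabled

    // Toy stand-in for slf4j's "{}" substitution, for demonstration only.
    static String format(String template, Object arg) {
        return template.replace("{}", String.valueOf(arg));
    }
}
```

The parameterized form is the main readability and performance win of the migration: the formatted message is only constructed when the log level is enabled.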
[jira] [Created] (YARN-6959) RM may allocate wrong AM Container for new attempt
Yuqi Wang created YARN-6959:
-------------------------------

             Summary: RM may allocate wrong AM Container for new attempt
                 Key: YARN-6959
                 URL: https://issues.apache.org/jira/browse/YARN-6959
             Project: Hadoop YARN
          Issue Type: Bug
          Components: scheduler
    Affects Versions: 2.7.1
            Reporter: Yuqi Wang
            Assignee: Yuqi Wang
             Fix For: 3.0.0-alpha4, 2.7.1

*Issue Summary:*
A previous attempt's ResourceRequests may be recorded into the current attempt's ResourceRequests. These mis-recorded ResourceRequests can confuse AM Container request and allocation for the current attempt.

*Issue Pipeline:*

{code:java}
// Executing precondition check for the incoming attempt id.
ApplicationMasterService.allocate() ->
  scheduler.allocate(attemptId, ask, ...) ->

    // The precondition check for the attempt id may be outdated here,
    // i.e. currentAttempt may not be the attempt corresponding to attemptId;
    // for example, the attempt id may correspond to the previous attempt.
    currentAttempt = scheduler.getApplicationAttempt(attemptId) ->

    // The previous attempt's ResourceRequests may be recorded into the
    // current attempt's ResourceRequests.
    currentAttempt.updateResourceRequests(ask) ->

// RM may allocate the wrong AM Container for the current attempt, because its
// ResourceRequests may come from the previous attempt (which can be any
// ResourceRequests the previous AM asked for), and there is no matching logic
// between the original AM Container ResourceRequest and the returned
// amContainerAllocation below.
AMContainerAllocatedTransition.transition(...) ->
  amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
{code}

*Patch Correctness:*
After this patch, RM will record ResourceRequests from different attempts in different objects of SchedulerApplicationAttempt.AppSchedulingInfo. So even if RM still records ResourceRequests from an old attempt at any time, those ResourceRequests will be recorded in the old AppSchedulingInfo object, which will not impact the current attempt's resource requests and allocation.

*Concerns:*
The getApplicationAttempt function in AbstractYarnScheduler is confusing; we should rename it to getCurrentApplicationAttempt, and reconsider whether there are any other bugs related to getApplicationAttempt.
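The invariant the patch establishes, per the "Patch Correctness" section, is that asks are recorded per attempt. A minimal self-contained sketch of that bookkeeping (hypothetical names, not the actual AppSchedulingInfo API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: record ResourceRequests keyed by the attempt they arrived for,
// so a late ask from attempt 1 can never land in attempt 2's state.
class PerAttemptAsks {
    private final Map<Integer, List<String>> asksByAttempt = new HashMap<>();

    void recordAsk(int attemptId, String ask) {
        asksByAttempt.computeIfAbsent(attemptId, k -> new ArrayList<>()).add(ask);
    }

    List<String> asksFor(int attemptId) {
        return asksByAttempt.getOrDefault(attemptId, Collections.emptyList());
    }
}
```

With this separation, a stale allocate() call carrying an old attempt id only updates the old attempt's record, which is exactly why the patch is safe even without fixing the precondition check itself.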
[jira] [Created] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-timelineservice
Yeliang Cang created YARN-6958:
----------------------------------

             Summary: Moving logging APIs over to slf4j in hadoop-yarn-timelineservice
                 Key: YARN-6958
                 URL: https://issues.apache.org/jira/browse/YARN-6958
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Yeliang Cang
            Assignee: Yeliang Cang
Re: Apache Hadoop 2.8.2 Release Plan
Hello community,

Here is a quick update on status for 2.8.2:
- We are at 0 blockers now!
- There are still 9 critical issues; 8 of them are Patch Available and being actively worked on.

For details of the pending blocker/critical issues, please refer to: https://s.apache.org/JM5x

I am planning to cut the first RC in the week of Aug. 21st to give these critical issues a bit more time (~2 weeks) to get fixed. Let's work towards the first production GA release of Apache Hadoop 2.8 - let me know if you have any thoughts or comments.

Cheers,
Junping

From: Junping Du
Sent: Monday, July 24, 2017 1:41 PM
To: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
Subject: Re:

I have done the change.

All committers,
The 2.8.2 release is supposed to be a stable/production release for branch-2.8. For commits to go into the 2.8.2 release (only important and low-risk bug fixes), please commit to trunk, branch-2, branch-2.8, and branch-2.8.2. For unimportant or high-risk bug fixes/improvements, please commit to branch-2.8 (trunk/branch-2) only and mark the JIRA as fixed in 2.8.3. Thanks for your cooperation!

Thanks,
Junping

From: Junping Du
Sent: Monday, July 24, 2017 10:36 AM
To: Brahma Reddy Battula; Vinod Kumar Vavilapalli
Cc: Kihwal Lee; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org; Jason Lowe; humbed...@apache.org
Subject: Re: Apache Hadoop 2.8.2 Release Plan

Nice catch, Brahma. Actually, this is not supposed to happen, as we all should know a patch should first land on branch-2.8 (as well as trunk and branch-2) before landing on branch-2.8.2. Anyway, we always check that JIRAs claimed to be fixed in a specific release version have commits actually landing on the releasing branch before kicking off an RC, so I am not too worried about this mistaken behavior, which happens in every release. If there are no other concerns, I will do the branch update in the next 30 minutes.

Thanks,
Junping

From: Brahma Reddy Battula
Sent: Sunday, July 23, 2017 1:50 AM
To: Junping Du; Vinod Kumar Vavilapalli
Cc: Kihwal Lee; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org; Jason Lowe; humbed...@apache.org
Subject: Re: Apache Hadoop 2.8.2 Release Plan

Just executed "git log branch-2.8 ^branch-2.8.2" and found two commits missing (HDFS-8312 and HADOOP-13867) in branch-2.8. I just pushed these two commits. Hope we won't miss any commits which are present only in branch-2.8.2.

From: Junping Du
Sent: Saturday, July 22, 2017 5:57 AM
To: Vinod Kumar Vavilapalli
Cc: Kihwal Lee; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org; Jason Lowe; humbed...@apache.org
Subject: Re: Apache Hadoop 2.8.2 Release Plan

I already heard back from Daniel of the ASF INFRA team, and plan to do the following operations next Monday morning:
1. Drop the current branch-2.8.2 and recut branch-2.8.2 from branch-2.8.
2. Drop the abandoned branch-2.8.1 and rename branch-2.8.1-private to branch-2.8.1, which is where we just released 2.8.1 from.
I will also adjust the fix versions on all affected JIRAs accordingly. If you have any concerns about the above operations, please raise them before the end of this Sunday (7/23).

Thanks,
Junping

From: Junping Du
Sent: Friday, July 21, 2017 2:29 PM
To: Vinod Kumar Vavilapalli
Cc: Kihwal Lee; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org; Jason Lowe
Subject: Re: Apache Hadoop 2.8.2 Release Plan

Makes sense; just raised: https://issues.apache.org/jira/browse/INFRA-14669

Thanks,
Junping

From: Vinod Kumar Vavilapalli
Sent: Friday, July 21, 2017 12:31 PM
To: Junping Du
Cc: Kihwal Lee; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org; Jason Lowe
Subject: Re: Apache Hadoop 2.8.2 Release Plan

Junping,

If we are looking at a month, I'd not rebranch branch-2.8.2 right now, given how these things go. We can just continue to commit on branch-2.8 for now.

I also think we should just follow up with ASF INFRA and clean up the branches:
- Delete branch-2.8.2 so that we can recreate it afresh a little later.
- branch-2.8.1 is also stale and should be deleted. branch-2.8.1-private should be renamed to branch-2.8.1.

Thanks
+Vinod

> On Jul 21, 2017, at 11:23 AM, Junping Du wrote:
>
> Thanks for suggestions, Jason and Kihwal!
> +1 on releasing 2.8.2 on latest branch-2.8 too.