RE: About 2.7.4 Release
Hi All,

Any update on 2.7.4? Gentle reminder! Let me know if there is anything I can help with.

Regards,
Brahma Reddy Battula

-----Original Message-----
From: Andrew Wang [mailto:andrew.w...@cloudera.com]
Sent: 08 March 2017 04:22
To: Sangjin Lee
Cc: Marton Elek; Hadoop Common; yarn-...@hadoop.apache.org; Hdfs-dev; mapreduce-...@hadoop.apache.org
Subject: Re: About 2.7.4 Release

Our release steps are documented on the wiki:

2.6/2.7: https://wiki.apache.org/hadoop/HowToReleasePreDSBCR
2.8+: https://wiki.apache.org/hadoop/HowToRelease

I think given the push toward 2.8 and 3.0, there's less interest in streamlining the 2.6 and 2.7 release processes. CHANGES.txt is the biggest pain, and that's fixed in 2.8+.

Current pain points for 2.8+ include:

# fixing up JIRA versions and the release notes, though I somewhat addressed this with the versions script for 3.x
# making and staging an RC and sending the vote email still requires a lot of manual steps
# publishing the release is also quite manual

I think the RC issues can be attacked with enough scripting. Steve had an ant file that automated a lot of this for slider. I think it'd be nice to have a nightly Jenkins job that builds an RC, since I've spent a day or two for each 3.x alpha fixing build issues.

Publishing can be attacked via a mix of scripting and revamping the darned website. Forrest is pretty bad compared to the newer static site generators out there (e.g. you need to write XML instead of markdown, it's hard to review a staging site because of all the absolute links, it's hard to customize, did I mention XML?), and the look and feel of the site is from the '00s. We don't actually have that much site content, so it should be possible to migrate to a new system.

On Tue, Mar 7, 2017 at 9:13 AM, Sangjin Lee wrote:

> I don't think there should be any linkage between releasing 2.8.0 and
> 2.7.4. If we have a volunteer for releasing 2.7.4, we should go full
> speed ahead.
> We still need a volunteer from a PMC member or a committer as some tasks
> may require certain privileges, but I don't think that precludes working
> with others to close down the release.
>
> I for one would like to see more frequent releases, and being able to
> automate release steps more would go a long way.
>
> On Tue, Mar 7, 2017 at 2:16 AM, Marton Elek wrote:
>
> > Is there any reason to wait for 2.8 with 2.7.4?
> >
> > Unfortunately the previous thread about release cadence ended without a
> > final decision. But if I understood it well, there was more or less an
> > agreement that it would be great to achieve more frequent releases, if
> > possible (with or without written rules and an EOL policy).
> >
> > I would personally prefer to stay closer to the scheduling part of the
> > proposal:
> >
> > "A minor release on the latest major line should be every 6 months,
> > and a maintenance release on a minor release (as there may be
> > concurrently maintained minor releases) every 2 months".
> >
> > I don't know what the hardest part of creating new minor/maintenance
> > releases is. But if the problems are technical (smoke testing, unit
> > tests, an old release script, anything else) I would be happy to do any
> > task for new maintenance releases (or more frequent releases).
> >
> > Regards,
> > Marton
> >
> > ________________________________________
> > From: Akira Ajisaka
> > Sent: Tuesday, March 07, 2017 7:34 AM
> > To: Brahma Reddy Battula; Hadoop Common; yarn-...@hadoop.apache.org;
> > Hdfs-dev; mapreduce-...@hadoop.apache.org
> > Subject: Re: About 2.7.4 Release
> >
> > Probably 2.8.0 will be released soon.
> > https://issues.apache.org/jira/browse/HADOOP-13866?focusedCommentId=15898379&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15898379
> >
> > I'm thinking the 2.7.4 release process starts after the 2.8.0 release,
> > so 2.7.4 will be released in April or May (hopefully).
> >
> > Thoughts?
> >
> > Regards,
> > Akira
> >
> > On 2017/03/01 21:01, Brahma Reddy Battula wrote:
> >
> > > Hi All
> > >
> > > It has been six months since the branch-2.7 release. Is there any
> > > near-term plan for 2.7.4?
> > >
> > > Thanks & Regards
> > > Brahma Reddy Battula

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Created] (HDFS-11665) HttpFSServerWebServer$deprecateEnv may leak secret
John Zhuge created HDFS-11665:
---------------------------------

Summary: HttpFSServerWebServer$deprecateEnv may leak secret
Key: HDFS-11665
URL: https://issues.apache.org/jira/browse/HDFS-11665
Project: Hadoop HDFS
Issue Type: Bug
Components: httpfs, security
Affects Versions: 3.0.0-alpha3
Reporter: John Zhuge
Assignee: John Zhuge

May print the secret in a warning message:

{code}
LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
    + " property {} = '{}', please set the property in {} instead.",
    varName, value, propName, propValue, confFile);
{code}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
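A likely direction for the fix is to avoid interpolating the raw value into the log line. A minimal, self-contained sketch of the idea; the `redactIfSensitive` helper and its name-based heuristic are hypothetical illustrations, not actual HttpFS code:

```java
public class RedactDemo {

    /**
     * Hypothetical helper: mask the value of an environment variable whose
     * name suggests it holds a credential, so warnings can mention the
     * variable without echoing the secret itself.
     */
    static String redactIfSensitive(String varName, String value) {
        String upper = varName.toUpperCase();
        if (upper.contains("SECRET") || upper.contains("PASS")) {
            return "<redacted>";
        }
        return value;
    }

    public static void main(String[] args) {
        // A keystore password is masked; an ordinary setting passes through.
        System.out.println(redactIfSensitive("HTTPFS_SSL_KEYSTORE_PASS", "hunter2"));
        System.out.println(redactIfSensitive("HTTPFS_HTTP_PORT", "14000"));
    }
}
```

The warning would then log `redactIfSensitive(varName, value)` in place of `value` (and likewise for `propValue`), keeping the deprecation notice without the leak.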
[jira] [Created] (HDFS-11664) Invalidate blocks cannot be invoked because replication work is slow while cluster storage is almost full
DENG FEI created HDFS-11664:
-------------------------------

Summary: Invalidate blocks cannot be invoked because replication work is slow while cluster storage is almost full
Key: HDFS-11664
URL: https://issues.apache.org/jira/browse/HDFS-11664
Project: Hadoop HDFS
Issue Type: Improvement
Components: namenode
Affects Versions: 3.0.0-alpha2, 2.7.2

Currently block invalidation is executed only after replication work. When 'dfs remaining' is low enough to affect block writing, storage cannot be released quickly because no deletes happen while the daemon is stuck computing replication work. Splitting replication and invalidation may be better.
[jira] [Created] (HDFS-11663) [READ] Fix NullPointerException in ProvidedBlocksBuilder
Virajith Jalaparti created HDFS-11663:
-----------------------------------------

Summary: [READ] Fix NullPointerException in ProvidedBlocksBuilder
Key: HDFS-11663
URL: https://issues.apache.org/jira/browse/HDFS-11663
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Virajith Jalaparti
[jira] [Created] (HDFS-11662) TestJobEndNotifier.testNotificationTimeout fails intermittently
Eric Badger created HDFS-11662:
----------------------------------

Summary: TestJobEndNotifier.testNotificationTimeout fails intermittently
Key: HDFS-11662
URL: https://issues.apache.org/jira/browse/HDFS-11662
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Eric Badger

{noformat}
junit.framework.AssertionFailedError: null
    at junit.framework.Assert.fail(Assert.java:55)
    at junit.framework.Assert.assertTrue(Assert.java:22)
    at junit.framework.Assert.assertTrue(Assert.java:31)
    at junit.framework.TestCase.assertTrue(TestCase.java:201)
    at org.apache.hadoop.mapred.TestJobEndNotifier.testNotificationTimeout(TestJobEndNotifier.java:182)
{noformat}

This test depends on absolute timing, which can't be guaranteed. If {{JobEndNotifier.localRunnerNotification(jobConf, jobStatus);}} doesn't run in less than 2 seconds, the test will fail. Loading up my machine can cause this failure consistently.
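A common way to remove this kind of dependence on absolute timing is to wait on an explicit completion signal with a generous ceiling, rather than asserting that the work finishes inside a fixed wall-clock window. A self-contained sketch; the `runAndAwait` helper is a hypothetical illustration, not the actual test code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchWaitDemo {

    /**
     * Run the work on a background thread and wait for it to signal
     * completion, up to a generous ceiling. A loaded machine only makes the
     * wait longer, not the assertion fail, as long as the ceiling is ample.
     */
    static boolean runAndAwait(Runnable work, long ceilingMillis) {
        CountDownLatch done = new CountDownLatch(1);
        Thread t = new Thread(() -> {
            work.run();
            done.countDown(); // signal completion explicitly
        });
        t.setDaemon(true);
        t.start();
        try {
            return done.await(ceilingMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-in for the notification call; a 30s ceiling replaces the
        // brittle "must finish in 2s" assumption.
        boolean finished = runAndAwait(() -> { /* notification work */ }, 30_000L);
        System.out.println(finished ? "notification observed" : "timed out");
    }
}
```

The test would then assert on `runAndAwait(...)` being true, which fails only if the notification never happens within the ceiling, not if it is merely slow.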
[jira] [Created] (HDFS-11661) GetContentSummary uses excessive amounts of memory
Nathan Roberts created HDFS-11661:
-------------------------------------

Summary: GetContentSummary uses excessive amounts of memory
Key: HDFS-11661
URL: https://issues.apache.org/jira/browse/HDFS-11661
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.8.0
Reporter: Nathan Roberts
Priority: Blocker

ContentSummaryComputationContext::nodeIncluded() is being used to keep track of all INodes visited during the current content summary calculation. This can be all of the INodes in the filesystem, making for a VERY large hash table. This simply won't work on large filesystems.

We noticed this after upgrading: a namenode with ~100 million filesystem objects was spending significantly more time in GC. Fortunately this system had some memory breathing room; other clusters we have will not run with this additional demand on memory.

This was added as part of HDFS-10797 as a way of keeping track of INodes that have already been accounted for, to avoid double counting.
[jira] [Created] (HDFS-11660) TestFsDataset#testPageRounder fails intermittently with AssertionError
Andrew Wang created HDFS-11660:
----------------------------------

Summary: TestFsDataset#testPageRounder fails intermittently with AssertionError
Key: HDFS-11660
URL: https://issues.apache.org/jira/browse/HDFS-11660
Project: Hadoop HDFS
Issue Type: Bug
Components: test
Affects Versions: 2.6.5
Reporter: Andrew Wang
Assignee: Andrew Wang

We've seen this test fail occasionally with an error like the following:

{noformat}
java.lang.AssertionError
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:510)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:695)
    at java.lang.Thread.run(Thread.java:745)
{noformat}

This assertion fires when the heartbeat response is null.
[jira] [Created] (HDFS-11659) TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail due to no
Lei (Eddy) Xu created HDFS-11659:
------------------------------------

Summary: TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail due to no
Key: HDFS-11659
URL: https://issues.apache.org/jira/browse/HDFS-11659
Project: Hadoop HDFS
Issue Type: Bug
Components: datanode
Affects Versions: 3.0.0-alpha2, 2.7.3
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu

The test fails with the following error messages:

{code}
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:57377,DS-b4ec61fc-657c-4e2a-9dc3-8d93b7769a2b,DISK], DatanodeInfoWithStorage[127.0.0.1:47448,DS-18bca8d7-048d-4d7f-9594-d2df16096a3d,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:57377,DS-b4ec61fc-657c-4e2a-9dc3-8d93b7769a2b,DISK], DatanodeInfoWithStorage[127.0.0.1:47448,DS-18bca8d7-048d-4d7f-9594-d2df16096a3d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1280)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1354)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1512)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1236)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:721)
{code}

In such a case, the DataNode that has been removed cannot be used in the pipeline recovery.
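For reference, the exception names the client-side knob directly. A sketch of how it might look in a client's hdfs-site.xml (the accepted values are DEFAULT, ALWAYS, and NEVER; the NEVER setting shown here is illustrative and is usually only appropriate for small test clusters, since it disables datanode replacement on failure):

```xml
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- Illustrative: skip replacement attempts entirely in a 2-3 node
       mini-cluster, where no spare datanode exists to replace a bad one. -->
  <value>NEVER</value>
</property>
```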
Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
disallowing force pushes to trunk was done back in:

* August 2014: INFRA-8195
* February 2016: INFRA-11136

On Mon, Apr 17, 2017 at 11:18 AM, Jason Lowe wrote:

> I found at least one commit that was dropped, MAPREDUCE-6673. I was able
> to cherry-pick the original commit hash since it was recorded in the
> commit email.
>
> This begs the question of why we're allowing force pushes to trunk. I
> thought we asked to have that disabled the last time trunk was
> accidentally clobbered?
>
> Jason
>
> On Monday, April 17, 2017 10:18 AM, Arun Suresh wrote:
>
> > Hi
> >
> > I had the Apr-14 eve version of trunk on my local machine. I've pushed
> > that. Don't know if anything was committed over the weekend though.
> >
> > Cheers
> > -Arun
> >
> > On Mon, Apr 17, 2017 at 7:17 AM, Anu Engineer wrote:
> >
> >> Hi Allen,
> >>
> >> https://issues.apache.org/jira/browse/INFRA-13902
> >>
> >> That happened with the ozone branch too. It was an inadvertent force
> >> push. Infra has advised us to force push the latest branch if you
> >> have it.
> >>
> >> Thanks
> >> Anu
> >>
> >> On 4/17/17, 7:10 AM, "Allen Wittenauer" wrote:
> >>
> >> > Looks like someone reset HEAD back to Mar 31.
> >> >
> >> > > On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server
> >> > > <jenk...@builds.apache.org> wrote:
> >> > >
> >> > > For more details, see
> >> > > https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/378/
> >> > >
> >> > > -1 overall
> >> > >
> >> > > The following subsystems voted -1:
> >> > >     docker
> >> > >
> >> > > Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

--
busbey
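The cherry-pick recovery Jason describes can be sketched end to end in a throwaway repository; the commit subject and setup below are illustrative, not from the actual incident:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

# Two commits; the second stands in for the dropped MAPREDUCE-6673 commit.
echo base > f.txt && git add f.txt && git commit -qm "base commit"
echo fix >> f.txt && git commit -qam "MAPREDUCE-6673. Example dropped commit"

lost=$(git rev-parse HEAD)     # the hash, as recorded in the commit email
git reset -q --hard HEAD~1     # simulate the accidental force push / reset
git cherry-pick "$lost"        # re-apply the dropped commit by its old hash
git log -1 --pretty=%s
```

The object for the lost commit survives the reset, so as long as someone has its hash (here, from the commit notification email) it can be cherry-picked back onto the rewound branch.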
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/

No changes


-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:

        hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.server.timeline.TestRollingLevelDB
        hadoop.yarn.server.timeline.TestTimelineDataManager
        hadoop.yarn.server.timeline.TestLeveldbTimelineStore
        hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
        hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
        hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
        hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
        hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestShuffleHandler
        hadoop.mapreduce.v2.app.TestRuntimeEstimators
        hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService

    Timed out junit tests:

        org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
        org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

    compile:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-compile-root.txt [136K]
    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-compile-root.txt [136K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-compile-root.txt [136K]
    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [372K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [16K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [72K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt [28K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt [24K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt [8.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt [8.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [20K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/291/artifact/out/patch-unit-hadoop-mapreduce-pr
Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
I found at least one commit that was dropped, MAPREDUCE-6673. I was able to cherry-pick the original commit hash since it was recorded in the commit email.

This begs the question of why we're allowing force pushes to trunk. I thought we asked to have that disabled the last time trunk was accidentally clobbered?

Jason

On Monday, April 17, 2017 10:18 AM, Arun Suresh wrote:

> Hi
>
> I had the Apr-14 eve version of trunk on my local machine. I've pushed
> that. Don't know if anything was committed over the weekend though.
>
> Cheers
> -Arun
>
> On Mon, Apr 17, 2017 at 7:17 AM, Anu Engineer wrote:
>
> > Hi Allen,
> >
> > https://issues.apache.org/jira/browse/INFRA-13902
> >
> > That happened with the ozone branch too. It was an inadvertent force
> > push. Infra has advised us to force push the latest branch if you
> > have it.
> >
> > Thanks
> > Anu
> >
> > On 4/17/17, 7:10 AM, "Allen Wittenauer" wrote:
> >
> > > Looks like someone reset HEAD back to Mar 31.
> > >
> > > > On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server
> > > > <jenk...@builds.apache.org> wrote:
> > > >
> > > > For more details, see
> > > > https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/378/
> > > >
> > > > -1 overall
> > > >
> > > > The following subsystems voted -1:
> > > >     docker
> > > >
> > > > Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org
Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Hi

I had the Apr-14 eve version of trunk on my local machine. I've pushed that. Don't know if anything was committed over the weekend though.

Cheers
-Arun

On Mon, Apr 17, 2017 at 7:17 AM, Anu Engineer wrote:

> Hi Allen,
>
> https://issues.apache.org/jira/browse/INFRA-13902
>
> That happened with the ozone branch too. It was an inadvertent force
> push. Infra has advised us to force push the latest branch if you
> have it.
>
> Thanks
> Anu
>
> On 4/17/17, 7:10 AM, "Allen Wittenauer" wrote:
>
> > Looks like someone reset HEAD back to Mar 31.
> >
> > > On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server
> > > <jenk...@builds.apache.org> wrote:
> > >
> > > For more details, see
> > > https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/378/
> > >
> > > -1 overall
> > >
> > > The following subsystems voted -1:
> > >     docker
> > >
> > > Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org
Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Hi Allen,

https://issues.apache.org/jira/browse/INFRA-13902

That happened with the ozone branch too. It was an inadvertent force push. Infra has advised us to force push the latest branch if you have it.

Thanks
Anu

On 4/17/17, 7:10 AM, "Allen Wittenauer" wrote:

> Looks like someone reset HEAD back to Mar 31.
>
> Sent from my iPad
>
> > On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server
> > <jenk...@builds.apache.org> wrote:
> >
> > For more details, see
> > https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/378/
> >
> > -1 overall
> >
> > The following subsystems voted -1:
> >     docker
> >
> > Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org
Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Looks like someone reset HEAD back to Mar 31.

Sent from my iPad

> On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server
> <jenk...@builds.apache.org> wrote:
>
> For more details, see
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/378/
>
> -1 overall
>
> The following subsystems voted -1:
>     docker
>
> Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org
[jira] [Created] (HDFS-11658) Ozone: SCM daemon is unable to be started via CLI
Weiwei Yang created HDFS-11658:
----------------------------------

Summary: Ozone: SCM daemon is unable to be started via CLI
Key: HDFS-11658
URL: https://issues.apache.org/jira/browse/HDFS-11658
Project: Hadoop HDFS
Issue Type: Sub-task
Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang

The SCM daemon can no longer be started via the CLI since the {{StorageContainerManager}} class package was renamed from {{org.apache.hadoop.ozone.storage.StorageContainerManager}} to {{org.apache.hadoop.ozone.scm.StorageContainerManager}} in HDFS-11184.