[jira] [Created] (HDFS-11665) HttpFSServerWebServer$deprecateEnv may leak secret
John Zhuge created HDFS-11665:
---------------------------------

             Summary: HttpFSServerWebServer$deprecateEnv may leak secret
                 Key: HDFS-11665
                 URL: https://issues.apache.org/jira/browse/HDFS-11665
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: httpfs, security
    Affects Versions: 3.0.0-alpha3
            Reporter: John Zhuge
            Assignee: John Zhuge

It may print the secret in the warning message:

{code}
    LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
        + " property {} = '{}', please set the property in {} instead.",
        varName, value, propName, propValue, confFile);
{code}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
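A minimal sketch of one possible mitigation (not the actual HDFS-11665 fix, whose details are in the JIRA): pass a masked placeholder to the log call instead of the raw environment-variable value. The class and method names below are hypothetical; only the message format comes from the report above.

```java
// Sketch of one possible mitigation (not the actual HDFS-11665 fix): log a
// masked placeholder instead of the raw environment-variable value.
// Class and method names here are hypothetical.
public class DeprecateEnvSketch {
    // Mask a potentially sensitive value before it reaches a log message.
    static String redactForLog(String value) {
        return (value == null || value.isEmpty()) ? value : "<redacted>";
    }

    public static void main(String[] args) {
        String varName = "HTTPFS_SSL_KEYSTORE_PASS";  // example variable name
        String value = "hunter2";                     // pretend secret
        // The warning now reports that the variable is set without echoing it.
        System.out.printf(
            "Environment variable %s = '%s' is deprecated, please set the property instead.%n",
            varName, redactForLog(value));
    }
}
```

A variant could keep a short prefix of the value to help operators recognize which secret is set, at the cost of leaking a few characters.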
[jira] [Created] (HDFS-11664) Invalidate blocks cannot be invoked because replication works slowly while cluster storage is almost full
DENG FEI created HDFS-11664:
-------------------------------

             Summary: Invalidate blocks cannot be invoked because replication works slowly while cluster storage is almost full
                 Key: HDFS-11664
                 URL: https://issues.apache.org/jira/browse/HDFS-11664
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
    Affects Versions: 3.0.0-alpha2, 2.7.2
            Reporter: DENG FEI

Currently block invalidation is executed only after replication work. When little 'dfs remaining' is left, block writing is affected, yet storage cannot be released quickly because deletions wait while the daemon is stuck computing replication work. Splitting replication and invalidation may be better.
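The split suggested above can be sketched as dispatching pending invalidations independently of (and before) replication computation, so that deleted blocks free space even when replication work is slow. This is an illustrative simplification with hypothetical names, not the NameNode's actual monitor loop:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch (names hypothetical): process pending invalidations
// before, and independently of, replication computation, so deleted blocks
// free space even when replication work is slow on a nearly full cluster.
public class SplitWorkSketch {
    static int invalidated = 0;

    static void runOneRound(Deque<String> invalidateQueue, Deque<String> replicationQueue) {
        // Dispatch all invalidations first: releasing storage is cheap and urgent.
        while (!invalidateQueue.isEmpty()) {
            invalidateQueue.poll();
            invalidated++;
        }
        // Replication computation may be expensive; it no longer gates deletes.
        if (!replicationQueue.isEmpty()) {
            replicationQueue.poll();  // compute targets for one block per round
        }
    }

    public static void main(String[] args) {
        Deque<String> inv = new ArrayDeque<>();
        Deque<String> rep = new ArrayDeque<>();
        inv.add("blk_1"); inv.add("blk_2");
        rep.add("blk_3");
        runOneRound(inv, rep);
        System.out.println(invalidated);  // 2
    }
}
```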
[jira] [Created] (HDFS-11663) [READ] Fix NullPointerException in ProvidedBlocksBuilder
Virajith Jalaparti created HDFS-11663:
-----------------------------------------

             Summary: [READ] Fix NullPointerException in ProvidedBlocksBuilder
                 Key: HDFS-11663
                 URL: https://issues.apache.org/jira/browse/HDFS-11663
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Virajith Jalaparti
[jira] [Created] (HDFS-11662) TestJobEndNotifier.testNotificationTimeout fails intermittently
Eric Badger created HDFS-11662:
----------------------------------

             Summary: TestJobEndNotifier.testNotificationTimeout fails intermittently
                 Key: HDFS-11662
                 URL: https://issues.apache.org/jira/browse/HDFS-11662
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Eric Badger

{noformat}
junit.framework.AssertionFailedError: null
	at junit.framework.Assert.fail(Assert.java:55)
	at junit.framework.Assert.assertTrue(Assert.java:22)
	at junit.framework.Assert.assertTrue(Assert.java:31)
	at junit.framework.TestCase.assertTrue(TestCase.java:201)
	at org.apache.hadoop.mapred.TestJobEndNotifier.testNotificationTimeout(TestJobEndNotifier.java:182)
{noformat}

This test depends on absolute timing, which can't be guaranteed. If {{JobEndNotifier.localRunnerNotification(jobConf, jobStatus);}} doesn't run in less than 2 seconds, the test will fail. Loading up my machine can cause this failure consistently.
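A common remedy for such timing-dependent tests (a generic sketch, not the actual fix) is to poll for the expected state with a generous deadline instead of asserting after a fixed delay. Hadoop's test utilities offer a similar helper (GenericTestUtils.waitFor); the self-contained version below makes the idea concrete:

```java
import java.util.function.BooleanSupplier;

// Generic sketch: wait up to a deadline for a condition instead of asserting
// that it completed within a fixed, machine-dependent interval.
public class WaitForSketch {
    static boolean waitFor(BooleanSupplier condition, long checkEveryMs, long maxWaitMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(checkEveryMs);
        }
        return condition.getAsBoolean();  // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms; a loaded machine still passes
        // because the deadline is generous rather than exact.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 50, 10, 5000);
        System.out.println(ok);  // true
    }
}
```

The key property is that a slow machine only makes the test slower, not flaky, while a fast machine returns as soon as the condition holds.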
[jira] [Created] (HDFS-11661) GetContentSummary uses excessive amounts of memory
Nathan Roberts created HDFS-11661:
-------------------------------------

             Summary: GetContentSummary uses excessive amounts of memory
                 Key: HDFS-11661
                 URL: https://issues.apache.org/jira/browse/HDFS-11661
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 2.8.0
            Reporter: Nathan Roberts
            Priority: Blocker

ContentSummaryComputationContext::nodeIncluded() is being used to keep track of all INodes visited during the current content summary calculation. This can be all of the INodes in the filesystem, making for a VERY large hash table. This simply won't work on large filesystems.

We noticed this after upgrading: a namenode with ~100 million filesystem objects was spending significantly more time in GC. Fortunately this system had some memory breathing room; other clusters we have will not run with this additional memory demand.

This was added as part of HDFS-10797 as a way of keeping track of INodes that have already been accounted for, to avoid double counting.
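A back-of-envelope calculation shows why tracking every visited INode is costly. The ~48-byte per-entry figure below is an assumption (HashMap.Node, boxed key, and table slot on a 64-bit JVM with compressed oops), not a measured number from this cluster:

```java
// Back-of-envelope sketch: rough heap cost of a hash set tracking 100 million
// visited INodes. The 48-byte per-entry overhead is an assumed average for a
// 64-bit JVM with compressed oops; real numbers vary by JVM and load factor.
public class VisitedSetCost {
    public static void main(String[] args) {
        long inodes = 100_000_000L;        // filesystem objects, as in the report
        long bytesPerEntry = 48L;          // assumed average per-entry overhead
        long totalBytes = inodes * bytesPerEntry;
        System.out.println(totalBytes / (1024 * 1024 * 1024) + " GiB");  // 4 GiB
    }
}
```

Several gigabytes of extra live data per content-summary call is consistent with the increased GC pressure described above.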
[jira] [Created] (HDFS-11660) TestFsDataset#testPageRounder fails intermittently with AssertionError
Andrew Wang created HDFS-11660:
----------------------------------

             Summary: TestFsDataset#testPageRounder fails intermittently with AssertionError
                 Key: HDFS-11660
                 URL: https://issues.apache.org/jira/browse/HDFS-11660
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
    Affects Versions: 2.6.5
            Reporter: Andrew Wang
            Assignee: Andrew Wang

We've seen this test fail occasionally with an error like the following:

{noformat}
java.lang.AssertionError
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:510)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:695)
	at java.lang.Thread.run(Thread.java:745)
{noformat}

This assertion fires when the heartbeat response is null.
[jira] [Created] (HDFS-11659) TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail due to no
Lei (Eddy) Xu created HDFS-11659:
------------------------------------

             Summary: TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail due to no
                 Key: HDFS-11659
                 URL: https://issues.apache.org/jira/browse/HDFS-11659
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
    Affects Versions: 3.0.0-alpha2, 2.7.3
            Reporter: Lei (Eddy) Xu
            Assignee: Lei (Eddy) Xu

The test fails with the following error message:

{code}
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:57377,DS-b4ec61fc-657c-4e2a-9dc3-8d93b7769a2b,DISK], DatanodeInfoWithStorage[127.0.0.1:47448,DS-18bca8d7-048d-4d7f-9594-d2df16096a3d,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:57377,DS-b4ec61fc-657c-4e2a-9dc3-8d93b7769a2b,DISK], DatanodeInfoWithStorage[127.0.0.1:47448,DS-18bca8d7-048d-4d7f-9594-d2df16096a3d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1280)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1354)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1512)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1236)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:721)
{code}

In this case, the DataNode that has been removed cannot be used in pipeline recovery.
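The replacement policy named in the stack trace is controlled by client-side configuration. These are real HDFS client properties; the values shown are illustrative defaults, not recommendations for this test:

```xml
<!-- Client-side settings governing datanode replacement during pipeline
     recovery. Values shown are illustrative, not recommendations. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>false</value>
</property>
```

With `best-effort` set to true, the client keeps writing on the remaining datanodes when no replacement can be found, which matters in small test clusters where spare nodes do not exist.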
Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Disallowing force pushes to trunk was done back in:

* August 2014: INFRA-8195
* February 2016: INFRA-11136

On Mon, Apr 17, 2017 at 11:18 AM, Jason Lowe wrote:
> I found at least one commit that was dropped, MAPREDUCE-6673. I was able to
> cherry-pick the original commit hash since it was recorded in the commit
> email.
>
> This begs the question of why we're allowing force pushes to trunk. I
> thought we asked to have that disabled the last time trunk was accidentally
> clobbered?
>
> Jason
>
> On Monday, April 17, 2017 10:18 AM, Arun Suresh wrote:
>
> Hi
>
> I had the Apr-14 eve version of trunk on my local machine. I've pushed that.
> Don't know if anything was committed over the weekend though.
>
> Cheers
> -Arun
>
> On Mon, Apr 17, 2017 at 7:17 AM, Anu Engineer wrote:
>
>> Hi Allen,
>>
>> https://issues.apache.org/jira/browse/INFRA-13902
>>
>> That happened with ozone branch too. It was an inadvertent force push.
>> Infra has advised us to force push the latest branch if you have it.
>>
>> Thanks
>> Anu
>>
>> On 4/17/17, 7:10 AM, "Allen Wittenauer" wrote:
>>
>> > Looks like someone reset HEAD back to Mar 31.
>> >
>> > Sent from my iPad
>> >
>> >> On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server <
>> >> jenk...@builds.apache.org> wrote:
>> >>
>> >> For more details, see
>> >> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/378/
>> >>
>> >> -1 overall
>> >>
>> >> The following subsystems voted -1:
>> >>    docker
>> >>
>> >> Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

--
busbey
[jira] [Created] (HDFS-11658) Ozone: SCM daemon is unable to be started via CLI
Weiwei Yang created HDFS-11658:
----------------------------------

             Summary: Ozone: SCM daemon is unable to be started via CLI
                 Key: HDFS-11658
                 URL: https://issues.apache.org/jira/browse/HDFS-11658
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
            Reporter: Weiwei Yang
            Assignee: Weiwei Yang

SCM daemon can no longer be started via CLI since the {{StorageContainerManager}} class package was renamed from {{org.apache.hadoop.ozone.storage.StorageContainerManager}} to {{org.apache.hadoop.ozone.scm.StorageContainerManager}} in HDFS-11184.
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/379/

No changes

-1 overall

The following subsystems voted -1:
    docker

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org