[jira] [Created] (HDFS-12999) When reach the end of the block group, it may not need to flush all the data packets(flushAllInternals) twice.
lufei created HDFS-12999:
-------------------------

             Summary: When reach the end of the block group, it may not need to flush all the data packets(flushAllInternals) twice.
                 Key: HDFS-12999
                 URL: https://issues.apache.org/jira/browse/HDFS-12999
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: erasure-coding, hdfs-client
    Affects Versions: 3.0.0-beta1, 3.1.0
            Reporter: lufei
            Assignee: lufei
             Fix For: 3.1.0

To simplify the process, there is no need to flush all the data packets (flushAllInternals) twice when reaching the end of the block group.
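For illustration, a minimal Java sketch of the idea, not the actual striped output stream code (every name except flushAllInternals is hypothetical): remember that the streamers were already flushed when the end of the block group was reached, so the second flush can be skipped.

{code}
import java.io.IOException;

/** Illustrative sketch only; names other than flushAllInternals() are hypothetical. */
class StripedStreamSketch {
  private boolean flushedAtBlockGroupEnd = false; // hypothetical guard flag

  /** Stand-in for the real flush of all streamers' queued data packets. */
  private void flushAllInternals() throws IOException {
    System.out.println("flushing all internal streamers");
  }

  /** Called when the writer reaches the end of a block group. */
  void endBlockGroup() throws IOException {
    if (!flushedAtBlockGroupEnd) {
      flushAllInternals();           // first (and only necessary) flush
      flushedAtBlockGroupEnd = true; // lets the later, redundant flush be skipped
    }
  }
}
{code}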
[jira] [Created] (HDFS-12998) SnapshotDiff - Provide an iterator-based listing API for calculating snapshotDiff
Shashikant Banerjee created HDFS-12998:
---------------------------------------

             Summary: SnapshotDiff - Provide an iterator-based listing API for calculating snapshotDiff
                 Key: HDFS-12998
                 URL: https://issues.apache.org/jira/browse/HDFS-12998
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Shashikant Banerjee
            Assignee: Shashikant Banerjee

Currently, SnapshotDiff computation happens over multiple RPC calls to the namenode, depending on the number of snapshotDiff entries; each RPC call returns at most 1000 entries by default. Each "getSnapshotDiffReportListing" call to the namenode returns a partial snapshotDiffReportListing, and these are all combined and processed on the client side to generate the final snapshotDiffReport. The SnapshotDiffReport can be huge, and in such situations the RPC calls to the namenode should happen on demand from the client side.
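For illustration, a hedged sketch of what an iterator-based listing could look like (generic Java; the fetchPage function and byte[] cursor below are hypothetical stand-ins for the real getSnapshotDiffReportListing RPC and its start-path cursor):

{code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Function;

/**
 * Illustrative sketch only: an iterator that pulls snapshot-diff entries from the
 * namenode one page at a time, so the client never materializes the full report.
 */
class SnapshotDiffEntryIterator<E> implements Iterator<E> {
  /** One page of results plus the cursor for the next RPC (null = no more pages). */
  static final class Page<E> {
    final List<E> entries;
    final byte[] nextCursor;
    Page(List<E> entries, byte[] nextCursor) {
      this.entries = entries;
      this.nextCursor = nextCursor;
    }
  }

  private final Function<byte[], Page<E>> fetchPage; // hypothetical RPC wrapper
  private final Deque<E> buffer = new ArrayDeque<>();
  private byte[] cursor = new byte[0]; // empty cursor = start from the beginning
  private boolean exhausted = false;

  SnapshotDiffEntryIterator(Function<byte[], Page<E>> fetchPage) {
    this.fetchPage = fetchPage;
  }

  @Override
  public boolean hasNext() {
    while (buffer.isEmpty() && !exhausted) {
      Page<E> page = fetchPage.apply(cursor); // one on-demand RPC per page
      buffer.addAll(page.entries);
      cursor = page.nextCursor;
      exhausted = (cursor == null); // namenode signals the last page
    }
    return !buffer.isEmpty();
  }

  @Override
  public E next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    return buffer.removeFirst();
  }
}
{code}

The point of the shape is that each page (at most 1000 entries by default) is fetched only when the consumer actually iterates past the previous one, so a huge report never has to be assembled in client memory.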
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/98/

No changes


-1 overall

The following subsystems voted -1:
    asflicense unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Unreaped Processes:
       hadoop-hdfs:32
       bkjournal:8
       hadoop-yarn-server-timelineservice:1
       hadoop-yarn-client:4
       hadoop-yarn-applications-distributedshell:1
       hadoop-mapreduce-client-jobclient:12
       hadoop-streaming:3
       hadoop-distcp:3
       hadoop-archives:1
       hadoop-extras:1

    Failed junit tests:
       hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
       hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
       hadoop.mapreduce.v2.TestUberAM
       hadoop.tools.TestIntegration
       hadoop.resourceestimator.solver.impl.TestLpSolver
       hadoop.resourceestimator.service.TestResourceEstimatorService

    Timed out junit tests:
       org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
       org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement
       org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
       org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
       org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
       org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
       org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
       org.apache.hadoop.hdfs.server.namenode.TestINodeFile
       org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
       org.apache.hadoop.hdfs.server.namenode.TestEditLog
       org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
       org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
       org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr
       org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
       org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
       org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
       org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
       org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
       org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
       org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
       org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
       org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
       org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
       org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
       org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
       org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
       org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
       org.apache.hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA
       org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
       org.apache.hadoop.fs.permission.TestStickyBit
       org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
       org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
       org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
       org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
       org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir
       org.apache.hadoop.contrib.bkjournal.TestBookKeeperEditLogStreams
       org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead
       org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
       org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
       org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServices
       org.apache.hadoop.yarn.client.TestRMFailover
       org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA
       org.apache.hadoop.yarn.client.api.impl.TestYarnClientWithReservation
       org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
       org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
       org.apache.hadoop.mapred.lib.TestDelegatingInputFormat
       org.apache.hadoop.mapred.TestClusterMapReduceTestCase
       org.apache.hadoop.mapred.TestMRIntermediateDataEncryption
       org.apache.hadoop.mapred.TestMRTimelineEventHandling
       org.apache.hadoop.mapred.join.TestDatamerge
       org.apache.hadoop.mapred.TestJobCleanup
       org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
       org.apache.hadoop.mapred.TestNetworkedJob
       org.apache.hadoop.mapred.TestMiniMRClientCluster
[jira] [Created] (HDFS-12997) Moving logging to slf4j in BlockPoolSliceStorage and Storage
Ajay Kumar created HDFS-12997:
------------------------------

             Summary: Moving logging to slf4j in BlockPoolSliceStorage and Storage
                 Key: HDFS-12997
                 URL: https://issues.apache.org/jira/browse/HDFS-12997
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Ajay Kumar
            Assignee: Ajay Kumar

Moving logging to slf4j in the BlockPoolSliceStorage and Storage classes.
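Such a migration typically has the following shape (a representative example, not the actual patch): replace commons-logging's Log/LogFactory with slf4j's Logger/LoggerFactory and switch to parameterized log messages.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Typical commons-logging -> slf4j migration; class and method names are made up. */
class StorageLoggingExample {
  // Before (commons-logging):
  //   private static final Log LOG = LogFactory.getLog(StorageLoggingExample.class);
  //   LOG.info("Locking " + dir + " for storage " + sid);
  private static final Logger LOG =
      LoggerFactory.getLogger(StorageLoggingExample.class);

  void lockStorage(String dir, String sid) {
    // slf4j's parameterized logging avoids building the message string
    // when the log level is disabled.
    LOG.info("Locking {} for storage {}", dir, sid);
  }
}
{code}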
[jira] [Created] (HDFS-12996) DataNode Replica Trash
Hanisha Koneru created HDFS-12996:
----------------------------------

             Summary: DataNode Replica Trash
                 Key: HDFS-12996
                 URL: https://issues.apache.org/jira/browse/HDFS-12996
             Project: Hadoop HDFS
          Issue Type: New Feature
            Reporter: Hanisha Koneru
            Assignee: Hanisha Koneru
         Attachments: DataNode_Replica_Trash_Design_Doc.pdf

DataNode Replica Trash will allow administrators to recover from a recent delete request that resulted in catastrophic loss of user data. This is achieved by placing all invalidated blocks in a replica trash on the datanode before completely purging them from the system. The design doc is attached here.
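To make the mechanism concrete, a minimal sketch of the trash idea (all paths and names below are hypothetical; the attached design doc is authoritative): instead of unlinking an invalidated block file, the datanode moves it into a trash directory from which it can still be restored until the trash is purged.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Illustrative sketch only of the replica-trash idea; layout is hypothetical. */
class ReplicaTrashSketch {
  private final Path trashRoot; // e.g. <volume>/replica-trash (hypothetical layout)

  ReplicaTrashSketch(Path trashRoot) {
    this.trashRoot = trashRoot;
  }

  /** Move a block file and its .meta file into the trash instead of deleting them. */
  void trashBlock(Path blockFile, Path metaFile) throws IOException {
    Files.createDirectories(trashRoot);
    // A move on the same volume is cheap; no block data is copied.
    Files.move(blockFile, trashRoot.resolve(blockFile.getFileName()),
        StandardCopyOption.ATOMIC_MOVE);
    Files.move(metaFile, trashRoot.resolve(metaFile.getFileName()),
        StandardCopyOption.ATOMIC_MOVE);
  }
}
{code}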
[jira] [Created] (HDFS-12995) [SPS] : Implement ExternalSPSContext for establishing RPC communication between SPS Service and NN
Uma Maheswara Rao G created HDFS-12995:
---------------------------------------

             Summary: [SPS] : Implement ExternalSPSContext for establishing RPC communication between SPS Service and NN
                 Key: HDFS-12995
                 URL: https://issues.apache.org/jira/browse/HDFS-12995
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: datanode, namenode
            Reporter: Uma Maheswara Rao G

This is the task for implementing the RPC-based communication wrapper that the SPS service uses to talk to the NN when it requires information for processing. Let us say the external context implementation is named ExternalSPSContext; it should implement the APIs of the Context interface.
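As a rough illustration of the intended shape (the Context methods and RPC facade below are hypothetical stand-ins, not the real interfaces): ExternalSPSContext answers every Context query by delegating to an RPC proxy for the NN, rather than reading namenode state in-process.

{code}
/** Hypothetical slice of the Context interface the external SPS programs against. */
interface Context {
  boolean isFileExist(long inodeId) throws java.io.IOException;
  void removeSPSHint(long inodeId) throws java.io.IOException;
}

/** Hypothetical RPC facade for the calls the external SPS needs from the NN. */
interface NamenodeRpcClient {
  boolean fileExists(long inodeId) throws java.io.IOException;
  void removeXAttr(long inodeId, String name) throws java.io.IOException;
}

/** Sketch: every Context call becomes one RPC to the NN. */
class ExternalSPSContext implements Context {
  private final NamenodeRpcClient rpc;

  ExternalSPSContext(NamenodeRpcClient rpc) {
    this.rpc = rpc;
  }

  @Override
  public boolean isFileExist(long inodeId) throws java.io.IOException {
    return rpc.fileExists(inodeId); // delegate over the wire; no in-process NN state
  }

  @Override
  public void removeSPSHint(long inodeId) throws java.io.IOException {
    rpc.removeXAttr(inodeId, "user.hdfs.sps"); // hypothetical xattr name
  }
}
{code}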
[jira] [Created] (HDFS-12994) TestReconstructStripedFile.testNNSendsErasureCodingTasks fails due to socket timeout
Lei (Eddy) Xu created HDFS-12994:
---------------------------------

             Summary: TestReconstructStripedFile.testNNSendsErasureCodingTasks fails due to socket timeout
                 Key: HDFS-12994
                 URL: https://issues.apache.org/jira/browse/HDFS-12994
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: erasure-coding
    Affects Versions: 3.0.0
            Reporter: Lei (Eddy) Xu
            Assignee: Lei (Eddy) Xu

Occasionally, {{testNNSendsErasureCodingTasks}} fails due to socket timeout:

{code}
2017-12-26 20:35:19,961 [StripedBlockReconstruction-0] INFO datanode.DataNode (StripedBlockReader.java:createBlockReader(132)) - Exception while creating remote block reader, datanode 127.0.0.1:34145
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReader.newConnectedPeer(StripedBlockReader.java:148)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReader.createBlockReader(StripedBlockReader.java:123)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReader.<init>(StripedBlockReader.java:83)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.createReader(StripedReader.java:169)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.initReaders(StripedReader.java:150)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.init(StripedReader.java:133)
        at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.run(StripedBlockReconstructor.java:56)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
{code}

while the target datanode had been removed in the test:

{code}
2017-12-26 20:35:18,710 [Thread-2393] INFO net.NetworkTopology (NetworkTopology.java:remove(219)) - Removing a node: /default-rack/127.0.0.1:34145
{code}
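This is not the actual fix for this JIRA, but for context, races like this (reconstruction connecting to a datanode the test has just removed) are often stabilized by polling until the cluster reaches the expected state before asserting; a self-contained sketch of such a wait helper:

{code}
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

/** Generic poll-until-true helper, similar in spirit to Hadoop's test wait utilities. */
class WaitUtil {
  static void waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {           // re-check the cluster state
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("Condition not met within " + timeoutMs + " ms");
      }
      Thread.sleep(intervalMs);                   // back off between checks
    }
  }
}
{code}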
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/

[Jan 8, 2018 5:09:31 AM] (rohithsharmaks) YARN-7699. queueUsagePercentage is coming as INF for getApp REST api
[Jan 8, 2018 6:29:06 AM] (sunilg) YARN-7242. Support to specify values of different resource types in


-1 overall

The following subsystems voted -1:
    asflicense findbugs unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs:
       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
       org.apache.hadoop.yarn.api.records.Resource.getResources() may expose internal representation by returning Resource.resources At Resource.java:by returning Resource.resources At Resource.java:[line 234]

    Failed junit tests:
       hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.hdfs.TestReadStripedFileWithMissingBlocks
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
       hadoop.hdfs.TestReconstructStripedFile
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
       hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
       hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
       hadoop.hdfs.crypto.TestHdfsCryptoStreams
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.hdfs.TestDFSUpgradeFromImage
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
       hadoop.hdfs.server.datanode.TestDataNodeUUID
       hadoop.hdfs.TestDatanodeReport
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
       hadoop.hdfs.TestDecommission
       hadoop.hdfs.TestDecommissionWithStriped
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180
       hadoop.hdfs.TestEncryptionZonesWithKMS
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
       hadoop.hdfs.TestCrcCorruption
       hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
       hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
       hadoop.yarn.server.TestDiskFailures
       hadoop.yarn.server.TestContainerManagerSecurity
       hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart
       hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
       hadoop.mapreduce.v2.TestUberAM

   cc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/diff-compile-javac-root.txt [280K]

   checkstyle:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/whitespace-eol.txt [9.2M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/whitespace-tabs.txt [292K]

   findbugs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html [8.0K]

   javadoc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/diff-javadoc-javadoc-root.txt [760K]

   unit:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [732K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [44K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [16K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [20K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/650/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [104K]