[jira] [Assigned] (HDFS-6292) Display HDFS per user and per group usage on the webUI
[ https://issues.apache.org/jira/browse/HDFS-6292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash reassigned HDFS-6292: -- Assignee: (was: Ravi Prakash) > Display HDFS per user and per group usage on the webUI > -- > > Key: HDFS-6292 > URL: https://issues.apache.org/jira/browse/HDFS-6292 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.4.0 >Reporter: Ravi Prakash >Priority: Major > Attachments: HDFS-6292.01.patch, HDFS-6292.patch, HDFS-6292.png > > > It would be nice to show HDFS usage per user and per group on a web ui. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
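[Editorial note, not from the JIRA above] Until such a web UI exists, per-user usage can be approximated from content summaries of home directories. A minimal sketch, assuming user data is rooted under /user/<name>; per-group usage would instead require walking the namespace and keying on each file's group:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerUserUsage {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Assumption: each user's data lives under /user/<name>.
    for (FileStatus home : fs.listStatus(new Path("/user"))) {
      ContentSummary cs = fs.getContentSummary(home.getPath());
      System.out.printf("%-20s logical=%d bytes, consumed=%d bytes%n",
          home.getPath().getName(), cs.getLength(), cs.getSpaceConsumed());
    }
  }
}
{code}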
[jira] [Updated] (HDFS-13592) TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly
[ https://issues.apache.org/jira/browse/HDFS-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13592: - Description: Without cluster shutdown in TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages, the below two tests fail (referring to https://builds.apache.org/job/hadoop-trunk-win/469/testReport/) * [TestNameNodePrunesMissingStorages#testUnusedStorageIsPruned|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testUnusedStorageIsPruned/] * [TestNameNodePrunesMissingStorages#testRemovingStorageDoesNotProduceZombies|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testRemovingStorageDoesNotProduceZombies/] was: Without cluster shutdown in org.apache.hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages.testUnusedStorageIsPruned > TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does > not shut down cluster properly > -- > > Key: HDFS-13592 > URL: https://issues.apache.org/jira/browse/HDFS-13592 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > > Without cluster shutdown in > TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages, the > below two tests fail (referring to > https://builds.apache.org/job/hadoop-trunk-win/469/testReport/) > * > [TestNameNodePrunesMissingStorages#testUnusedStorageIsPruned|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testUnusedStorageIsPruned/] > * > [TestNameNodePrunesMissingStorages#testRemovingStorageDoesNotProduceZombies|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testRemovingStorageDoesNotProduceZombies/] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
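[Editorial note] The usual fix for this class of failure (not necessarily the exact patch attached to HDFS-13592) is to guarantee MiniDFSCluster shutdown so later tests on the same host do not collide with leftover storage directories or ports. A minimal sketch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class ExampleShutdownTest {
  public void testSomethingAgainstMiniCluster() throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster = null;
    try {
      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
      cluster.waitActive();
      // ... test body ...
    } finally {
      // Without this, later tests in the suite can fail as described above.
      if (cluster != null) {
        cluster.shutdown();
      }
    }
  }
}
{code}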
[jira] [Assigned] (HDFS-7342) Lease Recovery doesn't happen some times
[ https://issues.apache.org/jira/browse/HDFS-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash reassigned HDFS-7342: -- Assignee: (was: Ravi Prakash) > Lease Recovery doesn't happen some times > > > Key: HDFS-7342 > URL: https://issues.apache.org/jira/browse/HDFS-7342 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Ravi Prakash >Priority: Major > Attachments: HDFS-7342.04.patch, HDFS-7342.1.patch, > HDFS-7342.2.patch, HDFS-7342.3.patch > > > In some cases, LeaseManager tries to recover a lease, but is not able to. > HDFS-4882 describes a possibility of that. We should fix this -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
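[Editorial note, not from the attached patches] When automatic lease recovery stalls as described above, a client or operator can force recovery on a specific file through the public API, equivalent to "hdfs debug recoverLease -path <file>". A sketch, assuming the path of the stuck file is known:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ForceLeaseRecovery {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    Path stuck = new Path(args[0]);
    // Returns true once the file has been closed, false if recovery is still in progress.
    boolean closed = dfs.recoverLease(stuck);
    System.out.println("Lease recovery " + (closed ? "completed" : "still in progress"));
  }
}
{code}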
[jira] [Updated] (HDFS-13592) TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly
[ https://issues.apache.org/jira/browse/HDFS-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13592: - Description: Without cluster shutdown in org.apache.hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages.testUnusedStorageIsPruned > TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does > not shut down cluster properly > -- > > Key: HDFS-13592 > URL: https://issues.apache.org/jira/browse/HDFS-13592 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > > Without cluster shutdown in > org.apache.hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages.testUnusedStorageIsPruned -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks
[ https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash reassigned HDFS-8344: -- Assignee: (was: Ravi Prakash) > NameNode doesn't recover lease for files with missing blocks > > > Key: HDFS-8344 > URL: https://issues.apache.org/jira/browse/HDFS-8344 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.7.0 >Reporter: Ravi Prakash >Priority: Major > Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch, > HDFS-8344.03.patch, HDFS-8344.04.patch, HDFS-8344.05.patch, > HDFS-8344.06.patch, HDFS-8344.07.patch, HDFS-8344.08.patch, > HDFS-8344.09.patch, HDFS-8344.10.patch, TestHadoop.java > > > I found another\(?) instance in which the lease is not recovered. This is > reproducible easily on a pseudo-distributed single node cluster > # Before you start it helps if you set. This is not necessary, but simply > reduces how long you have to wait > {code} > public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000; > public static final long LEASE_HARDLIMIT_PERIOD = 2 * > LEASE_SOFTLIMIT_PERIOD; > {code} > # Client starts to write a file. (could be less than 1 block, but it hflushed > so some of the data has landed on the datanodes) (I'm copying the client code > I am using. I generate a jar and run it using $ hadoop jar TestHadoop.jar) > # Client crashes. (I simulate this by kill -9 the $(hadoop jar > TestHadoop.jar) process after it has printed "Wrote to the bufferedWriter" > # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was > only 1) > I believe the lease should be recovered and the block should be marked > missing. However this is not happening. The lease is never recovered. > The effect of this bug for us was that nodes could not be decommissioned > cleanly. Although we knew that the client had crashed, the Namenode never > released the leases (even after restarting the Namenode) (even months > afterwards). There are actually several other cases too where we don't > consider what happens if ALL the datanodes die while the file is being > written, but I am going to punt on that for another time. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
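[Editorial note] The attached TestHadoop.java is not reproduced in this digest; the sketch below only illustrates the kind of client described in step 2: write some data, hflush so it reaches the datanodes, then get killed before close.

{code:java}
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestHadoopLikeClient {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/tmp/lease-test"), true);
    BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out, "UTF-8"));
    writer.write("some data that should land on the datanodes");
    writer.flush();
    out.hflush();  // data is on the pipeline, but the file is still open
    System.out.println("Wrote to the bufferedWriter");
    Thread.sleep(Long.MAX_VALUE);  // kill -9 the process here to simulate the crash
  }
}
{code}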
[jira] [Comment Edited] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480222#comment-16480222 ] Anbang Hu edited comment on HDFS-13591 at 5/18/18 6:54 AM: --- According to * https://stackoverflow.com/questions/48983566/why-am-i-getting-r-r-n-cr-cr-lf-from-a-windows-command-line-in-java * https://stackoverflow.com/questions/24961755/batch-how-to-correct-variable-overwriting-misbehavior-when-parsing-output the return stream in TestDFSShell#testSetrepLow contains "\r\r\n". [^HDFS-13591.000.patch] is intended to deal with this. was (Author: huanbang1993): According to * https://stackoverflow.com/questions/48983566/why-am-i-getting-r-r-n-cr-cr-lf-from-a-windows-command-line-in-java * https://stackoverflow.com/questions/24961755/batch-how-to-correct-variable-overwriting-misbehavior-when-parsing-output the return stream in TestDFSShell#testSetrepLow contains "\r\r\n". [^HDFS-13591.000.patch] is intended to deal with this. > TestDFSShell#testSetrepLow fails on Windows > --- > > Key: HDFS-13591 > URL: https://issues.apache.org/jira/browse/HDFS-13591 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > Attachments: HDFS-13591.000.patch > > > https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/ > shows > {code:java} > Error message is not the expected error message > expected:<...testFileForSetrepLow[] > > but was:<...testFileForSetrepLow[ > ] > > > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
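[Editorial note] The attached patch is not shown in this digest; one common way to make such an assertion tolerant of the Windows "\r\r\n" sequences described above is to normalize line endings on both sides before comparing. A hypothetical helper, not necessarily what HDFS-13591.000.patch does:

{code:java}
public final class LineEndings {
  // Collapse "\r\r\n", "\r\n" and stray "\r" into "\n" before comparison.
  static String normalize(String s) {
    return s.replaceAll("\\r+\\n", "\n").replace("\r", "\n");
  }

  public static void main(String[] args) {
    String fromWindowsCmd = "testFileForSetrepLow\r\r\n";
    String expected = "testFileForSetrepLow\n";
    System.out.println(expected.equals(normalize(fromWindowsCmd)));  // prints true
  }
}
{code}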
[jira] [Updated] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13591: - Attachment: HDFS-13591.000.patch Status: Patch Available (was: Open) > TestDFSShell#testSetrepLow fails on Windows > --- > > Key: HDFS-13591 > URL: https://issues.apache.org/jira/browse/HDFS-13591 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > Attachments: HDFS-13591.000.patch > > > https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/ > shows > {code:java} > Error message is not the expected error message > expected:<...testFileForSetrepLow[] > > but was:<...testFileForSetrepLow[ > ] > > > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13591: - Attachment: (was: HDFS-13591.000.patch) > TestDFSShell#testSetrepLow fails on Windows > --- > > Key: HDFS-13591 > URL: https://issues.apache.org/jira/browse/HDFS-13591 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > Attachments: HDFS-13591.000.patch > > > https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/ > shows > {code:java} > Error message is not the expected error message > expected:<...testFileForSetrepLow[] > > but was:<...testFileForSetrepLow[ > ] > > > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13592) TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly
Anbang Hu created HDFS-13592: Summary: TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly Key: HDFS-13592 URL: https://issues.apache.org/jira/browse/HDFS-13592 Project: Hadoop HDFS Issue Type: Bug Reporter: Anbang Hu Assignee: Anbang Hu -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480222#comment-16480222 ] Anbang Hu commented on HDFS-13591: -- According to * https://stackoverflow.com/questions/48983566/why-am-i-getting-r-r-n-cr-cr-lf-from-a-windows-command-line-in-java * https://stackoverflow.com/questions/24961755/batch-how-to-correct-variable-overwriting-misbehavior-when-parsing-output the return stream in TestDFSShell#testSetrepLow contains "\r\r\n". [^HDFS-13591.000.patch] is intended to deal with this. > TestDFSShell#testSetrepLow fails on Windows > --- > > Key: HDFS-13591 > URL: https://issues.apache.org/jira/browse/HDFS-13591 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > Attachments: HDFS-13591.000.patch > > > https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/ > shows > {code:java} > Error message is not the expected error message > expected:<...testFileForSetrepLow[] > > but was:<...testFileForSetrepLow[ > ] > > > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480222#comment-16480222 ] Anbang Hu edited comment on HDFS-13591 at 5/18/18 6:37 AM: --- According to * https://stackoverflow.com/questions/48983566/why-am-i-getting-r-r-n-cr-cr-lf-from-a-windows-command-line-in-java * https://stackoverflow.com/questions/24961755/batch-how-to-correct-variable-overwriting-misbehavior-when-parsing-output the return stream in TestDFSShell#testSetrepLow contains "\r\r\n". [^HDFS-13591.000.patch] is intended to deal with this. was (Author: huanbang1993): According to * https://stackoverflow.com/questions/48983566/why-am-i-getting-r-r-n-cr-cr-lf-from-a-windows-command-line-in-java * https://stackoverflow.com/questions/24961755/batch-how-to-correct-variable-overwriting-misbehavior-when-parsing-output the return stream in TestDFSShell#testSetrepLow contains "\r\r\n". [^HDFS-13591.000.patch] is intended to deal with this. > TestDFSShell#testSetrepLow fails on Windows > --- > > Key: HDFS-13591 > URL: https://issues.apache.org/jira/browse/HDFS-13591 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > Attachments: HDFS-13591.000.patch > > > https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/ > shows > {code:java} > Error message is not the expected error message > expected:<...testFileForSetrepLow[] > > but was:<...testFileForSetrepLow[ > ] > > > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13591: - Attachment: HDFS-13591.000.patch > TestDFSShell#testSetrepLow fails on Windows > --- > > Key: HDFS-13591 > URL: https://issues.apache.org/jira/browse/HDFS-13591 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > Attachments: HDFS-13591.000.patch > > > https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/ > shows > {code:java} > Error message is not the expected error message > expected:<...testFileForSetrepLow[] > > but was:<...testFileForSetrepLow[ > ] > > > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13591: - Description: https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/ shows {code:java} Error message is not the expected error message expected:<...testFileForSetrepLow[] > but was:<...testFileForSetrepLow[ ] > {code} > TestDFSShell#testSetrepLow fails on Windows > --- > > Key: HDFS-13591 > URL: https://issues.apache.org/jira/browse/HDFS-13591 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Minor > Labels: Windows > > https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/ > shows > {code:java} > Error message is not the expected error message > expected:<...testFileForSetrepLow[] > > but was:<...testFileForSetrepLow[ > ] > > > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows
Anbang Hu created HDFS-13591: Summary: TestDFSShell#testSetrepLow fails on Windows Key: HDFS-13591 URL: https://issues.apache.org/jira/browse/HDFS-13591 Project: Hadoop HDFS Issue Type: Bug Reporter: Anbang Hu Assignee: Anbang Hu -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info
[ https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480184#comment-16480184 ] Mukul Kumar Singh commented on HDDS-76: --- +1, the patch looks good to me as well. I will commit this patch soon. > Modify SCMStorageReportProto to include the data dir paths as well as the > StorageType info > -- > > Key: HDDS-76 > URL: https://issues.apache.org/jira/browse/HDDS-76 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-76.00.patch, HDDS-76.01.patch > > > Currently, SCMStorageReport contains the storageUUID which are sent across to > SCM for maintaining storage Report info. This Jira aims to include the data > dir paths for actual disks as well as the storage Type info for each volume > on datanode to be sent to SCM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized
[ https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480134#comment-16480134 ] Hanisha Koneru commented on HDFS-13589: --- Added a patch to return the upgrade status for a normal upgrade (non-rolling). Some sample outputs of the {{dfsadmin -upgradeStatus}} command are below: {code:java} $ hdfs dfsadmin -upgradeStatus Upgrade finalized for mycluster-node-1/172.18.0.2:8020 Upgrade finalized for mycluster-node-2/172.18.0.3:8020 $ hdfs dfsadmin -upgradeStatus Upgrade not finalized for mycluster-node-1/172.18.0.2:8020 Upgrade finalized for mycluster-node-2/172.18.0.3:8020 $ hdfs dfsadmin -upgradeStatus Upgrade finalized for mycluster-node-2/172.18.0.3:8020 upgradeStatus: Call From e6815a2a1257/172.18.0.2 to mycluster-node-1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused {code} > Add dfsAdmin command to query if "upgrade" is finalized > --- > > Key: HDFS-13589 > URL: https://issues.apache.org/jira/browse/HDFS-13589 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDFS-13589.001.patch > > > When we do upgrade on a Namenode (non rollingUpgrade), we should be able to > query whether the upgrade has been finalized or not. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized
[ https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-13589: -- Attachment: HDFS-13589.001.patch > Add dfsAdmin command to query if "upgrade" is finalized > --- > > Key: HDFS-13589 > URL: https://issues.apache.org/jira/browse/HDFS-13589 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDFS-13589.001.patch > > > When we do upgrade on a Namenode (non rollingUpgrade), we should be able to > query whether the upgrade has been finalized or not. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13585) libhdfs SIGSEGV during shutdown of Java application.
[ https://issues.apache.org/jira/browse/HDFS-13585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nalini Ganapati updated HDFS-13585: --- Environment: Centos 7 gcc (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6) was:Centos 7 > libhdfs SIGSEGV during shutdown of Java application. > > > Key: HDFS-13585 > URL: https://issues.apache.org/jira/browse/HDFS-13585 > Project: Hadoop HDFS > Issue Type: Bug > Components: native >Affects Versions: 2.7.5 > Environment: Centos 7 > gcc (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6) >Reporter: Nalini Ganapati >Priority: Major > > We are using libhdfs for hdfs support from our native library. This has been > working mostly fine with Java/Spark applications, but some of them throw a > SIGSEGV in hdfsThreadDestructor(). We tried to dynamically load and unload > libhdfs.so using dlopen/dlclose but to no avail and we still see the seg > fault. Is this a known issue? Looks like thread local storage is involved, > are there workarounds? > > Here is a call stack from gdb java > (gdb) bt > #0 0x7fad21f7 in raise () from /usr/lib64/libc.so.6 > #1 0x7fad38e8 in abort () from /usr/lib64/libc.so.6 > #2 0x7f380259 in os::abort(bool) () from > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so > #3 0x7f585986 in VMError::report_and_die() () from > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so > #4 0x7f389ec7 in JVM_handle_linux_signal () from > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so > #5 0x7f37d678 in signalHandler(int, siginfo_t*, void*) () from > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so > #6 > #7 0x7f341e66 in Monitor::ILock(Thread*) () from > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so > #8 0x7f3428f6 in Monitor::lock_without_safepoint_check() () from > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so > #9 0x7f58bc21 in VM_Exit::wait_if_vm_exited() () from > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so > #10 0x7f14fee5 in jni_DetachCurrentThread () from > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so > #11 0x7f32f2645f15 in hdfsThreadDestructor (v=0x7f332c018bc8) > at > /home/kshvachk/Work/Hadoop/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread_local_storage.c:49 > #12 0x7f3334490c22 in __nptl_deallocate_tsd () from > /usr/lib64/libpthread.so.0 > #13 0x7f3334490e33 in start_thread () from /usr/lib64/libpthread.so.0 > #14 0x7fb9534d in clone () from /usr/lib64/libc.so.6 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480117#comment-16480117 ] genericqa commented on HDFS-13587: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 40s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}163m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13587 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924043/HDFS-13587.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2a4aabed952e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2f2dd22 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24254/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24254/testReport/ | | Max. process+thread count | 3052 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCom
[jira] [Commented] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows
[ https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480108#comment-16480108 ] genericqa commented on HDFS-13588: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 16s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13588 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924044/HDFS-13588.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 023cb1aa1116 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2f2dd22 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24255/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24255/testReport/ | | Max. process+thread count | 3563 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdf
[jira] [Updated] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate
[ https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13573: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 Status: Resolved (was: Patch Available) > Javadoc for BlockPlacementPolicyDefault is inaccurate > - > > Key: HDFS-13573 > URL: https://issues.apache.org/jira/browse/HDFS-13573 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Yiqun Lin >Assignee: Zsolt Venczel >Priority: Trivial > Fix For: 3.2.0 > > Attachments: HDFS-13573.01.patch, HDFS-13573.02.patch > > > Current rule of default block placement policy: > {quote}The replica placement strategy is that if the writer is on a datanode, > the 1st replica is placed on the local machine, > otherwise a random datanode. The 2nd replica is placed on a datanode > that is on a different rack. The 3rd replica is placed on a datanode > which is on a different node of the rack as the second replica. > {quote} > *if the writer is on a datanode, the 1st replica is placed on the local > machine*, actually this can be decided by the hdfs client. The client can > pass {{CreateFlag#NO_LOCAL_WRITE}} that request to not put a block replica on > the local datanode. But subsequent replicas will still follow default block > placement policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate
[ https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480077#comment-16480077 ] Hudson commented on HDFS-13573: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14234 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14234/]) HDFS-13573. Javadoc for BlockPlacementPolicyDefault is inaccurate. (yqlin: rev f749517cc78fc761cecff21e8b7f65fb719bfca2) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java > Javadoc for BlockPlacementPolicyDefault is inaccurate > - > > Key: HDFS-13573 > URL: https://issues.apache.org/jira/browse/HDFS-13573 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Yiqun Lin >Assignee: Zsolt Venczel >Priority: Trivial > Attachments: HDFS-13573.01.patch, HDFS-13573.02.patch > > > Current rule of default block placement policy: > {quote}The replica placement strategy is that if the writer is on a datanode, > the 1st replica is placed on the local machine, > otherwise a random datanode. The 2nd replica is placed on a datanode > that is on a different rack. The 3rd replica is placed on a datanode > which is on a different node of the rack as the second replica. > {quote} > *if the writer is on a datanode, the 1st replica is placed on the local > machine*, actually this can be decided by the hdfs client. The client can > pass {{CreateFlag#NO_LOCAL_WRITE}} that request to not put a block replica on > the local datanode. But subsequent replicas will still follow default block > placement policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
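[Editorial note] To illustrate the client-side flag mentioned in the description (this is not part of the javadoc patch itself), a writer can opt out of the local-replica preference by passing CreateFlag.NO_LOCAL_WRITE at create time; later replicas still follow the default placement policy:

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class NoLocalWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/no-local-write-example");
    // Ask the namenode not to place the first replica on the local datanode.
    FSDataOutputStream out = fs.create(
        file,
        FsPermission.getFileDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE, CreateFlag.NO_LOCAL_WRITE),
        conf.getInt("io.file.buffer.size", 4096),
        fs.getDefaultReplication(file),
        fs.getDefaultBlockSize(file),
        null);
    out.writeBytes("hello");
    out.close();
  }
}
{code}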
[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480073#comment-16480073 ] Yongjun Zhang commented on HDFS-13388: -- HI [~elgoiri], Would you please take a look at my comment above. Wonder what you think. Thanks. > RequestHedgingProxyProvider calls multiple configured NNs all the time > -- > > Key: HDFS-13388 > URL: https://issues.apache.org/jira/browse/HDFS-13388 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 3.2.0, 3.1.1 > > Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, > HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, > HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, > HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, > HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch > > > In HDFS-7858 RequestHedgingProxyProvider was designed to "first > simultaneously call multiple configured NNs to decide which is the active > Namenode and then for subsequent calls it will invoke the previously > successful NN ." But the current code call multiple configured NNs every time > even when we already got the successful NN. > That's because in RetryInvocationHandler.java, ProxyDescriptor's member > proxyInfo is assigned only when it is constructed or when failover occurs. > RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the > only proxy we can get is always a dynamic proxy handled by > RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class > handles invoked method by calling multiple configured NNs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
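[Editorial note] For readers unfamiliar with the provider under discussion: it is selected per nameservice through the client failover configuration, roughly as below. The nameservice and namenode names are placeholders for this sketch:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class HedgingProviderConfig {
  public static Configuration configure() {
    Configuration conf = new Configuration();
    // "mycluster", "nn1" and "nn2" are placeholder names.
    conf.set("dfs.nameservices", "mycluster");
    conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");
    // The provider this JIRA fixes: it should probe both NNs only until one
    // succeeds, then keep using that NN for subsequent calls.
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider");
    return conf;
  }
}
{code}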
[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480070#comment-16480070 ] genericqa commented on HDFS-13587: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}164m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13587 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924035/HDFS-13587.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ec3d3681e6e7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 53b807a | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24252/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24252/testReport/ | | Max. process+thread count | 4204 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://buil
[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info
[ https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480061#comment-16480061 ] Bharat Viswanadham commented on HDDS-76: Thank You [~shashikant] for info. +1 LGTM. > Modify SCMStorageReportProto to include the data dir paths as well as the > StorageType info > -- > > Key: HDDS-76 > URL: https://issues.apache.org/jira/browse/HDDS-76 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-76.00.patch, HDDS-76.01.patch > > > Currently, SCMStorageReport contains the storageUUID which are sent across to > SCM for maintaining storage Report info. This Jira aims to include the data > dir paths for actual disks as well as the storage Type info for each volume > on datanode to be sent to SCM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480059#comment-16480059 ] Hudson commented on HDFS-13586: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14233 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14233/]) HDFS-13586. Fsync fails on directories on Windows. Contributed by Lukas (inigoiri: rev 8783613696674aba4ae1739c6e8f48cda0d1c386) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13586.000.patch, HDFS-13586.001.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate
[ https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480060#comment-16480060 ] Yiqun Lin commented on HDFS-13573: -- Committed this to trunk. Thanks [~zvenczel] for the contribution. > Javadoc for BlockPlacementPolicyDefault is inaccurate > - > > Key: HDFS-13573 > URL: https://issues.apache.org/jira/browse/HDFS-13573 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Yiqun Lin >Assignee: Zsolt Venczel >Priority: Trivial > Attachments: HDFS-13573.01.patch, HDFS-13573.02.patch > > > Current rule of default block placement policy: > {quote}The replica placement strategy is that if the writer is on a datanode, > the 1st replica is placed on the local machine, > otherwise a random datanode. The 2nd replica is placed on a datanode > that is on a different rack. The 3rd replica is placed on a datanode > which is on a different node of the rack as the second replica. > {quote} > *if the writer is on a datanode, the 1st replica is placed on the local > machine*, actually this can be decided by the hdfs client. The client can > pass {{CreateFlag#NO_LOCAL_WRITE}} that request to not put a block replica on > the local datanode. But subsequent replicas will still follow default block > placement policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13583) RBF: Router admin clrQuota is not synchronized with nameservice
[ https://issues.apache.org/jira/browse/HDFS-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dibyendu Karmakar updated HDFS-13583: - Description: Router admin -clrQuota command is removing the quota from the mount table only, it is not getting synchronized with nameservice. was: Router admin -clrQuota command is removing the quota from the mount table only, it is not getting synchronized with nameservice. we should remove this QUOTA_DONT_SET check from RouterAdminServer#synchronizeQuota {code:java} if (nsQuota != HdfsConstants.QUOTA_DONT_SET || ssQuota != HdfsConstants.QUOTA_DONT_SET) { this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota, ssQuota, null); } {code} > RBF: Router admin clrQuota is not synchronized with nameservice > --- > > Key: HDFS-13583 > URL: https://issues.apache.org/jira/browse/HDFS-13583 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Major > > Router admin -clrQuota command is removing the quota from the mount table > only, it is not getting synchronized with nameservice. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate
[ https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480046#comment-16480046 ] Yiqun Lin commented on HDFS-13573: -- LGTM, +1. Will commit this shortly. > Javadoc for BlockPlacementPolicyDefault is inaccurate > - > > Key: HDFS-13573 > URL: https://issues.apache.org/jira/browse/HDFS-13573 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Yiqun Lin >Assignee: Zsolt Venczel >Priority: Trivial > Attachments: HDFS-13573.01.patch, HDFS-13573.02.patch > > > Current rule of default block placement policy: > {quote}The replica placement strategy is that if the writer is on a datanode, > the 1st replica is placed on the local machine, > otherwise a random datanode. The 2nd replica is placed on a datanode > that is on a different rack. The 3rd replica is placed on a datanode > which is on a different node of the rack as the second replica. > {quote} > *if the writer is on a datanode, the 1st replica is placed on the local > machine*, actually this can be decided by the hdfs client. The client can > pass {{CreateFlag#NO_LOCAL_WRITE}} that request to not put a block replica on > the local datanode. But subsequent replicas will still follow default block > placement policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13586: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.3 2.9.2 3.1.1 3.2.0 2.10.0 Status: Resolved (was: Patch Available) Thanks [~lukmajercak] for the patch and [~giovanni.fumarola] for the review. Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13586.000.patch, HDFS-13586.001.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480038#comment-16480038 ] Íñigo Goiri commented on HDFS-13586: The original Hadoop fsync fix comes from LUCENE-5588. Over there, they do something similar to [^HDFS-13586.001.patch] : {code} if (Constants.WINDOWS && isDir) { // We know from MSDN that Windows does not support fsyncing directories at all return; } {code} +1 on [^HDFS-13586.001.patch]. > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch, HDFS-13586.001.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
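For illustration only, a sketch of the kind of guard discussed above for HDFS-13586, applied to a standalone directory-fsync helper; {{Shell.WINDOWS}} is assumed as the platform check, and this is a sketch rather than the attached patch itself:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

import org.apache.hadoop.util.Shell;

public final class DirSyncUtil {
  private DirSyncUtil() {}

  /**
   * Fsync a directory, skipping the call on Windows where
   * FileChannel.open(READ) on a directory throws AccessDeniedException.
   */
  public static void fsyncDirectory(File dir) throws IOException {
    if (Shell.WINDOWS) {
      // Windows does not support fsyncing directories; nothing to do.
      return;
    }
    try (FileChannel channel =
        FileChannel.open(dir.toPath(), StandardOpenOption.READ)) {
      channel.force(true);
    }
  }
}
{code}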
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480031#comment-16480031 ] Giovanni Matteo Fumarola commented on HDFS-13586: - Thanks [~lukmajercak] for working on this. The patch looks OK for me. > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch, HDFS-13586.001.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480026#comment-16480026 ] genericqa commented on HDFS-13586: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 2m 31s{color} | {color:red} hadoop-common in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 29s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}134m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13586 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924032/HDFS-13586.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 48c90a4100ed 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a97a204 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/24251/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24251/testReport/ | | Max. process+thread count | 1348 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24251/conso
[jira] [Commented] (HDFS-8298) HA: NameNode should not shut down completely without quorum, doesn't recover from temporary network outages
[ https://issues.apache.org/jira/browse/HDFS-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480017#comment-16480017 ] KaiXu commented on HDFS-8298: - The same issue on HDP2.6, any comments on how to workaround? 2018-05-18 09:31:39,653 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for required journal (JournalAndStream(mgr=QJM to [172.31.48.34:8485, 172.31.48.54:8485, 172.31.48.64:8485], stream=QuorumOutputStream starting at txid 9751808)) java.io.IOException: Timed out waiting 2ms for a quorum of nodes to respond. at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137) at org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107) at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113) at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107) at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$8.apply(JournalSet.java:533) at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393) at org.apache.hadoop.hdfs.server.namenode.JournalSet.access$100(JournalSet.java:57) at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.flush(JournalSet.java:529) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:707) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:641) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2691) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2556) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:408) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345) 2018-05-18 09:31:39,654 WARN client.QuorumJournalManager (QuorumOutputStream.java:abort(72)) - Aborting QuorumOutputStream starting at txid 9751808 2018-05-18 09:31:39,654 INFO BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1647)) - BLOCK* neededReplications = 0, pendingReplications = 0. 
2018-05-18 09:31:39,663 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1 2018-05-18 09:31:39,691 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG: > HA: NameNode should not shut down completely without quorum, doesn't recover > from temporary network outages > --- > > Key: HDFS-8298 > URL: https://issues.apache.org/jira/browse/HDFS-8298 > Project: Hadoop HDFS > Issue Type: Improvement > Components: ha, namenode, qjm >Affects Versions: 2.6.0, 2.7.3 > Environment: multiple clients, HDP 2.2, HDP 2.5, CDH etc >Reporter: Hari Sekhon >Priority: Major > > In an HDFS HA setup if there is a temporary problem with contacting journal > nodes (eg. network interruption), the NameNode shuts down entirely, when it > should instead go in to a standby mode so that it can stay online and retry > to achieve quorum later. > If both NameNodes shut themselves off like this then even after the temporary > network outage is resolved, the entire cluster remains offline indefinitely > until operator intervention, whereas it could have self-repaired after > re-contacting the journalnodes and re-achieving quorum. > {code}2015-04-15 15:59:26,900 FATAL namenode.FSEditLog > (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for > required journal (JournalAndStre > am(mgr=QJM to [:8485, :8485, :8485], stream=QuorumOutputStream > starting at txid 54270281)) > java.io.IOException: Interrupted waiting 2ms for a quorum of nodes to > respond. > at > org.apache.had
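One mitigation sometimes used for the transient-outage case described above (an assumption, not a fix proposed in HDFS-8298 itself) is to raise the quorum write timeout so the NameNode tolerates longer JournalNode outages before aborting. A minimal sketch, assuming the standard {{dfs.qjournal.write-txns.timeout.ms}} key; in a real deployment the value would be set in hdfs-site.xml:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class QjmTimeoutSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Default is 20000 ms; raising it (for example to 60000 ms) gives the
    // JournalNode quorum more time to respond during a short network blip.
    conf.setInt("dfs.qjournal.write-txns.timeout.ms", 60000);
    System.out.println(conf.get("dfs.qjournal.write-txns.timeout.ms"));
  }
}
{code}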
[jira] [Comment Edited] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479932#comment-16479932 ] Íñigo Goiri edited comment on HDFS-13586 at 5/18/18 2:07 AM: - This potentially 25 fixed unit tests. I think making this fix all the way in {{IOUtils}} is the way to go. Let's see what Yetus says. was (Author: elgoiri): This is potentially 25 fixed unit tests. I think making this fix all the way in {{IOUtils}} is the way to go. Let's see what Yetus says. > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch, HDFS-13586.001.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-49) Standalone protocol should use grpc in place of netty.
[ https://issues.apache.org/jira/browse/HDDS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480004#comment-16480004 ] Anu Engineer commented on HDDS-49: -- [~msingh] Thanks for the patch. I have some minor comments. 1. DatanodeDetailsProto: Instead of adding another port to DatanodeDetailsProto, can we support a port message? For example, message port { string name; uint32 port; } Then you can have repeated ports, and anyone can insert a new port without having to make significant changes. Instead of a String name, you can also make it an enum if needed. You don't have to fix this issue now, if it is a lot of work. Can you please file a JIRA in HDDS for us to fix this later? 2. MiniOzoneClusterImpl.java:227. Not a change from you, but can you please rename conf to config? That will fix a checkstyle warning and I think it is a very fair warning. 3. With this we now have 3 RPC service ports on the Datanode. Don't you think that is excessive? We should start either Netty or gRPC and not both, especially since we decided not to introduce new pipelines. > Standalone protocol should use grpc in place of netty. > -- > > Key: HDDS-49 > URL: https://issues.apache.org/jira/browse/HDDS-49 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-49.001.patch, HDDS-49.002.patch, HDDS-49.003.patch, > HDDS-49.004.patch, HDDS-49.005.patch, HDDS-49.006.patch > > > Currently an Ozone client in standalone mode communicates with the datanode over > netty. However, while using ratis, grpc is the default protocol. > In order to reduce the number of RPC protocols to handle, this jira aims to > convert the standalone protocol to use grpc. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-78) Add per volume level storage stats in SCM.
[ https://issues.apache.org/jira/browse/HDDS-78?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480003#comment-16480003 ] Shashikant Banerjee commented on HDDS-78: - Patch v0 had unintended changes . Removed and uploaded correct v0 patch. > Add per volume level storage stats in SCM. > --- > > Key: HDDS-78 > URL: https://issues.apache.org/jira/browse/HDDS-78 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDDS-78.00.patch > > > HDDS-38 adds Storage Statistics per Datanode in SCM. This Jira aims to add > per volume per Datanode storage stats in SCM. These will be useful while > figuring out failed volumes, out of space disks, over utilized and under > utilized disks which will be used in balancing the data within a datanode > across multiple disks as well as across the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-78) Add per volume level storage stats in SCM.
[ https://issues.apache.org/jira/browse/HDDS-78?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDDS-78: Attachment: HDDS-78.00.patch > Add per volume level storage stats in SCM. > --- > > Key: HDDS-78 > URL: https://issues.apache.org/jira/browse/HDDS-78 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDDS-78.00.patch > > > HDDS-38 adds Storage Statistics per Datanode in SCM. This Jira aims to add > per volume per Datanode storage stats in SCM. These will be useful while > figuring out failed volumes, out of space disks, over utilized and under > utilized disks which will be used in balancing the data within a datanode > across multiple disks as well as across the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-78) Add per volume level storage stats in SCM.
[ https://issues.apache.org/jira/browse/HDDS-78?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDDS-78: Attachment: (was: HDDS-78.00.patch) > Add per volume level storage stats in SCM. > --- > > Key: HDDS-78 > URL: https://issues.apache.org/jira/browse/HDDS-78 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > HDDS-38 adds Storage Statistics per Datanode in SCM. This Jira aims to add > per volume per Datanode storage stats in SCM. These will be useful while > figuring out failed volumes, out of space disks, over utilized and under > utilized disks which will be used in balancing the data within a datanode > across multiple disks as well as across the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479997#comment-16479997 ] Íñigo Goiri commented on HDFS-13587: Thanks [~huanbang1993] for the explanation, that makes sense. [~chris.douglas], do we want to do something else for the DefaultMetricsSystem.setMiniClusterMode(true) in the MiniDFSCluster or we just go with the approach in [^HDFS-13587.001.patch]. > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch, HDFS-13587.001.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.j
[jira] [Commented] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows
[ https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479989#comment-16479989 ] Xiao Liang commented on HDFS-13588: --- Yes it does, uploaded [^HDFS-13588.000.patch] for trunk. > Fix TestFsDatasetImpl test failures on Windows > -- > > Key: HDFS-13588 > URL: https://issues.apache.org/jira/browse/HDFS-13588 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Attachments: HDFS-13588-branch-2.000.patch, HDFS-13588.000.patch > > > Some test cases of TestFsDatasetImpl failed on Windows due to: > # using File#setWritable interface; > # test directory conflict between test cases (details in HDFS-13408); > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
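A hedged sketch of the kind of changes implied by the HDFS-13588 description above (not necessarily what the attached patches do): use {{FileUtil.setWritable}}, which goes through winutils on Windows, instead of {{java.io.File#setWritable}}, and give each test case its own randomized base directory:

{code:java}
import java.io.File;

import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.test.GenericTestUtils;

public class WindowsFriendlyTestDirs {
  public static void main(String[] args) throws Exception {
    // A per-test randomized directory avoids collisions between test cases
    // that would otherwise share the same target/test/data path on Windows.
    File baseDir = GenericTestUtils.getRandomizedTestDir();
    File dataDir = new File(baseDir, "data1"); // illustrative subdirectory
    dataDir.mkdirs();

    // FileUtil.setWritable uses chmod via winutils on Windows, whereas
    // java.io.File#setWritable is unreliable there (the cause cited above).
    FileUtil.setWritable(dataDir, false);
    FileUtil.setWritable(dataDir, true);
  }
}
{code}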
[jira] [Updated] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows
[ https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Liang updated HDFS-13588: -- Attachment: HDFS-13588.000.patch > Fix TestFsDatasetImpl test failures on Windows > -- > > Key: HDFS-13588 > URL: https://issues.apache.org/jira/browse/HDFS-13588 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Attachments: HDFS-13588-branch-2.000.patch, HDFS-13588.000.patch > > > Some test cases of TestFsDatasetImpl failed on Windows due to: > # using File#setWritable interface; > # test directory conflict between test cases (details in HDFS-13408); > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479986#comment-16479986 ] Anbang Hu commented on HDFS-13587: -- In [^HDFS-13587.001.patch], I use GenericTestUtils.getRandomizedTestDir().getAbsolutePath() for random directory generation. Note that DefaultMetricsSystem.setMiniClusterMode(true) needs to be set for proper Metrics creation, because in DefaultMetricsSystem.java: {code:java} synchronized String newSourceName(String name, boolean dupOK) { if (sourceNames.map.containsKey(name)) { if (dupOK) { return name; } else if (!miniClusterMode) { throw new MetricsException("Metrics source "+ name +" already exists!"); } } return sourceNames.uniqueName(name); } {code} If miniClusterMode is false, exception will be thrown. TestQuorumJournalManager originally uses default getBaseDirectory from MiniDFSCluster, which sets the variable to true in MiniDFSCluster. > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch, HDFS-13587.001.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. 
> at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExp
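To make the setup described in the comment above concrete, a minimal sketch of a test fixture along those lines (the class and field names are illustrative):

{code:java}
import java.io.File;

import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Before;

public class TestWithRandomizedJournalDirs {
  private File testDir;

  @Before
  public void setup() {
    // Without mini-cluster mode, re-registering a metrics source with an
    // existing name throws MetricsException (see newSourceName above).
    DefaultMetricsSystem.setMiniClusterMode(true);

    // A randomized per-run directory instead of the shared MiniDFSCluster
    // base directory, so leftover journal data cannot break format() on Windows.
    testDir = GenericTestUtils.getRandomizedTestDir();
    testDir.mkdirs();
  }
}
{code}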
[jira] [Commented] (HDFS-12378) TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk
[ https://issues.apache.org/jira/browse/HDFS-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479982#comment-16479982 ] Íñigo Goiri commented on HDFS-12378: We'd like to backport to branch-2.9, opened HDFS-13590. > TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk > -- > > Key: HDFS-12378 > URL: https://issues.apache.org/jira/browse/HDFS-12378 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha4 >Reporter: Xiao Chen >Assignee: Lei (Eddy) Xu >Priority: Blocker > Labels: flaky-test > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12378.00.patch, HDFS-12378.01.patch > > > Saw on > https://builds.apache.org/job/PreCommit-HDFS-Build/20928/testReport/org.apache.hadoop.hdfs/TestClientProtocolForPipelineRecovery/testZeroByteBlockRecovery/: > Error Message > {noformat} > Failed to replace a bad datanode on the existing pipeline due to no more good > datanodes being available to try. (Nodes: > current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]], > > original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]). > The current failed datanode replacement policy is ALWAYS, and a client may > configure this via > 'dfs.client.block.write.replace-datanode-on-failure.policy' in its > configuration. > {noformat} > Stacktrace > {noformat} > java.io.IOException: Failed to replace a bad datanode on the existing > pipeline due to no more good datanodes being available to try. (Nodes: > current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]], > > original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]). > The current failed datanode replacement policy is ALWAYS, and a client may > configure this via > 'dfs.client.block.write.replace-datanode-on-failure.policy' in its > configuration. 
> at > org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1322) > at > org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1388) > at > org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1587) > at > org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1488) > at > org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470) > at > org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274) > at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684) > {noformat} > Standard Output > {noformat} > 2017-08-30 18:02:37,714 [main] INFO hdfs.MiniDFSCluster > (MiniDFSCluster.java:(469)) - starting cluster: numNameNodes=1, > numDataNodes=3 > Formatting using clusterid: testClusterID > 2017-08-30 18:02:37,716 [main] INFO namenode.FSEditLog > (FSEditLog.java:newInstance(224)) - Edit logging is async:false > 2017-08-30 18:02:37,716 [main] INFO namenode.FSNamesystem > (FSNamesystem.java:(742)) - KeyProvider: null > 2017-08-30 18:02:37,716 [main] INFO namenode.FSNamesystem > (FSNamesystemLock.java:(120)) - fsLock is fair: true > 2017-08-30 18:02:37,716 [main] INFO namenode.FSNamesystem > (FSNamesystemLock.java:(136)) - Detailed lock hold time metrics > enabled: false > 2017-08-30 18:02:37,717 [main] INFO namenode.FSNamesystem > (FSNamesystem.java:(763)) - fsOwner = jenkins (auth:SIMPLE) > 2017-08-30 18:02:37,717 [main] INFO namenode.FSNamesystem > (FSNamesystem.java:(764)) - supergroup = supergroup > 2017-08-30 18:02:37,717 [main] INFO namenode.FSNamesystem > (FSNamesystem.java:(765)) - isPermissionEnabled = true > 2017-08-30 18:02:37,717 [main] INFO namenode.FSNamesystem > (FSNamesystem.java:(776)) - HA Enabled: false > 2017-08-30 18:02:37,718 [main] INFO common.Util > (Util.java:isDiskStatsEnabled(395)) - > dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO > profiling > 2017-08-30 18:02:37,718 [main] INFO blockmanagement.DatanodeManager > (DatanodeManager.java:(301)) - dfs.block.invalidate.limit: > configured=1000, counted=60, effected=1000 > 2017-08-30 18:02:37,718 [main] INFO blockmanagement.DatanodeManager > (DatanodeManager.java:(309)) - > dfs.namenode.datanode.registration.ip-hostname-check=true > 2017-08-30 18:02:37,719 [main] INFO blockmanagement.BlockManager > (InvalidateBlocks.java:printBlockDeletionTime(76)) - > dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 > 2017-08-30 18:02:37,719 [main] INFO blockmanagement.BlockManager > (InvalidateBlocks.java:printBlockDeletionTime(82)) - The block de
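As the exception message above suggests, small test clusters sometimes relax the replace-datanode-on-failure behaviour through client configuration. A minimal sketch under that assumption (this is not the fix in the attached HDFS-12378 patches):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ReplaceDatanodePolicySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With only three datanodes there may be no spare node to swap in, so a
    // test can disable replacement, or set the policy to NEVER, to avoid the
    // "Failed to replace a bad datanode" error during pipeline recovery.
    conf.setBoolean(
        "dfs.client.block.write.replace-datanode-on-failure.enable", false);
    conf.set(
        "dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    System.out.println(
        conf.get("dfs.client.block.write.replace-datanode-on-failure.policy"));
  }
}
{code}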
[jira] [Updated] (HDFS-13590) Backport HDFS-12378 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13590: --- Description: The unit tests are flaky in 2.9. We should backport this. > Backport HDFS-12378 to branch-2 > --- > > Key: HDFS-13590 > URL: https://issues.apache.org/jira/browse/HDFS-13590 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, test >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Labels: flaky-test > Fix For: 2.9.0, 2.9.1, 2.9.2 > > Attachments: HDFS-13590_branch-2.000.patch > > > The unit tests are flaky in 2.9. We should backport this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13587: - Attachment: (was: HDFS-13587.001.patch) > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch, HDFS-13587.001.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit
[jira] [Updated] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13587: - Attachment: HDFS-13587.001.patch > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch, HDFS-13587.001.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.Pa
[jira] [Updated] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13587: - Attachment: HDFS-13587.001.patch > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch, HDFS-13587.001.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.Pa
[jira] [Updated] (HDFS-13590) Backport HDFS-12378 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-13590: -- Attachment: HDFS-13590_branch-2.000.patch > Backport HDFS-12378 to branch-2 > --- > > Key: HDFS-13590 > URL: https://issues.apache.org/jira/browse/HDFS-13590 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, test >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Labels: flaky-test > Fix For: 2.9.0, 2.9.1, 2.9.2 > > Attachments: HDFS-13590_branch-2.000.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13590) Backport HDFS-12378 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-13590: -- Fix Version/s: 2.9.0 2.9.1 2.9.2 > Backport HDFS-12378 to branch-2 > --- > > Key: HDFS-13590 > URL: https://issues.apache.org/jira/browse/HDFS-13590 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, test >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Major > Labels: flaky-test > Fix For: 2.9.0, 2.9.1, 2.9.2 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info
[ https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479981#comment-16479981 ] Shashikant Banerjee commented on HDDS-76: - Thanks [~bharatviswa] for the reviews. 1) Yes, there will be a continuation JIRA to use this info on the SCM end; HDDS-78 will track it. 2) Yes, the purpose of sending the data dir paths is to have them show up in JMX, as discussed in HDDS-38. > Modify SCMStorageReportProto to include the data dir paths as well as the > StorageType info > -- > > Key: HDDS-76 > URL: https://issues.apache.org/jira/browse/HDDS-76 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-76.00.patch, HDDS-76.01.patch > > > Currently, SCMStorageReport contains the storageUUID, which is sent across to > SCM for maintaining storage report info. This JIRA aims to include the data > dir paths of the actual disks as well as the StorageType info for each volume > on the datanode to be sent to SCM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13590) Backport HDFS-12378 to branch-2
Lukas Majercak created HDFS-13590: - Summary: Backport HDFS-12378 to branch-2 Key: HDFS-13590 URL: https://issues.apache.org/jira/browse/HDFS-13590 Project: Hadoop HDFS Issue Type: Bug Components: datanode, hdfs, test Reporter: Lukas Majercak Assignee: Lukas Majercak -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.
[ https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479957#comment-16479957 ] genericqa commented on HDFS-13399: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 46s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 26s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 21s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 18s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 24s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 8s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 4m 39s{color} | {color:orange} root: The patch generated 12 new + 625 unchanged - 0 fixed = 637 total (was 625) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 33s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 37s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client generated 11 new + 0 unchanged - 0 fixed = 11 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 26s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 25s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 27s{color} | {color:red} hadoop-hdfs in the pat
[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows
[ https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479950#comment-16479950 ] Hudson commented on HDFS-13560: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14230 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14230/]) HDFS-13560. Insufficient system resources exist to complete the (inigoiri: rev 53b807a6a8486cefe0b036f7893de9f619bd44a1) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java > Insufficient system resources exist to complete the requested service for > some tests on Windows > --- > > Key: HDFS-13560 > URL: https://issues.apache.org/jira/browse/HDFS-13560 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, > HDFS-13560.002.patch, HDFS-13560.003.patch, HDFS-13560.004.patch > > > On Windows, there are 30 tests in HDFS component giving error like the > following: > {color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, > Time elapsed: 50.149 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles{color} > {color:#d04437} [ERROR] > testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles) > Time elapsed: 16.513 s <<< ERROR!{color} > {color:#d04437} 1450: Insufficient system resources exist to complete the > requested service.{color} > {color:#d04437}at > org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native > Method){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:495){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color} > {color:#d04437} at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color} > {color:#d04437} at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color} > {color:#33}The involved tests are{color} > {code:java} > TestLazyPersistFiles,TestLazyPersistPolicy,TestLazyPersistReplicaRecovery,TestLazyPersistLockedMemory#testWritePipelineFailure,TestLazyPers
[jira] [Commented] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows
[ https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479946#comment-16479946 ] Íñigo Goiri commented on HDFS-13588: Does this issue apply to trunk? > Fix TestFsDatasetImpl test failures on Windows > -- > > Key: HDFS-13588 > URL: https://issues.apache.org/jira/browse/HDFS-13588 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Attachments: HDFS-13588-branch-2.000.patch > > > Some test cases of TestFsDatasetImpl failed on Windows due to: > # using File#setWritable interface; > # test directory conflict between test cases (details in HDFS-13408); > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows
[ https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13588: --- Status: Patch Available (was: Open) > Fix TestFsDatasetImpl test failures on Windows > -- > > Key: HDFS-13588 > URL: https://issues.apache.org/jira/browse/HDFS-13588 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Attachments: HDFS-13588-branch-2.000.patch > > > Some test cases of TestFsDatasetImpl failed on Windows due to: > # using File#setWritable interface; > # test directory conflict between test cases (details in HDFS-13408); > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized
Hanisha Koneru created HDFS-13589: - Summary: Add dfsAdmin command to query if "upgrade" is finalized Key: HDFS-13589 URL: https://issues.apache.org/jira/browse/HDFS-13589 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs Reporter: Hanisha Koneru Assignee: Hanisha Koneru When we do an upgrade on a Namenode (non-rollingUpgrade), we should be able to query whether the upgrade has been finalized or not. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows
[ https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Liang updated HDFS-13588: -- Attachment: HDFS-13588-branch-2.000.patch > Fix TestFsDatasetImpl test failures on Windows > -- > > Key: HDFS-13588 > URL: https://issues.apache.org/jira/browse/HDFS-13588 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Attachments: HDFS-13588-branch-2.000.patch > > > Some test cases of TestFsDatasetImpl failed on Windows due to: > # using File#setWritable interface; > # test directory conflict between test cases (details in HDFS-13408); > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13556) TestNestedEncryptionZones does not shut down cluster
[ https://issues.apache.org/jira/browse/HDFS-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479939#comment-16479939 ] Hudson commented on HDFS-13556: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14229 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14229/]) HDFS-13556. TestNestedEncryptionZones does not shut down cluster. (inigoiri: rev a97a2042f210e9db97646baad6f56064d672f447) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNestedEncryptionZones.java > TestNestedEncryptionZones does not shut down cluster > > > Key: HDFS-13556 > URL: https://issues.apache.org/jira/browse/HDFS-13556 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13556.000.patch, HDFS-13556.001.patch > > > Without shutting down cluster, there is conflict at least on Windows. > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones{color} > {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time > elapsed: 33.631 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones{color} > {color:#d04437}[ERROR] > testNestedEncryptionZones(org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones) > Time elapsed: 0.03 s <<< ERROR!{color} > {color:#d04437}java.io.IOException: Could not fully delete > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones.setup(TestNestedEncryptionZones.java:104){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color} > {color:#d04437} at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color} > {color:#d04437} at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color} > {color:#d04437} at > 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color} > {color:#d04437} at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color} > {color:#d04437} at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color} > {color:#d04437} at > org.junit.runners.ParentRunner.run(ParentRunner.java:309){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color} > {color:#d04437} at > org.apache.maven.surefire.booter.
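The HDFS-13556 change boils down to releasing the MiniDFSCluster in a teardown so the next test can delete and recreate the name/data directories; on Windows an open handle otherwise blocks that deletion. Below is a minimal sketch of that pattern only, not the committed patch; the class and field names are illustrative assumptions.
{code:java}
// Illustrative only: release the MiniDFSCluster in an @After so later tests
// can delete and reuse target/test/data/dfs (open handles block deletion on Windows).
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;

public class ClusterTeardownSketch {
  private MiniDFSCluster cluster; // assumed to be created in a @Before method

  @After
  public void shutdownCluster() {
    if (cluster != null) {
      cluster.shutdown(); // stops NameNode/DataNodes and closes their storage handles
      cluster = null;
    }
  }
}
{code}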
[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479938#comment-16479938 ] Anbang Hu commented on HDFS-13587: -- [^HDFS-13587.000.patch] applies to both trunk and branch-2. [~chris.douglas], [~surmountian] Can you take a look since this is similar to part of -[HDFS-13408|https://issues.apache.org/jira/browse/HDFS-13408]-. > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedul
[jira] [Updated] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13587: - Attachment: (was: HDFS-13587.000.patch) > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$
[jira] [Created] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows
Xiao Liang created HDFS-13588: - Summary: Fix TestFsDatasetImpl test failures on Windows Key: HDFS-13588 URL: https://issues.apache.org/jira/browse/HDFS-13588 Project: Hadoop HDFS Issue Type: Bug Reporter: Xiao Liang Assignee: Xiao Liang Some test cases of TestFsDatasetImpl failed on Windows due to: # using File#setWritable interface; # test directory conflict between test cases (details in HDFS-13408); -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
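The two causes listed above suggest two kinds of changes: replacing java.io.File#setWritable with Hadoop's platform-aware helper, and giving each test case its own randomized base directory (the approach from HDFS-13408). The sketch below is hedged: it assumes the FileUtil and GenericTestUtils helpers are available on the test classpath and is not the attached patch; the directory name is hypothetical.
{code:java}
// Illustrative sketch, not the attached HDFS-13588 patch.
import java.io.File;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.test.GenericTestUtils;

public class FsDatasetTestDirSketch {
  public static void main(String[] args) {
    // Platform-aware permission toggle instead of java.io.File#setWritable,
    // which is unreliable on Windows.
    File finalizedDir = new File("finalized"); // hypothetical directory used by a test
    FileUtil.setWritable(finalizedDir, false);

    // Randomized per-test base directory so test cases do not collide
    // in the shared target/test/data location.
    File baseDir = GenericTestUtils.getRandomizedTestDir();
    System.out.println("per-test base dir: " + baseDir);
  }
}
{code}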
[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479936#comment-16479936 ] Anbang Hu commented on HDFS-13587: -- On Windows, testNewerVersionOfSegmentWins2 seems to leave behind a Java handle that holds E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal\current\committed-txid, which causes the following tests to fail. My proposal is to randomize the journalnode path for each unit test so that the tests are isolated from one another. > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. 
> at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4C
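The proposal in the comment above (randomizing the journalnode path per test) can be sketched as follows. This is only an illustration of the isolation idea, not the attached HDFS-13587 patch; the helper method, system property default, and directory layout are assumptions.
{code:java}
// Sketch only: derive a unique journal storage root per test method so a leaked
// handle from one test cannot keep the next test's "current" directory non-empty.
import java.io.File;
import java.util.UUID;
import org.junit.Rule;
import org.junit.rules.TestName;

public class PerTestJournalDirSketch {
  @Rule
  public TestName testName = new TestName(); // JUnit 4 rule exposing the running method name

  File newJournalBaseDir() {
    // e.g. target/test/data/testNewerVersionOfSegmentWins2-<uuid>
    String base = System.getProperty("test.build.data", "target/test/data");
    return new File(base, testName.getMethodName() + "-" + UUID.randomUUID());
  }
}
{code}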
[jira] [Updated] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13587: - Attachment: HDFS-13587.000.patch Status: Patch Available (was: Open) > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) >
[jira] [Updated] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13587: - Attachment: HDFS-13587.000.patch > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13587.000.patch > > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(
[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479935#comment-16479935 ] Íñigo Goiri commented on HDFS-13587: The same 12 failures show in the [Windows daily build|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.qjournal.client/TestQuorumJournalManager/]. > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runC
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479932#comment-16479932 ] Íñigo Goiri commented on HDFS-13586: This is potentially 25 fixed unit tests. I think making this fix all the way in {{IOUtils}} is the way to go. Let's see what Yetus says. > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch, HDFS-13586.001.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
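The comment above argues for handling the Windows limitation inside {{IOUtils}} rather than in every caller. The following is a hedged sketch of that kind of guard, not the committed change: open the directory channel for the sync and treat AccessDeniedException as a no-op on platforms that refuse to open directories.
{code:java}
// Sketch only: skip the directory fsync on platforms where FileChannel.open(dir, READ)
// is rejected (Windows), instead of failing the write pipeline's hsync().
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.AccessDeniedException;
import java.nio.file.StandardOpenOption;

public final class DirFsyncSketch {
  static void fsyncDirectory(File dir) throws IOException {
    try (FileChannel channel =
             FileChannel.open(dir.toPath(), StandardOpenOption.READ)) {
      channel.force(true); // flush directory metadata where the platform supports it
    } catch (AccessDeniedException e) {
      // Windows refuses to open a directory this way; treat the sync as a no-op.
    }
  }
}
{code}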
[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica
[ https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479933#comment-16479933 ] genericqa commented on HDFS-13448: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 25m 31s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 25m 31s{color} | {color:red} root generated 2 new + 1465 unchanged - 0 fixed = 1467 total (was 1465) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 31s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 54s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 51s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13448 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924015/HDFS-13448.8.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 847e2b863437 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Bui
[jira] [Updated] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows
[ https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13560: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.3 2.9.2 3.1.1 3.2.0 2.10.0 Status: Resolved (was: Patch Available) Thanks [~huanbang1993] for the patch and [~giovanni.fumarola] for the review. Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. > Insufficient system resources exist to complete the requested service for > some tests on Windows > --- > > Key: HDFS-13560 > URL: https://issues.apache.org/jira/browse/HDFS-13560 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, > HDFS-13560.002.patch, HDFS-13560.003.patch, HDFS-13560.004.patch > > > On Windows, there are 30 tests in HDFS component giving error like the > following: > {color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, > Time elapsed: 50.149 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles{color} > {color:#d04437} [ERROR] > testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles) > Time elapsed: 16.513 s <<< ERROR!{color} > {color:#d04437} 1450: Insufficient system resources exist to complete the > requested service.{color} > {color:#d04437}at > org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native > Method){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:495){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color} > {color:#d04437} at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color} > {color:#d04437} at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color} > {color:#33}The involved tests are{color} > {code:java} > TestLazyPersistFiles,TestLazyPersistPolicy,TestLazyPersistReplicaRecovery,TestLazyPersistLockedMemory#testWritePipelineFailure,TestLazyPersistLockedMemory#testShortBlockFinalized,TestLazyPersistReplicaPlacement#testRamDiskNotChosenByDefault,TestLazyPersistReplicaPlacement#testFallbackToDisk,TestLazyPersistReplicaPlacement#testPlacementOnSizeLimitedRamDisk,TestLazyPersistReplicaPlacement#testPlacementOnRamDisk,TestLazyWriter#testDfsUsageCreateDelete,TestLazyWriter#testDeleteAfterPersist,TestLazyWriter#testDeleteBeforePersist
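The extendWorkingSetSize frame at the top of the trace is the giveaway: these tests configure DataNode locked memory (dfs.datanode.max.locked.memory) for the RAM-disk scenarios, and on Windows the DataNode responds by asking the OS to grow the process working set, which fails with error 1450 when the machine cannot grant the request. Below is a minimal sketch of that trigger path; the config constant is the standard DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_KEY, while the concrete value and the explicit teardown are illustrative assumptions, not the content of the attached patches.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class LockedMemorySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Requesting locked memory is what leads DataNode.startDataNode() to call
    // NativeIO$Windows.extendWorkingSetSize() on Windows; if the OS cannot
    // grant the working-set increase, startup fails with error 1450.
    conf.setLong(DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_KEY, 64 * 1024);
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
    } finally {
      cluster.shutdown(); // release ports and storage directories
    }
  }
}
{code}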
[jira] [Updated] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13587: - Issue Type: Bug (was: Test) > TestQuorumJournalManager fails on Windows > - > > Key: HDFS-13587 > URL: https://issues.apache.org/jira/browse/HDFS-13587 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > > There are 12 test failures in TestQuorumJournalManager on Windows. Local run > shows: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: > 106.81 s <<< FAILURE! - in > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager > [ERROR] > testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) > Time elapsed: 1.93 s <<< ERROR! > org.apache.hadoop.hdfs.qjournal.client.QuorumException: > Could not format one or more JournalNodes. 2 successful responses: > 127.0.0.1:27044: null [success] > 127.0.0.1:27064: null [success] > 1 exceptions thrown: > 127.0.0.1:27054: Directory > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal > is in an inconsistent state: Can't format the storage directory because the > current directory is not empty. > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) > at > org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) > at > org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) > at > org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) > at > org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) > at > org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) > at > org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) > at > org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runn
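The formatting failure above is the classic symptom of stale JournalNode storage left behind by an earlier run or test: Storage.checkEmptyCurrent refuses to format because the current directory is not empty, and on Windows leftover directories often cannot be deleted while handles are still open. A hedged sketch of the kind of pre-test cleanup and post-test shutdown that avoids this follows; the directory layout and class name are illustrative assumptions, not the eventual fix.

{code:java}
import java.io.File;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.qjournal.MiniJournalCluster;

public class JournalTestCleanupSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical location; the real tests derive it from test.build.data.
    File testData = new File(
        System.getProperty("test.build.data", "target/test/data"), "dfs");
    // Remove storage left behind by a previous (possibly crashed) run so
    // that JNStorage.format() finds an empty "current" directory.
    FileUtil.fullyDelete(testData);

    MiniJournalCluster journals =
        new MiniJournalCluster.Builder(new HdfsConfiguration())
            .numJournalNodes(3)
            .build();
    try {
      // ... exercise QuorumJournalManager against the running JournalNodes ...
    } finally {
      journals.shutdown(); // close handles so Windows can delete the dirs
    }
  }
}
{code}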
[jira] [Updated] (HDFS-13587) TestQuorumJournalManager fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anbang Hu updated HDFS-13587: - Description: There are 12 test failures in TestQuorumJournalManager on Windows. Local run shows: {color:#d04437}[INFO] Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: 106.81 s <<< FAILURE! - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager [ERROR] testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager) Time elapsed: 1.93 s <<< ERROR! org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not format one or more JournalNodes. 2 successful responses: 127.0.0.1:27044: null [success] 127.0.0.1:27064: null [success] 1 exceptions thrown: 127.0.0.1:27054: Directory E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal is in an inconsistent state: Can't format the storage directory because the current directory is not empty. at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574) at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185) at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221) at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157) at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145) at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606) at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212) at org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) at
[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows
[ https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479915#comment-16479915 ] Íñigo Goiri commented on HDFS-13560: Thank you [~huanbang1993] for double checking. +1 on [^HDFS-13560.004.patch]. Committing. > Insufficient system resources exist to complete the requested service for > some tests on Windows > --- > > Key: HDFS-13560 > URL: https://issues.apache.org/jira/browse/HDFS-13560 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, > HDFS-13560.002.patch, HDFS-13560.003.patch, HDFS-13560.004.patch > > > On Windows, there are 30 tests in HDFS component giving error like the > following: > {color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, > Time elapsed: 50.149 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles{color} > {color:#d04437} [ERROR] > testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles) > Time elapsed: 16.513 s <<< ERROR!{color} > {color:#d04437} 1450: Insufficient system resources exist to complete the > requested service.{color} > {color:#d04437}at > org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native > Method){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:495){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color} > {color:#d04437} at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color} > {color:#d04437} at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color} > {color:#33}The involved tests are{color} > {code:java} > TestLazyPersistFiles,TestLazyPersistPolicy,TestLazyPersistReplicaRecovery,TestLazyPersistLockedMemory#testWritePipelineFailure,TestLazyPersistLockedMemory#testShortBlockFinalized,TestLazyPersistReplicaPlacement#testRamDiskNotChosenByDefault,TestLazyPersistReplicaPlacement#testFallbackToDisk,TestLazyPersistReplicaPlacement#testPlacementOnSizeLimitedRamDisk,TestLazyPersistReplicaPlacement#testPlacementOnRamDisk,TestLazyWriter#testDfsUsageCreateDelete,TestLazyWriter#testDeleteAfterPersist,TestLazyWriter#testDeleteBeforePersist,TestLazyWriter#testLazyPersistBlocksAreSaved,TestDirectoryScanner#testDeleteBlockOnTransientStorage,TestDirectoryScanner#testRetainBlockOnPersistentStorage,TestDirectoryScanner#testExceptionHandlingWhileDirectoryScan,TestDirectoryScanner#testDirectoryScanner,TestDirectoryScanner#testThrott
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479912#comment-16479912 ] Lukas Majercak commented on HDFS-13586: --- Added patch001 to use cleaner Shell.WINDOWS instead of Path > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch, HDFS-13586.001.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
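As the comment above notes, the patch direction is to skip the directory fsync on Windows rather than let FileChannel.open(READ) throw, with patch 001 switching the platform check to Shell.WINDOWS. A minimal sketch of that guard is below; the wrapper method name is an assumption for illustration, not the code in the attached patches.

{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Shell;

public final class DirFsyncSketch {
  private DirFsyncSketch() {}

  /**
   * Hypothetical helper: flush a directory's metadata to disk, except on
   * Windows, where FileChannel.open(dir, READ) throws AccessDeniedException,
   * so the sync is skipped as a best-effort no-op.
   */
  public static void fsyncDirectory(File dir) throws IOException {
    if (Shell.WINDOWS) {
      return;
    }
    IOUtils.fsync(dir);
  }
}
{code}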
[jira] [Updated] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-13586: -- Attachment: HDFS-13586.001.patch > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch, HDFS-13586.001.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows
[ https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479909#comment-16479909 ] Anbang Hu commented on HDFS-13560: -- [~elgoiri] I ran the 30 tests again on local Windows machine and got the same result in my [first post|https://issues.apache.org/jira/browse/HDFS-13560?focusedCommentId=16475420&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16475420]. The root cause to "All datanodes are bad" is being dealt with in HDFS-13586. > Insufficient system resources exist to complete the requested service for > some tests on Windows > --- > > Key: HDFS-13560 > URL: https://issues.apache.org/jira/browse/HDFS-13560 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, > HDFS-13560.002.patch, HDFS-13560.003.patch, HDFS-13560.004.patch > > > On Windows, there are 30 tests in HDFS component giving error like the > following: > {color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, > Time elapsed: 50.149 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles{color} > {color:#d04437} [ERROR] > testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles) > Time elapsed: 16.513 s <<< ERROR!{color} > {color:#d04437} 1450: Insufficient system resources exist to complete the > requested service.{color} > {color:#d04437}at > org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native > Method){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:495){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color} > {color:#d04437} at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color} > {color:#d04437} at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color} > {color:#33}The involved tests are{color} > {code:java} > TestLazyPersistFiles,TestLazyPersistPolicy,TestLazyPersistReplicaRecovery,TestLazyPersistLockedMemory#testWritePipelineFailure,TestLazyPersistLockedMemory#testShortBlockFinalized,TestLazyPersistReplicaPlacement#testRamDiskNotChosenByDefault,TestLazyPersistReplicaPlacement#testFallbackToDisk,TestLazyPersistReplicaPlacement#testPlacementOnSizeLimitedRamDisk,TestLazyPersistReplicaPlacement#testPlacementOnRamDisk,TestLazyWriter#testDfsUsageCreateDelete,TestLazyWriter#testDeleteAfterPersist,TestLazyWriter#testDeleteBeforePersist,TestLazyWriter#testLazyPersistBlocks
[jira] [Updated] (HDFS-13556) TestNestedEncryptionZones does not shut down cluster
[ https://issues.apache.org/jira/browse/HDFS-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13556: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.3 2.9.2 3.1.1 3.2.0 2.10.0 Status: Resolved (was: Patch Available) Thanks [~huanbang1993] for the patch and [~giovanni.fumarola] for the review. Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. > TestNestedEncryptionZones does not shut down cluster > > > Key: HDFS-13556 > URL: https://issues.apache.org/jira/browse/HDFS-13556 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13556.000.patch, HDFS-13556.001.patch > > > Without shutting down cluster, there is conflict at least on Windows. > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones{color} > {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time > elapsed: 33.631 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones{color} > {color:#d04437}[ERROR] > testNestedEncryptionZones(org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones) > Time elapsed: 0.03 s <<< ERROR!{color} > {color:#d04437}java.io.IOException: Could not fully delete > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones.setup(TestNestedEncryptionZones.java:104){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color} > {color:#d04437} at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color} > {color:#d04437} at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color} > {color:#d04437} at > 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color} > {color:#d04437} at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color} > {color:#d04437} at > org.junit.runners.ParentRunner.run(ParentRunner.java:309){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){color} >
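The pattern behind this class of Windows-only failure, and the shape of the fix here, is simply to shut the MiniDFSCluster down after each test so the next setup can delete the name directories. A hedged JUnit 4 sketch of that teardown follows; the class and test names are illustrative, not the committed patch.

{code:java}
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ClusterTeardownSketch {
  private MiniDFSCluster cluster;

  @Before
  public void setup() throws Exception {
    cluster = new MiniDFSCluster.Builder(new HdfsConfiguration())
        .numDataNodes(1)
        .build();
    cluster.waitActive();
  }

  @After
  public void teardown() {
    // Without this, the next setup cannot fully delete
    // ...\target\test\data\dfs\name1 on Windows and fails with IOException.
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }

  @Test
  public void testSomething() throws Exception {
    // ... test body ...
  }
}
{code}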
[jira] [Updated] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13586: --- Status: Patch Available (was: Open) > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13587) TestQuorumJournalManager fails on Windows
Anbang Hu created HDFS-13587: Summary: TestQuorumJournalManager fails on Windows Key: HDFS-13587 URL: https://issues.apache.org/jira/browse/HDFS-13587 Project: Hadoop HDFS Issue Type: Test Reporter: Anbang Hu Assignee: Anbang Hu -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479903#comment-16479903 ] Anbang Hu commented on HDFS-13586: -- [~lukmajercak]'s patch can fix the following tests that complain about "All datanodes are bad" as well (referring to daily Windows build 469): * [TestHdfsCryptoStreams|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.crypto/TestHdfsCryptoStreams/] * [TestOverReplicatedBlocks|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestOverReplicatedBlocks/] * [TestBlockRecovery|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.datanode/TestBlockRecovery/] * [TestDataNodeMetrics|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeMetrics/] * [TestStorageMover|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.mover/TestStorageMover/] * [TestFSImage|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.namenode/TestFSImage/] * [TestFSImageWithSnapshot|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.namenode/TestFSImageWithSnapshot/] * [TestNamenodeCapacityReport|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.namenode/TestNamenodeCapacityReport/] * [TestFileLengthOnClusterRestart|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestFileLengthOnClusterRestart/] * [TestFileCreation|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestFileCreation/] * [TestHFlush|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestHFlush/] * [TestLeaseRecovery|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestLeaseRecovery/] > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. 
> > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) --
[jira] [Commented] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc
[ https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479901#comment-16479901 ] genericqa commented on HDDS-7: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDDS-4 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 51s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 24s{color} | {color:green} HDDS-4 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} HDDS-4 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} HDDS-4 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} HDDS-4 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 2m 19s{color} | {color:red} branch has errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/acceptance-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} HDDS-4 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} HDDS-4 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 16s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 1 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/acceptance-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} acceptance-test in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDDS-7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924023/HDDS-7-HDDS-4.02.patch
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479900#comment-16479900 ] Lukas Majercak commented on HDFS-13586: --- Posted a patch to ignore fsync on directories on Windows. Should apply to branch-2 as well. > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13556) TestNestedEncryptionZones does not shut down cluster
[ https://issues.apache.org/jira/browse/HDFS-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479898#comment-16479898 ] Íñigo Goiri commented on HDFS-13556: The failed unit tests are unrelated. [^HDFS-13556.001.patch] LGTM. +1 Committing. > TestNestedEncryptionZones does not shut down cluster > > > Key: HDFS-13556 > URL: https://issues.apache.org/jira/browse/HDFS-13556 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Attachments: HDFS-13556.000.patch, HDFS-13556.001.patch > > > Without shutting down cluster, there is conflict at least on Windows. > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones{color} > {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time > elapsed: 33.631 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones{color} > {color:#d04437}[ERROR] > testNestedEncryptionZones(org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones) > Time elapsed: 0.03 s <<< ERROR!{color} > {color:#d04437}java.io.IOException: Could not fully delete > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones.setup(TestNestedEncryptionZones.java:104){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color} > {color:#d04437} at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color} > {color:#d04437} at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color} > {color:#d04437} at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color} > {color:#d04437} at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color} > {color:#d04437} at > org.junit.runners.ParentRunner.run(ParentRunner.java:309){color} > 
{color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){color} > {color:#d04437}[INFO]{color} > {color:#d04437}[INFO] Results:{color} > {color:#d04437}[INFO]{color} > {color:#d04437}[ERROR] Errors:{color} > {color:#d04437}[ERROR] TestNestedEncryptionZones.setup:104 ╗ IO Could not > fully delete E:\OSS\hadoop-...{color} > {color:#d04437}[INFO]{color} > {color:#
[jira] [Updated] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-13586: -- Attachment: HDFS-13586.000.patch > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > Attachments: HDFS-13586.000.patch > > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-38) Add SCMNodeStorage map in SCM class to store storage statistics per Datanode
[ https://issues.apache.org/jira/browse/HDDS-38?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479890#comment-16479890 ] Hudson commented on HDDS-38: SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14228 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14228/]) HDDS-38. Add SCMNodeStorage map in SCM class to store storage statistics (aengineer: rev 7c485a6701275578cb22392168b2b31726121ceb) * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java * (add) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeStorageStatMap.java * (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMXBean.java * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeStorageStatMap.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/metrics/SCMNodeStat.java > Add SCMNodeStorage map in SCM class to store storage statistics per Datanode > > > Key: HDDS-38 > URL: https://issues.apache.org/jira/browse/HDDS-38 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-38.00.patch, HDDS-38.01.patch, HDDS-38.02.patch > > > Currently , the storage stats per Datanode are maintained inside > scmNodeManager. This will > move the scmNodeStats for storage outside SCMNodeManager to simplify > refactoring. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13556) TestNestedEncryptionZones does not shut down cluster
[ https://issues.apache.org/jira/browse/HDFS-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479882#comment-16479882 ] genericqa commented on HDFS-13556: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 24s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}157m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13556 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924002/HDFS-13556.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 999fe33e218d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 26f1e22 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24248/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCo
[jira] [Updated] (HDDS-38) Add SCMNodeStorage map in SCM class to store storage statistics per Datanode
[ https://issues.apache.org/jira/browse/HDDS-38?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-38: - Resolution: Fixed Status: Resolved (was: Patch Available) [~elek] Thanks for the reviews. [~shashikant] Thank you for the contribution. I have committed the patch to trunk. > Add SCMNodeStorage map in SCM class to store storage statistics per Datanode > > > Key: HDDS-38 > URL: https://issues.apache.org/jira/browse/HDDS-38 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-38.00.patch, HDDS-38.01.patch, HDDS-38.02.patch > > > Currently , the storage stats per Datanode are maintained inside > scmNodeManager. This will > move the scmNodeStats for storage outside SCMNodeManager to simplify > refactoring. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info
[ https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479846#comment-16479846 ] Bharat Viswanadham commented on HDDS-76: Patch LGTM. Few questions: # Will this patch have continuation jira, to use this info in SCM end? # What is the reason for including data dir path and send it to SCM? Is this for the purpose of stats to show in JMX? > Modify SCMStorageReportProto to include the data dir paths as well as the > StorageType info > -- > > Key: HDDS-76 > URL: https://issues.apache.org/jira/browse/HDDS-76 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-76.00.patch, HDDS-76.01.patch > > > Currently, SCMStorageReport contains the storageUUID which are sent across to > SCM for maintaining storage Report info. This Jira aims to include the data > dir paths for actual disks as well as the storage Type info for each volume > on datanode to be sent to SCM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc
[ https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-7: -- Attachment: HDDS-7-HDDS-4.02.patch > Enable kerberos auth for Ozone client in hadoop rpc > > > Key: HDDS-7 > URL: https://issues.apache.org/jira/browse/HDDS-7 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Client, SCM Client >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 0.3.0 > > Attachments: HDDS-4-HDDS-7-poc.patch, HDDS-7-HDDS-4.00.patch, > HDDS-7-HDDS-4.01.patch, HDDS-7-HDDS-4.02.patch > > > Enable kerberos auth for Ozone client in hadoop rpc. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-73) Add acceptance tests for Ozone Shell
[ https://issues.apache.org/jira/browse/HDDS-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479839#comment-16479839 ] Hudson commented on HDDS-73: SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14227 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14227/]) HDDS-73. Add acceptance tests for Ozone Shell. Contributed by Lokesh (aengineer: rev e0367d3b248d47fc95e5df5d772f93663319c3b8) * (edit) hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone.robot * (edit) hadoop-ozone/acceptance-test/src/test/compose/docker-config * (add) hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone-shell.robot > Add acceptance tests for Ozone Shell > > > Key: HDDS-73 > URL: https://issues.apache.org/jira/browse/HDDS-73 > Project: Hadoop Distributed Data Store > Issue Type: Test >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-73.001.patch, HDDS-73.002.patch, HDDS-73.003.patch > > > This Jira aims to add acceptance tests related to http, o3 scheme and various > server port combinations in shell commands. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13399) Make Client field AlignmentContext non-static.
[ https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HDFS-13399: Status: Patch Available (was: In Progress) > Make Client field AlignmentContext non-static. > -- > > Key: HDFS-13399 > URL: https://issues.apache.org/jira/browse/HDFS-13399 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13399-HDFS-12943.000.patch, > HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, > HDFS-13399-HDFS-12943.003.patch, HDFS-13399-HDFS-12943.004.patch, > HDFS-13399-HDFS-12943.005.patch, HDFS-13399-HDFS-12943.006.patch, > HDFS-13399-HDFS-12943.007.patch > > > In HDFS-12977, DFSClient's constructor was altered to make use of a new > static method in Client that allowed one to set an AlignmentContext. This > work is to remove that static field and make each DFSClient pass its > AlignmentContext down to the proxy Call level. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
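The change described in the issue swaps a JVM-wide static hook for per-instance state that travels with each call. The sketch below is a hypothetical simplification of that pattern; the class, field, and method names are illustrative and not the actual Client/DFSClient code:

{code:java}
// Before: a static field shared by every client in the JVM (simplified).
class RpcClientBefore {
  private static AlignmentContext alignmentContext;
  static void setAlignmentContext(AlignmentContext ctx) { alignmentContext = ctx; }
}

// After: each client instance owns its context and hands it to every call.
class RpcClientAfter {
  private final AlignmentContext alignmentContext;

  RpcClientAfter(AlignmentContext ctx) {
    this.alignmentContext = ctx;
  }

  Call newCall(String method) {
    // The context travels with the individual call instead of living in a static.
    return new Call(method, alignmentContext);
  }
}

// Minimal supporting types so the sketch is self-contained.
interface AlignmentContext { }

class Call {
  final String method;
  final AlignmentContext context;
  Call(String method, AlignmentContext context) {
    this.method = method;
    this.context = context;
  }
}
{code}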
[jira] [Updated] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13586: --- Labels: Windows (was: ) > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Labels: Windows > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-73) Add acceptance tests for Ozone Shell
[ https://issues.apache.org/jira/browse/HDDS-73?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-73: - Resolution: Fixed Status: Resolved (was: Patch Available) [~elek] Thanks for the Reviews. [~ljain] Thank you for the contribution. I have committed to trunk > Add acceptance tests for Ozone Shell > > > Key: HDDS-73 > URL: https://issues.apache.org/jira/browse/HDDS-73 > Project: Hadoop Distributed Data Store > Issue Type: Test >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-73.001.patch, HDDS-73.002.patch, HDDS-73.003.patch > > > This Jira aims to add acceptance tests related to http, o3 scheme and various > server port combinations in shell commands. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13554) TestDatanodeRegistration#testForcedRegistration does not shut down cluster
[ https://issues.apache.org/jira/browse/HDFS-13554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479816#comment-16479816 ] Hudson commented on HDFS-13554: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14226 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14226/]) HDFS-13554. TestDatanodeRegistration#testForcedRegistration does not (inigoiri: rev 65476458fa05656010809be632356e4015b59a17) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeRegistration.java > TestDatanodeRegistration#testForcedRegistration does not shut down cluster > -- > > Key: HDFS-13554 > URL: https://issues.apache.org/jira/browse/HDFS-13554 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13554.000.patch > > > On Windows, two tests fail because the cluster did not shutdown in > TestDatanodeRegistration#testForcedRegistration: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.TestDatanodeRegistration{color} > {color:#d04437} [ERROR] Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, > Time elapsed: 166.619 s <<< FAILURE! - in > org.apache.hadoop.hdfs.TestDatanodeRegistration{color} > {color:#d04437} [ERROR] > testRegistrationWithDifferentSoftwareVersionsDuringUpgrade(org.apache.hadoop.hdfs.TestDatanodeRegistration) > Time elapsed: 0.024 s <<< ERROR!{color} > {color:#d04437} java.io.IOException: Could not fully delete > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.TestDatanodeRegistration.testRegistrationWithDifferentSoftwareVersionsDuringUpgrade(TestDatanodeRegistration.java:279){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color} > {color:#d04437} at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color} > {color:#d04437} at > 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color} > {color:#d04437} at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color} > {color:#d04437} at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color} > {color:#d04437} at > org.junit.runners.ParentRunner.run(ParentRunner.java:309){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color} > {color:#d04437} at > org.apache.maven.surefire.b
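The failures above stem from a test that builds a MiniDFSCluster and never releases it, leaving its storage directories locked for the next test on Windows. A minimal sketch of the try/finally shutdown pattern the patch applies (the test body here is only a placeholder):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class ClusterShutdownSketch {
  @Test
  public void testWithProperShutdown() throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      // ... exercise the cluster here ...
    } finally {
      // Always release the cluster so its storage directories can be
      // reused or deleted by the next test; this matters most on Windows,
      // where open handles block directory deletion.
      cluster.shutdown();
    }
  }
}
{code}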
[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows
[ https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479810#comment-16479810 ] genericqa commented on HDFS-13563: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}187m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13563 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923973/HDFS-13563.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 78fdcd3ab17b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a2cdffb | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24246/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24246/testReport/ | | Max. process+thread count | 2806 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/
[jira] [Commented] (HDFS-13339) Volume reference can't be released and leads to deadlock when DataXceiver does a check volume
[ https://issues.apache.org/jira/browse/HDFS-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479813#comment-16479813 ] genericqa commented on HDFS-13339: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 13s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}188m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestBlocksScheduledCounter | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.datanode.checker.TestDatasetVolumeChecker | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13339 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923974/HDFS-13339.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3a21193b660a 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a2cdffb | | maven | version: Apache Maven 3.3.9 | | Defa
[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows
[ https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479790#comment-16479790 ] Íñigo Goiri commented on HDFS-13560: In the last Yetus run, the [unit tests|https://builds.apache.org/job/PreCommit-HDFS-Build/24245/testReport/org.apache.hadoop.hdfs.server.datanode.fsdataset.impl/] now pass. [~huanbang1993] does this still work for Windows? > Insufficient system resources exist to complete the requested service for > some tests on Windows > --- > > Key: HDFS-13560 > URL: https://issues.apache.org/jira/browse/HDFS-13560 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13560.000.patch, HDFS-13560.001.patch, > HDFS-13560.002.patch, HDFS-13560.003.patch, HDFS-13560.004.patch > > > On Windows, there are 30 tests in HDFS component giving error like the > following: > {color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, > Time elapsed: 50.149 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles{color} > {color:#d04437} [ERROR] > testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles) > Time elapsed: 16.513 s <<< ERROR!{color} > {color:#d04437} 1450: Insufficient system resources exist to complete the > requested service.{color} > {color:#d04437}at > org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native > Method){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:495){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415){color} > {color:#d04437} at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color} > {color:#d04437} at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color} > {color:#d04437} at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color} > {color:#33}The involved tests are{color} > {code:java} > TestLazyPersistFiles,TestLazyPersistPolicy,TestLazyPersistReplicaRecovery,TestLazyPersistLockedMemory#testWritePipelineFailure,TestLazyPersistLockedMemory#testShortBlockFinalized,TestLazyPersistReplicaPlacement#testRamDiskNotChosenByDefault,TestLazyPersistReplicaPlacement#testFallbackToDisk,TestLazyPersistReplicaPlacement#testPlacementOnSizeLimitedRamDisk,TestLazyPersistReplicaPlacement#testPlacementOnRamDisk,TestLazyWriter#testDfsUsageCreateDelete,TestLazyWriter#testDeleteAfterPersist,TestLazyWriter#testDeleteBeforePersist,TestLazyWriter#testLazyPersistBlocksAreSaved,TestDirectoryScanner#testDeleteBlockOnTransientStorage,TestDirectoryScanner#testRetainBlockOnPersistentStorage,
[jira] [Commented] (HDFS-13586) Fsync fails on directories on Windows
[ https://issues.apache.org/jira/browse/HDFS-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479789#comment-16479789 ] Wei-Chiu Chuang commented on HDFS-13586: IIRC I believe the behavior is platform dependent, so it makes sense to do it differently for Windows. > Fsync fails on directories on Windows > - > > Key: HDFS-13586 > URL: https://issues.apache.org/jira/browse/HDFS-13586 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs > Environment: JDK 1.8.0_144 > Hadoop 2.9+ >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > > HDFS-11915 added a fsync call on DataNode's rbw directory on the first > hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory > using FileChannel.open(READ). This call fails on Windows for any directory > and throws an AccessDeniedException, see discussion here: > [http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. > > {code:java} > java.io.IOException: Failed to sync > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:160) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:430) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:873) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.nio.file.AccessDeniedException: > E:\workspace\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1\current\BP-571178992-10.123.152.148-1526591934139\current\rbw > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > at > sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:405) > at > org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.fsyncDirectory(DatanodeUtil.java:158) > ... 8 more > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
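Handling this differently per platform could look roughly like the sketch below: skip the FileChannel-based directory sync when running on Windows, where opening a directory is not permitted. This is only an illustration of the idea being discussed, not the committed fix; it assumes nothing beyond plain NIO and the existing Shell.WINDOWS constant.

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

import org.apache.hadoop.util.Shell;

public final class DirSyncSketch {
  private DirSyncSketch() { }

  /** Sync directory metadata where the platform supports it; no-op on Windows. */
  public static void syncDirectory(File dir) throws IOException {
    if (Shell.WINDOWS) {
      // FileChannel.open(READ) on a directory throws AccessDeniedException
      // on Windows, so there is nothing portable to force here.
      return;
    }
    try (FileChannel channel =
        FileChannel.open(dir.toPath(), StandardOpenOption.READ)) {
      channel.force(true);
    }
  }
}
{code}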
[jira] [Updated] (HDFS-13554) TestDatanodeRegistration#testForcedRegistration does not shut down cluster
[ https://issues.apache.org/jira/browse/HDFS-13554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13554: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.3 2.9.2 3.1.1 3.2.0 2.10.0 Status: Resolved (was: Patch Available) Thanks [~huanbang1993] for the fix and [~giovanni.fumarola] for the review. Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. > TestDatanodeRegistration#testForcedRegistration does not shut down cluster > -- > > Key: HDFS-13554 > URL: https://issues.apache.org/jira/browse/HDFS-13554 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13554.000.patch > > > On Windows, two tests fail because the cluster did not shutdown in > TestDatanodeRegistration#testForcedRegistration: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.TestDatanodeRegistration{color} > {color:#d04437} [ERROR] Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, > Time elapsed: 166.619 s <<< FAILURE! - in > org.apache.hadoop.hdfs.TestDatanodeRegistration{color} > {color:#d04437} [ERROR] > testRegistrationWithDifferentSoftwareVersionsDuringUpgrade(org.apache.hadoop.hdfs.TestDatanodeRegistration) > Time elapsed: 0.024 s <<< ERROR!{color} > {color:#d04437} java.io.IOException: Could not fully delete > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.TestDatanodeRegistration.testRegistrationWithDifferentSoftwareVersionsDuringUpgrade(TestDatanodeRegistration.java:279){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color} > {color:#d04437} at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color} > {color:#d04437} at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color} 
> {color:#d04437} at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color} > {color:#d04437} at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color} > {color:#d04437} at > org.junit.runners.ParentRunner.run(ParentRunner.java:309){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBoot
[jira] [Commented] (HDFS-13554) TestDatanodeRegistration#testForcedRegistration does not shut down cluster
[ https://issues.apache.org/jira/browse/HDFS-13554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479780#comment-16479780 ] Íñigo Goiri commented on HDFS-13554: bq. I followed other tests in TestDatanodeRegistration.java to minimize the change. I would prefer not to break the consistency in this file. True. Fair enough, the rest of the tests do exactly that. Consistency within classes is more important than across. Ultimately this should move into the After pattern and to use a random path for the MiniDFSCluster but let's focus on fixing what's here and not refactor everything. Failed unit tests are unrelated. +1 on [^HDFS-13554.000.patch]. Given that this fix is straightforward (surrounding with try/finally), I think is safe to commit. > TestDatanodeRegistration#testForcedRegistration does not shut down cluster > -- > > Key: HDFS-13554 > URL: https://issues.apache.org/jira/browse/HDFS-13554 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Anbang Hu >Assignee: Anbang Hu >Priority: Major > Labels: Windows > Attachments: HDFS-13554.000.patch > > > On Windows, two tests fail because the cluster did not shutdown in > TestDatanodeRegistration#testForcedRegistration: > {color:#d04437}[INFO] Running > org.apache.hadoop.hdfs.TestDatanodeRegistration{color} > {color:#d04437} [ERROR] Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, > Time elapsed: 166.619 s <<< FAILURE! - in > org.apache.hadoop.hdfs.TestDatanodeRegistration{color} > {color:#d04437} [ERROR] > testRegistrationWithDifferentSoftwareVersionsDuringUpgrade(org.apache.hadoop.hdfs.TestDatanodeRegistration) > Time elapsed: 0.024 s <<< ERROR!{color} > {color:#d04437} java.io.IOException: Could not fully delete > E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514){color} > {color:#d04437} at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color} > {color:#d04437} at > org.apache.hadoop.hdfs.TestDatanodeRegistration.testRegistrationWithDifferentSoftwareVersionsDuringUpgrade(TestDatanodeRegistration.java:279){color} > {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method){color} > {color:#d04437} at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color} > {color:#d04437} at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color} > {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color} > {color:#d04437} at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color} > {color:#d04437} at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color} > {color:#d04437} at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color} > {color:#d04437} at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color} > {color:#d04437} at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color} > {color:#d04437} at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color} > {color:#d04437} at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color} > {color:#d04437} at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color} > {color:#d04437} at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color} > {color:#d04437} at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color} > {color:#d04437} at > org.junit.runners.ParentRunner.run(ParentRunner.java:309){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color} > {color:#d04437} at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color} > {color:#d04437} at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){
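The longer-term cleanup mentioned in the comment (an After-style teardown plus a per-test cluster path) could look roughly like this. The base-directory handling is an assumption made for illustration: it sets MiniDFSCluster.HDFS_MINIDFS_BASEDIR to a uniquely named temp directory rather than using any specific test utility.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class AfterPatternSketch {
  private MiniDFSCluster cluster;

  @Before
  public void setUp() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Give each run its own storage root so stale directories from a
    // previous (possibly crashed) test cannot block MiniDFSCluster startup.
    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
        System.getProperty("java.io.tmpdir") + "/minidfs-" + System.nanoTime());
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    cluster.waitActive();
  }

  @After
  public void tearDown() {
    // Runs even when a test fails, so the cluster is always released.
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }

  @Test
  public void testSomethingAgainstTheCluster() throws Exception {
    // ... test body uses this.cluster ...
  }
}
{code}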
[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica
[ https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HDFS-13448: --- Status: Patch Available (was: Open) > HDFS Block Placement - Ignore Locality for First Block Replica > -- > > Key: HDFS-13448 > URL: https://issues.apache.org/jira/browse/HDFS-13448 > Project: Hadoop HDFS > Issue Type: New Feature > Components: block placement, hdfs-client >Affects Versions: 3.0.1, 2.9.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, > HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, > HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch > > > According to the HDFS Block Place Rules: > {quote} > /** > * The replica placement strategy is that if the writer is on a datanode, > * the 1st replica is placed on the local machine, > * otherwise a random datanode. The 2nd replica is placed on a datanode > * that is on a different rack. The 3rd replica is placed on a datanode > * which is on a different node of the rack as the second replica. > */ > {quote} > However, there is a hint for the hdfs-client that allows the block placement > request to not put a block replica on the local datanode _where 'local' means > the same host as the client is being run on._ > {quote} > /** >* Advise that a block replica NOT be written to the local DataNode where >* 'local' means the same host as the client is being run on. >* >* @see CreateFlag#NO_LOCAL_WRITE >*/ > {quote} > I propose that we add a new flag that allows the hdfs-client to request that > the first block replica be placed on a random DataNode in the cluster. The > subsequent block replicas should follow the normal block placement rules. > The issue is that when the {{NO_LOCAL_WRITE}} is enabled, the first block > replica is not placed on the local node, but it is still placed on the local > rack. Where this comes into play is where you have, for example, a flume > agent that is loading data into HDFS. > If the Flume agent is running on a DataNode, then by default, the DataNode > local to the Flume agent will always get the first block replica and this > leads to un-even block placements, with the local node always filling up > faster than any other node in the cluster. > Modifying this example, if the DataNode is removed from the host where the > Flume agent is running, or this {{NO_LOCAL_WRITE}} is enabled by Flume, then > the default block placement policy will still prefer the local rack. This > remedies the situation only so far as now the first block replica will always > be distributed to a DataNode on the local rack. > This new flag would allow a single Flume agent to distribute the blocks > randomly, evenly, over the entire cluster instead of hot-spotting the local > node or the local rack. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
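For context, the existing hint is passed as a CreateFlag at file-creation time. The sketch below shows how a client such as a Flume sink might request NO_LOCAL_WRITE today through the public FileSystem API; the new flag proposed in this Jira would be added to the same EnumSet once it exists. Paths, buffer size, replication, and block size here are illustrative values only.

{code:java}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class NoLocalWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/flume/events.log");  // illustrative path

    // NO_LOCAL_WRITE only keeps the first replica off the local DataNode;
    // per the description above, the first replica still lands on the local
    // rack. The flag proposed in this Jira would be added to this EnumSet in
    // the same way to spread the first replica across the whole cluster.
    EnumSet<CreateFlag> flags =
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE, CreateFlag.NO_LOCAL_WRITE);

    try (FSDataOutputStream out = fs.create(file,
        FsPermission.getFileDefault(), flags,
        4096 /* buffer */, (short) 3 /* replication */,
        128L * 1024 * 1024 /* block size */, null /* progress */)) {
      out.writeBytes("example payload\n");
    }
  }
}
{code}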