[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection
[ https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984183#comment-16984183 ]

Hudson commented on HDFS-15019:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17707 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17707/])
HDFS-15019. Refactor the unit test of TestDeadNodeDetection. Contributed (yqlin: rev c3659f8f94bef7cfad0c3fb04391a7ffd4221679)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java

> Refactor the unit test of TestDeadNodeDetection
> -----------------------------------------------
>
>                 Key: HDFS-15019
>                 URL: https://issues.apache.org/jira/browse/HDFS-15019
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Yiqun Lin
>            Assignee: Lisheng Sun
>            Priority: Minor
>             Fix For: 3.3.0
>
>         Attachments: HDFS-15019.001.patch
>
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}; we can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the DFSInputStream is passed incorrectly in an assert operation:
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
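For context, the corrected unwrapping in {{testDeadNodeDetectionInMultipleDFSInputStream}} would look roughly like the sketch below. Only the two cast lines come from the issue description; the surrounding open calls and the file path are illustrative assumptions, not the exact test body:

{code}
// Open the same file twice; each DFSInputStream should see the shared dead-node state.
FSDataInputStream in1 = fs.open(filePath);
FSDataInputStream in2 = fs.open(filePath);

DFSInputStream din1 = (DFSInputStream) in1.getWrappedStream();
// Fixed: unwrap the second stream, not in1 a second time. With the old code,
// din2 aliased din1, so any assertion comparing the two streams passed vacuously.
DFSInputStream din2 = (DFSInputStream) in2.getWrappedStream();
// ... assertions on din1's and din2's dead-node views go here.
{code}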
[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection
[ https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984175#comment-16984175 ]

Yiqun Lin commented on HDFS-15019:
----------------------------------

LGTM, +1.
[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection
[ https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984152#comment-16984152 ]

Hadoop QA commented on HDFS-15019:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 55s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 36s | trunk passed |
| +1 | compile | 1m 5s | trunk passed |
| +1 | checkstyle | 0m 43s | trunk passed |
| +1 | mvnsite | 1m 9s | trunk passed |
| +1 | shadedclient | 14m 38s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 23s | trunk passed |
| +1 | javadoc | 1m 14s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 2s | the patch passed |
| +1 | compile | 0m 56s | the patch passed |
| +1 | javac | 0m 56s | the patch passed |
| -0 | checkstyle | 0m 37s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | mvnsite | 1m 3s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 39s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 21s | the patch passed |
| +1 | javadoc | 1m 11s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 103m 45s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 168m 31s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15019 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12987014/HDFS-15019.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f33d23284ca7 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82ad9b5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/28417/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/28417/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://
[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection
[ https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984089#comment-16984089 ]

Lisheng Sun commented on HDFS-15019:
------------------------------------

Thanks [~linyiqun] for your review. The v001 patch refactors the unit test of TestDeadNodeDetection.
[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection
[ https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983619#comment-16983619 ]

Yiqun Lin commented on HDFS-15019:
----------------------------------

We can put the common settings in the @Before method and leave the test-specific settings in each test method. Also, {{io.bytes.per.checksum}} is a deprecated key; use {{HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY}} instead.
{code}
  @Before
  public void setUp() {
    cluster = null;
    conf = new HdfsConfiguration();
    conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
    conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
        1000);
    conf.setLong(
        DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 100);
    // We'll be using a 512 bytes block size just for tests
    // so making sure the checksum bytes match it too.
    conf.setInt(HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, 512);
  }
{code}
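With the shared configuration in @Before, each test method would apply only its own overrides before starting the cluster, and a matching @After would handle cleanup. A hypothetical sketch of the resulting shape (the test body and any extra conf keys are illustrative, not taken from the patch):

{code}
  @Test
  public void testDeadNodeDetectionInDFSInputStream() throws Exception {
    // Only test-specific overrides go here; the shared keys were set in setUp().
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    cluster.waitActive();
    // ... test-specific read/verify logic against the cluster ...
  }

  @After
  public void tearDown() {
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }
{code}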