[ https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983619#comment-16983619 ]

Yiqun Lin edited comment on HDFS-15019 at 11/27/19 3:20 PM:
------------------------------------------------------------

We can put the common settings in the @Before method and leave the test-specific 
settings in each test method. Also, {{io.bytes.per.checksum}} is a deprecated 
key; use {{HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY}} instead.

{code}
  @Before
  public void setUp() {
    cluster = null;
    conf = new HdfsConfiguration();
    conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
    conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
        1000);

    conf.setLong(
        DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 100);
    // We'll be using a 512-byte block size just for tests,
    // so make sure the checksum size matches it too.
    conf.setInt(HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, 512);
  }
{code}
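With the common keys applied in setUp(), each test method then only carries its test-specific configuration. A hypothetical sketch of what a test body could look like after the refactoring (the method name, block-size key, and datanode count here are illustrative, not taken from the patch):

{code}
  @Test
  public void testDeadNodeDetectionInDFSInputStream() throws Exception {
    // Only test-specific settings live here; the dead node detection
    // keys are already applied by the shared setUp() method.
    conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 512);
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    cluster.waitActive();
    ...
  }
{code}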

It would be better to add an additional check on the dfsClient obtained in 
{{testDeadNodeDetectionInMultipleDFSInputStream}}: the dfsClient obtained from 
din1 and din2 should be the same instance.
{code}
      assertSame(dfsClient1, dfsClient2);  // <=== both streams should share one DFSClient
      assertEquals(1, dfsClient1.getDeadNodes(din1).size());
      assertEquals(1, dfsClient2.getDeadNodes(din2).size());
{code}

cc [~leosun08]



> Refactor the unit test of TestDeadNodeDetection 
> ------------------------------------------------
>
>                 Key: HDFS-15019
>                 URL: https://issues.apache.org/jira/browse/HDFS-15019
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Yiqun Lin
>            Assignee: Lisheng Sun
>            Priority: Minor
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
