[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848663#comment-16848663
 ] 

Elek, Marton commented on HDDS-1554:
------------------------------------

Thank you very much for the patch, [~eyang].

1. The new tests are missing from the distribution tar file 
(hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/tests/). We agreed to support 
the execution of all the new tests from the final tar.

2. I am not sure why we need the normal read/write test. All of the smoketests 
and integration tests already cover this scenario.

3. With the Read/Only test: I don't think we need to support read-only disks. 
The only question is whether the right exception is thrown. I think this can 
also be tested from MiniOzoneCluster / real unit tests in a more lightweight way.
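For illustration, the kind of lightweight unit-level check I mean could look 
roughly like this (a minimal sketch using only the plain JDK, with hypothetical 
names -- not the actual Ozone or MiniOzoneCluster API). The point is that the 
test asserts the *precise* exception raised when a write hits a read-only 
handle, without needing containers or remounted disks:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.NonWritableChannelException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ReadOnlyVolumeTest {

  /** Returns true iff writing through a read-only channel fails with the expected exception. */
  static boolean writeRejected(Path file) throws IOException {
    // Open the file read-only, then attempt a write through it.
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
      ch.write(ByteBuffer.wrap("x".getBytes("UTF-8")));
      return false; // write unexpectedly succeeded
    } catch (NonWritableChannelException expected) {
      return true;  // the exact exception the test asserts on
    }
  }

  public static void main(String[] args) throws IOException {
    Path file = Files.createTempFile("ozone-ro", ".dat");
    Files.write(file, "data".getBytes("UTF-8"));
    if (!writeRejected(file)) {
      throw new AssertionError("expected NonWritableChannelException");
    }
    System.out.println("ok");
    Files.deleteIfExists(file);
  }
}
```

A test like this runs in milliseconds and pins down the exception type, which 
is exactly the property the Read/Only scenario is trying to verify.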

4. [~anu] suggested multiple times to do the disk failure injection at the Java 
code level, where more sophisticated tests can be added (e.g. generating corrupt 
reads with low probability using a specific Input/OutputStream). Can you 
please explain the design consideration behind using docker images? Why is it 
better than the suggested solution?
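To make the suggestion concrete: a fault-injecting stream wrapper could be as 
simple as the following sketch (a hypothetical illustration, not code from any 
existing patch). It wraps an InputStream and flips one random bit per byte with 
a configurable probability, so corruption can be injected deterministically 
from a seed:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Random;

/** Wraps a stream and corrupts each byte with a configurable probability. */
public class FaultyInputStream extends FilterInputStream {
  private final double corruptionProbability;
  private final Random random;

  public FaultyInputStream(InputStream in, double corruptionProbability, long seed) {
    super(in);
    this.corruptionProbability = corruptionProbability;
    this.random = new Random(seed); // seeded for reproducible failures
  }

  @Override
  public int read() throws IOException {
    int b = in.read();
    if (b >= 0 && random.nextDouble() < corruptionProbability) {
      b ^= (1 << random.nextInt(8)); // flip one random bit
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    int n = in.read(buf, off, len);
    for (int i = 0; i < n; i++) {
      if (random.nextDouble() < corruptionProbability) {
        buf[off + i] ^= (1 << random.nextInt(8));
      }
    }
    return n;
  }

  public static void main(String[] args) throws IOException {
    byte[] data = "hello ozone".getBytes("UTF-8");

    // probability 0.0: data must pass through unchanged
    byte[] out = new byte[data.length];
    new FaultyInputStream(new ByteArrayInputStream(data), 0.0, 42L)
        .read(out, 0, out.length);
    if (!java.util.Arrays.equals(out, data)) {
      throw new AssertionError("p=0.0 must not corrupt");
    }

    // probability 1.0: every byte must have exactly one bit flipped
    byte[] bad = new byte[data.length];
    new FaultyInputStream(new ByteArrayInputStream(data), 1.0, 42L)
        .read(bad, 0, bad.length);
    for (int i = 0; i < data.length; i++) {
      if (bad[i] == data[i]) {
        throw new AssertionError("p=1.0 must corrupt every byte");
      }
    }
    System.out.println("ok");
  }
}
```

Because corruption is injected per-read rather than per-disk, the same approach 
covers low-probability flaky reads, which a read-only mount or a pre-corrupted 
docker volume cannot easily simulate.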

> Create disk tests for fault injection test
> ------------------------------------------
>
>                 Key: HDDS-1554
>                 URL: https://issues.apache.org/jira/browse/HDDS-1554
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: build
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>         Attachments: HDDS-1554.001.patch
>
>
> The current plan for fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
