[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-08-28 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918264#comment-16918264
 ] 

Eric Yang edited comment on HDDS-1554 at 8/29/19 4:04 AM:
--

[~arp] The tests are written to run when the "it" profile is specified:

{code}
mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade 
-Pit,docker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT{code}


was (Author: eyang):
[~arp] The tests are written to run when the "it" profile is specified:

{code}
mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade 
-P,itdocker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT{code}

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch, HDDS-1554.014.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume






[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-08-28 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918037#comment-16918037
 ] 

Arpit Agarwal edited comment on HDDS-1554 at 8/28/19 7:44 PM:
--

Thanks for the updated patch [~eyang]. It looks like the tests are still 
getting skipped.
{code:java}
$ mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade -am -pl 
:hadoop-ozone-dist -Pdocker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT



$ mvn test -Pit -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT

...

[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-ozone-read-write-tests ---
[INFO] Compiling 2 source files to 
/Users/agarwal/src/hadoop/hadoop-ozone/fault-injection-test/disk-tests/read-write-test/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ 
hadoop-ozone-read-write-tests ---
[INFO] Tests are skipped.
 {code}


was (Author: arpitagarwal):
Thanks for the updated patch [~eyang]. It looks like the tests are still 
getting skipped.
{code:java}
$ mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade -am -pl 
:hadoop-ozone-dist -Pdocker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT$ 
mvn test -Pit -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT



$ mvn test -Pit -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT

...

[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-ozone-read-write-tests ---
[INFO] Compiling 2 source files to 
/Users/agarwal/src/hadoop/hadoop-ozone/fault-injection-test/disk-tests/read-write-test/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ 
hadoop-ozone-read-write-tests ---
[INFO] Tests are skipped.
 {code}

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch, HDDS-1554.014.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume






[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-08-28 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918037#comment-16918037
 ] 

Arpit Agarwal edited comment on HDDS-1554 at 8/28/19 7:44 PM:
--

Thanks for the updated patch [~eyang]. It looks like the tests are still 
getting skipped.
{code:java}
$ mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade -am -pl 
:hadoop-ozone-dist -Pdocker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT



$ cd hadoop-ozone/fault-injection-test/disk-tests/
$ mvn test -Pit -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT

...

[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-ozone-read-write-tests ---
[INFO] Compiling 2 source files to 
/Users/agarwal/src/hadoop/hadoop-ozone/fault-injection-test/disk-tests/read-write-test/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ 
hadoop-ozone-read-write-tests ---
[INFO] Tests are skipped.
 {code}


was (Author: arpitagarwal):
Thanks for the updated patch [~eyang]. It looks like the tests are still 
getting skipped.
{code:java}
$ mvn -T 1C clean install -DskipTests=true -Pdist -Dtar -DskipShade -am -pl 
:hadoop-ozone-dist -Pdocker-build -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT



$ mvn test -Pit -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT

...

[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-ozone-read-write-tests ---
[INFO] Compiling 2 source files to 
/Users/agarwal/src/hadoop/hadoop-ozone/fault-injection-test/disk-tests/read-write-test/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ 
hadoop-ozone-read-write-tests ---
[INFO] Tests are skipped.
 {code}

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch, HDDS-1554.014.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume






[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-08-08 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903443#comment-16903443
 ] 

Arpit Agarwal edited comment on HDDS-1554 at 8/9/19 4:18 AM:
-

A few comments on the test case implementations.
 # {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code can 
probably be removed, since it's really testing that the cluster is read-only in 
safe mode. We have unit tests for that:
{code:java}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}

 # {{ITDiskReadOnly#testUpload}} - Do we need to wait for safe-mode exit after 
restarting the cluster? Also, I think this test is essentially the same as the 
previous one.
 # {{ITDiskCorruption#addCorruption:72}} - It looks like we have a hard-coded 
path. Should we get it from the configuration instead?
 # {{ITDiskCorruption#testUpload}} - The corruption implementation is a bit of 
a heavy hammer; it replaces the content of all the meta files. Is it possible 
to make it reflect real-world corruption, where only part of a file is 
corrupted? (A sketch follows this list.) Also, we should probably restart the 
cluster after corrupting the RocksDB meta files.
 # {{ITDiskCorruption#testDownload:161}} - Should we just remove the 
assertTrue, since it is a no-op? (A meaningful replacement is also sketched 
after this list.)
{code:java}
  Assert.assertTrue("Download File test passed.", true);
{code}
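
For item 4, a minimal sketch of what partial, real-world-style corruption 
could look like; the class name, path argument, and offsets are hypothetical, 
not taken from the patch:
{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;

/** Hypothetical helper: corrupt only a small range of a file in place. */
public final class PartialCorruption {

  /**
   * Flip {@code count} bytes starting at {@code offset}, leaving the rest of
   * the file intact, to mimic localized on-disk corruption.
   */
  public static void flipBytes(String path, long offset, int count)
      throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
      raf.seek(offset);
      byte[] buf = new byte[count];
      raf.readFully(buf);
      for (int i = 0; i < count; i++) {
        buf[i] = (byte) ~buf[i]; // invert the bits so a change is guaranteed
      }
      raf.seek(offset);
      raf.write(buf);
    }
  }
}
{code}
For item 5, one hedged option is to assert something observable instead, e.g. 
that the downloaded bytes match the uploaded bytes (a fragment; the file paths 
are illustrative and JUnit's Assert is assumed to be imported):
{code:java}
byte[] expected = java.nio.file.Files.readAllBytes(
    java.nio.file.Paths.get("/tmp/upload.bin"));
byte[] actual = java.nio.file.Files.readAllBytes(
    java.nio.file.Paths.get("/tmp/download.bin"));
Assert.assertArrayEquals("Downloaded content differs from uploaded content",
    expected, actual);
{code}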

 


was (Author: arpitagarwal):
A few comments on the test case implementations.
# {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code can 
probably be removed, since it's really testing that the cluster is read-only in 
safe mode. We have unit tests for that:
{code}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}
# {{ITDiskReadOnly#testUpload}} - Do we need to wait for safe-mode exit after 
restarting the cluster? Also, I think this test is essentially the same as the 
previous one.
# {{ITDiskCorruption#addCorruption:72}} - It looks like we have a hard-coded 
path. Should we get it from the configuration instead?
# {{ITDiskCorruption#testUpload}} - The corruption implementation is a bit of 
a heavy hammer; it replaces the content of all the meta files. Is it possible 
to make it reflect real-world corruption, where only part of a file is 
corrupted? Also, we should probably restart the cluster after corrupting the 
RocksDB meta files.
# {{ITDiskCorruption#testDownload:161}} - Should we just remove the 
assertTrue, since it is a no-op?
{code}
  Assert.assertTrue("Download File test passed.", true);
{code}

Still reviewing the rest.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume






[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-08-08 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903443#comment-16903443
 ] 

Arpit Agarwal edited comment on HDDS-1554 at 8/9/19 4:17 AM:
-

A few comments on the test case implementations.
# {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code can 
probably be removed, since it's really testing that the cluster is read-only in 
safe mode. We have unit tests for that:
{code}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}
# {{ITDiskReadOnly#testUpload}} - Do we need to wait for safe-mode exit after 
restarting the cluster? Also, I think this test is essentially the same as the 
previous one.
# {{ITDiskCorruption#addCorruption:72}} - It looks like we have a hard-coded 
path. Should we get it from the configuration instead?
# {{ITDiskCorruption#testUpload}} - The corruption implementation is a bit of 
a heavy hammer; it replaces the content of all the meta files. Is it possible 
to make it reflect real-world corruption, where only part of a file is 
corrupted? Also, we should probably restart the cluster after corrupting the 
RocksDB meta files.
# {{ITDiskCorruption#testDownload:161}} - Should we just remove the 
assertTrue, since it is a no-op?
{code}
  Assert.assertTrue("Download File test passed.", true);
{code}

Still reviewing the rest.


was (Author: arpitagarwal):
Looking at the test case implementations:
# {{ITDiskReadOnly#testReadOnlyDiskStartup}} - The following block of code can 
be removed, since it's really testing that the cluster is read-only in safe 
mode. We have unit tests for that:
{code}
try {
  createVolumeAndBucket();
} catch (Exception e) {
  LOG.info("Bucket creation failed for read-only disk: ", e);
  Assert.assertTrue("Cluster is still in safe mode.", safeMode);
}
{code}
# {{ITDiskReadOnly#testUpload}} - Do we need to wait for safe-mode exit after 
restarting the cluster? Also, I think this test is essentially the same as the 
previous one. Once we have ensured that a read-only disk forces us to remain 
in safe mode, the rest of the checks should be covered by safe-mode unit 
tests.

Still reviewing the rest.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume






[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-07-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877340#comment-16877340
 ] 

Eric Yang edited comment on HDDS-1554 at 7/2/19 10:41 PM:
--

Patch 10 is rebased to the current trunk.  The current usage command is:

{code}
mvn clean verify -Pit,docker-build
{code}

It does not work without the docker-build profile because the default docker 
image, apache/ozone:0.5.0-SNAPSHOT, does not exist.  The user can force a 
build of apache/ozone:0.5.0-SNAPSHOT with:

{code}
mvn clean verify -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT -Pit,docker-build
{code}

[~elek] [~arp] I was unsuccessful in advocating for a better default docker 
image name for the docker-build profile in HDDS-1667.  This is the reason that 
the docker-build profile needs to be passed even when the user is not building 
a docker image.

Patch 10 is meant to discuss whether we are open to eliminating the need to 
pass docker-build by defaulting the docker.image name to 
apache/ozone:${project.version}, because a snapshot is most likely a locally 
built image.  In my view, there is no point in making a further distinction 
between the user and apache docker image name prefixes when the docker version 
tag already makes that distinction.  I am not sure if you agree on this point.

The current test cases are the same as in the issue description, with the 
exception that the read-only test does not fully initialize metadata.  I will 
update the read-only test to ensure metadata initialization is done before 
marking the volume as read-only.


was (Author: eyang):
Patch 10 is rebased to the current trunk.  The current usage command is:

{code}
mvn clean verify -Pit,docker-build
{code}

It does not work without the docker-build profile because the default docker 
image, apache/ozone:0.5.0-SNAPSHOT, does not exist.  The user can force a 
build of apache/pzone:0.5.0-SNAPSHOT with:

{code}
mvn clean verify -Ddocker.image=apache/ozone:0.5.0-SNAPSHOT -Pit,docker-build
{code}

[~elek] [~arp] I was unsuccessful in advocating for a better default docker 
image name for the docker-build profile in HDDS-1667.  This is the reason that 
the docker-build profile needs to be passed even when the user is not building 
a docker image.

Patch 10 is meant to discuss whether we are open to eliminating the need to 
pass docker-build by defaulting the docker.image name to 
apache/ozone:${project.version}, because a snapshot is most likely a locally 
built image.  In my view, there is no point in making a further distinction 
between the user and apache docker image name prefixes when the docker version 
tag already makes that distinction.  I am not sure if you agree on this point.

The current test cases are the same as in the issue description, with the 
exception that the read-only test does not fully initialize metadata.  I will 
update the read-only test to ensure metadata initialization is done before 
marking the volume as read-only.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume






[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-06-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16855940#comment-16855940
 ] 

Eric Yang edited comment on HDDS-1554 at 6/4/19 5:40 PM:
-

{quote}Based on my experience it can be executed in jenkins without any 
problem. I think it works very well without robot test plugin. It's enough to 
run robot tests in a dind image. Isn't it?{quote}

No, it would be very difficult to look at the console output to determine 
which test case has failed, because FAIL is a common word in the output.  
Color coding in the test summary report is very useful for identifying the 
failed test case at a single glance.  The Jenkins Robot Framework plugin also 
helps to organize build numbers and generated reports.

{quote}Can you please explain how the junit test will do it if the backend runs 
in a separated container?{quote}

JUnit tests can be written to interact over RPC or HTTP to retrieve 
information from a backend that runs in a separate container.  Exception 
wrapping can handle the server-side stack trace to provide a seamless 
experience without additional coding.
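
As a hedged illustration (the class name, endpoint, and port here are 
assumptions for this sketch, not taken from the discussion), such a test 
could look like:
{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Assert;
import org.junit.Test;

public class ITBackendHttpSmoke {

  @Test
  public void backendAnswersOverHttp() throws IOException {
    // Port 9876 is assumed to be the SCM web endpoint published by the
    // container; adjust to whatever the docker-compose file exposes.
    URL url = new URL("http://localhost:9876/jmx");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    Assert.assertEquals("Containerized backend should respond over HTTP",
        200, conn.getResponseCode());
    conn.disconnect();
  }
}
{code}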

A seasoned programmer may be interested in remote debugging to capture private 
variable state; the remote debugger parameters can be passed as an environment 
variable in JAVA_OPTS for the docker container:

{code}-Xdebug 
-Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y{code}
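
Note that with suspend=y the JVM waits for a debugger to attach before 
starting, and the container must publish port 8000 for the IDE to connect.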

This allows an IDE to connect to the containerized server for troubleshooting 
while launching JUnit tests that interact with the container.  This 
arrangement can cover most end-to-end white-box testing, with the power of the 
IDE to assist remote debugging.


was (Author: eyang):
{quote}Based on my experience it can be executed in jenkins without any 
problem. I think it works very well without robot test plugin. It's enough to 
run robot tests in a dind image. Isn't it?{quote}

No, it would be very difficult to look at the console output to determine 
which test case has failed, because FAIL is a common word in the output.  
Color coding in the test summary report is very useful for identifying the 
failed test case at a single glance.  The Jenkins Robot Framework plugin also 
helps to organize build numbers and generated reports.

{quote}Can you please explain how the junit test will do it if the backend runs 
in a separated container?{quote}

JUnit tests can be written to interact over RPC or HTTP to retrieve 
information from a backend that runs in a separate container.  Exception 
wrapping can handle the server-side stack trace to provide a seamless 
experience without additional coding.

A seasoned programmer may be interested in remote debugging to capture private 
variable state; the remote debugger parameters can be passed as an environment 
variable in JAVA_OPTS for the docker container:

{code}-Xdebug 
-Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y{code}

This allows IDE to connect to containerized server for troubleshooting, while 
launching Junit tests are interacting with the container.  This arrangement can 
cover most of end to end white box testing with power of IDE to assist remote 
debugging.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1554.001.patch
>
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume






[jira] [Comment Edited] (HDDS-1554) Create disk tests for fault injection test

2019-05-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16847927#comment-16847927
 ] 

Eric Yang edited comment on HDDS-1554 at 5/24/19 10:05 PM:
---

Patch 001 requires HDDS-1458 patch 13 or newer.


was (Author: eyang):
This patch requires HDDS-1458 patch 13 or newer.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1554.001.patch
>
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume


