[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848659#comment-16848659
 ] 

Elek, Marton commented on HDDS-1458:
------------------------------------

Thank you very much for the update, [~eyang].

1. 

bq. In maven environment, it will try to use docker compose file in 
${basedir}/target because it is localized with additional properties like 
version number. The script path does not tell us where localized version of 
docker compose file is relative to the script. This is the reason that the 
script computes the current working directory for maven environment.

As I wrote earlier, using the current working directory is not as flexible as the 
earlier solution. For example, you can do:

{code}
python -m pytest -s tests/blockade 
{code}

But you can't do:

{code}
cd tests/blockade
python -m pytest -s .
{code}

Earlier this was supported, and nothing special is needed to keep it working: 
just keep the old method and use
 
{code}
os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
{code}

instead of 

{code}
os.getcwd()
{code}

Can you please do that?
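A minimal sketch of what I mean, assuming the tests live under tests/blockade with the compose files next to them (the helper name and the "compose" subdirectory are only illustrative):

{code}
import os

def find_compose_dir():
    # Resolve paths relative to this test file instead of os.getcwd(),
    # so pytest can be started from the project root or from
    # tests/blockade itself.
    test_root = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
    return os.path.join(test_root, "compose")
{code}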

2. Thank you very much for explaining the roles of MAVEN_TEST and OZONE_HOME. My 
question was about hadoop-ozone/network-tests/pom.xml:

{code}
                  <environmentVariables>
                    <OZONE_HOME>
                      ${ozone.home}
                    </OZONE_HOME>
                    <MAVEN_TEST>
                      ${project.build.directory}
                    </MAVEN_TEST>
                  </environmentVariables>
{code}

You don't need to set *both*, as setting MAVEN_TEST is enough. I would suggest 
removing one of the environment variables.
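
For example, keeping only MAVEN_TEST, the block could be reduced to something like:

{code}
                  <environmentVariables>
                    <MAVEN_TEST>
                      ${project.build.directory}
                    </MAVEN_TEST>
                  </environmentVariables>
{code}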

3. BTW, I think it would be better to use a single environment variable: 
FAULT_INJECTION_COMPOSE_DIR. With OZONE_HOME or MAVEN_TEST we have no context 
for how the variable is used (in fact it is only used to locate the compose 
files). A more meaningful env variable name makes the goal of the specific 
environment variable easier to understand.
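
Just to illustrate the idea (the fallback path is only an assumption about the layout):

{code}
import os

def get_compose_dir():
    # Prefer the dedicated, self-explaining variable (which the maven
    # profile could set instead of MAVEN_TEST); fall back to the compose
    # dir shipped next to the test sources.
    env_dir = os.environ.get("FAULT_INJECTION_COMPOSE_DIR")
    if env_dir:
        return env_dir
    return os.path.join(os.path.dirname(os.path.realpath(__file__)), "compose")
{code}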

4. The content of the docker-compose files in 
hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/tests/compose has changed. As we 
agreed multiple times, we should keep the *current* content of the 
docker-compose files. Please put the volumes back into the docker-compose files 
and please use the apache/hadoop-runner image.

5. hadoop-ozone/dist depends on network-tests, as it copies files from it. This 
is not a hard dependency as of now, since we copy the files directly from the 
src folder (a build is not required), but I think it is clearer to add a 
provided dependency to hadoop-ozone/dist to make the dependency explicit. (As I 
remember, you also suggested using maven instead of a direct copy from the 
dist-layout stitching.)
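
Something like this in the hadoop-ozone/dist pom (the artifactId here is just a guess based on the module name, please adjust it to the real one):

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-ozone-network-tests</artifactId>
  <version>${project.version}</version>
  <scope>provided</scope>
</dependency>
{code}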

> Create a maven profile to run fault injection tests
> ---------------------------------------------------
>
>                 Key: HDDS-1458
>                 URL: https://issues.apache.org/jira/browse/HDDS-1458
>             Project: Hadoop Distributed Data Store
>          Issue Type: Test
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.4.1
>
>         Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch, HDDS-1458.013.patch
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
