[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16850018#comment-16850018
 ] 

Eric Yang commented on HDDS-1458:
---------------------------------

{quote}I didn't say that os.getcwd should be removed. I asked to keep the more 
generic solution (which was used until now) as the default third option. 
Nothing more.{quote}

os.path.realpath is not a more generic solution.  It finds the script location, 
but the script can be moved around within the tarball, which causes the relative 
location of the compose files to change.  MAVEN_TEST, OZONE_HOME, and os.getcwd() 
all make the same logical assertion to discover the base directory of the Ozone 
project, where the docker compose files are placed in a subdirectory.  
os.getcwd() is more in line with the documentation, which says the script is to 
be executed from OZONE_HOME.  Environment-variable-assisted discovery of 
OZONE_HOME is icing on the cake for downstream developers who create RPM or 
docker images and set up environment variables in /etc/profile.d to point to 
OZONE_HOME; the scripts can then be moved to /usr/bin or a destination of their 
choice.  The scripts also remain flexible to be relocated within the tarball 
later on, by changing the python command execution path in the documentation.
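To make the discovery order concrete, here is a minimal Python sketch of the logic described above: check the MAVEN_TEST and OZONE_HOME environment variables first, then fall back to os.getcwd().  The function name and the compose subdirectory path are illustrative, not the actual script code.

{code}
import os

def find_ozone_home():
    """Resolve the Ozone base directory (illustrative sketch).

    Prefer the MAVEN_TEST and OZONE_HOME environment variables; fall back
    to the current working directory, which matches the documented
    expectation that the script is executed from OZONE_HOME.
    """
    for env_var in ("MAVEN_TEST", "OZONE_HOME"):
        candidate = os.environ.get(env_var)
        if candidate and os.path.isdir(candidate):
            return candidate
    return os.getcwd()

# Compose files live in a subdirectory of the base directory; the exact
# path below is only an example.
compose_dir = os.path.join(find_ozone_home(), "compose", "ozone")
{code}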

{quote}We also agreed that all of the tests should be supported in both 
ways.{quote}

I do not fully agree with this, personally.  As I said earlier, keeping the 
existing tests in the release tarball is an unfortunate choice that we have to 
live with.  I don't think steering more integration test artifacts into the 
release tarball is the right direction.  It only creates a messy situation and 
over-claims that integration test code can serve a different purpose, as a 
check-engine indicator for an actual distributed cluster.  This will only create 
frustration for Ozone admins who want to depend on the Ozone code, but can't.

{quote}I think it's a good consensus, let's modify the patch according to this. 
Can you please keep the docker-compose files in the dist in the original 
form?{quote}

The files were removed from the tarball based on your feedback.  They are 
restored again, per your feedback, in patch 14.  From my point of view, the test 
can work with compose files from either ozone or ozoneblockade.  I thought you 
wanted to remove the duplication, but I could be wrong.  What was the reason you 
suggested removing them in the first place, and what is the rationale for 
restoring them?

> Create a maven profile to run fault injection tests
> ---------------------------------------------------
>
>                 Key: HDDS-1458
>                 URL: https://issues.apache.org/jira/browse/HDDS-1458
>             Project: Hadoop Distributed Data Store
>          Issue Type: Test
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.4.1
>
>         Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch, HDDS-1458.013.patch
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects.
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}


