[ https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849256#comment-16849256 ]
Eric Yang commented on HDDS-1458:
---------------------------------

[~elek]

1. The purpose of this change is to support the frequent use case in which the script is moved to a location other than its relative position under OZONE_HOME, for example a source code checkout, whose layout differs from the binary package. The patched code supports the most frequent scenarios rather than assuming the script is always fixed relative to the docker compose file in the binary release tarball. Ozone 0.4.0 documented how to run the script in a way the code did not support; therefore, the code has been corrected to match the documentation. I do not agree that removing os.getcwd() is the right solution. The procedure you have in mind relies on undocumented behavior:

{code}
cd $OZONE_HOME/blockade
python -m pytest -s .
{code}

However, this is not what is documented, even though the README is located in the blockade directory. The documented procedure is to run from OZONE_HOME, the top level of the Ozone tarball. Therefore, using getcwd() as OZONE_HOME to locate the compose file reflects the documentation more accurately. If the blockade tests are moved again within the tarball structure for any reason, getcwd() as OZONE_HOME still references the compose file location correctly. The script-path approach is actually less optimal, because any later decision to change the python script location would require code changes to discover the new relative location of the compose file. It is also very common for package maintainers to move scripts into /usr/bin and use an environment variable to locate the rest of the binaries. With the current code, a script location change is less work to maintain, IMHO.

2. {quote}You don't need to set both as setting MAVEN_TEST is enough. I would suggest to remove one of the environment variables.{quote}

I know that. Supporting both is simply a convenience for the reader: MAVEN_TEST takes precedence over OZONE_HOME, which helps new developers.
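For illustration only (this is a sketch, not the actual patch code), the precedence described above — MAVEN_TEST over OZONE_HOME, falling back to the current working directory — could be implemented like this; the function name `resolve_compose_dir` and the `compose` subdirectory are hypothetical:

```python
import os

def resolve_compose_dir():
    """Locate the directory containing the docker compose file.

    Hypothetical sketch: MAVEN_TEST takes precedence over OZONE_HOME;
    if neither is set, fall back to the current working directory,
    which the documented procedure assumes is the tarball top level.
    """
    base = (os.environ.get("MAVEN_TEST")
            or os.environ.get("OZONE_HOME")
            or os.getcwd())
    # "compose" is an illustrative subdirectory name, not the real layout.
    return os.path.join(base, "compose")
```

Under this scheme, a package maintainer who relocates the script to /usr/bin only needs to export one environment variable; no code change is required to rediscover the compose file.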
3. {quote}BTW, I think it would better to use one environment variable: FAULT_INJECTION_COMPOSE_DIR. Using OZONE_HOME or MAVEN_TEST we don't have the context how is it used (in fact just to locate the compose files). Using more meaningful env variable name can help to understand what is the goal of the specific environment variables.{quote}

We don't want to introduce an extra environment variable for a single purpose. When one variable can solve multiple problems, it is worth the time to define it; for example, it is worth defining JAVA_HOME or OZONE_HOME to locate a program's installation. If FAULT_INJECTION_COMPOSE_DIR pointed at the same compose files used by the rest of the release tarball, then defining this unique name would be a waste of time and labor.

4. {quote}The content of the docker-compose files in hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/tests/compose are changed. As we agreed multiple times, we should keep the current content of the docker-compose files. Please put back the volumes to the docker-compose files and please use apache/hadoop-runner image.{quote}

The removal of the ozoneblockade compose file is based on point 1 of [your comment|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16845291&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16845291]. I also confirmed that the tests can run with the global docker compose file in the release tarball. I do not understand why you insist on using a development-style docker image with the release tarball. The current patch made no change to the global docker compose file, so I am confused by your statement about putting the volumes back into the docker-compose files or using the apache/hadoop-runner image. I feel the conversation has not been fruitful, with you insisting on doing everything the old way, which has been identified as a non-scalable approach. Chasing each other in circles over these lengthy conversations has not been productive.
Please help with a breakthrough in the conversation rather than insisting on going back to the broken model; I did nothing to break it and followed your words accurately.

5. {quote}hadoop-ozone/dist depends from the network-tests as it copies the files from the network-tests. This is not a hard dependency as of now as we copy the files directly from the src folder (build is not required) but I think it's more clear to add a provided dependency to hadoop-ozone/dist to show it's dependency. (As I remember you also suggested to use maven instead of direct copy from dist-layout stitching){quote}

That part of the conversation was split out: the docker build and maven assembly changes are moving to HDDS-1495. It seems the assembly and dependency issue will not be addressed unless the code is put together. Can we defer the dependency conversation until HDDS-1495 is committed, to prevent another circular conversation?

> Create a maven profile to run fault injection tests
> ---------------------------------------------------
>
>                 Key: HDDS-1458
>                 URL: https://issues.apache.org/jira/browse/HDDS-1458
>             Project: Hadoop Distributed Data Store
>          Issue Type: Test
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.4.1
>
>         Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch,
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch,
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch,
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch,
> HDDS-1458.012.patch, HDDS-1458.013.patch
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade. It would be
> nice to have the ability to start docker compose and exercise the blockade
> test cases against Ozone docker containers, and generate reports. These are
> optional integration tests to catch race conditions and fault tolerance
> defects.
> We can introduce a profile with id: it (short for integration tests).
> This will launch docker compose via maven-exec-plugin and run blockade to
> simulate container failures and timeouts.
>
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org