[ https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826541#comment-16826541 ]
Eric Yang edited comment on HDDS-1458 at 4/26/19 12:04 AM:
-----------------------------------------------------------

Patch 001 is a draft of the fault injection framework in maven + the developer docker image. What is in this patch:
# Modified start-build-env.sh to allow the docker CLI to run inside docker, so additional docker containers can be spun up for testing.
# Included pytest and blockade for network fault injection tests.
# Included a maven profile to launch the Ozone blockade test suites.

Disk fault injection can be simulated by running the Ozone docker image with read-only disks, using the maven docker compose plugin in combination with additional python tests. Developers/Jenkins can start the fault injection tests by running:
{code}
./start-build-env.sh
cd hadoop/hadoop-ozone/dist
mvn clean verify -Pit
{code}
A couple of points for discussion:
- Are we OK with the docker-in-docker addition to start-build-env.sh? It uses the --privileged flag to gain access to the host-level docker daemon. In my opinion, the existing setup already requires the user to have access to docker. The new privileged flag gives more power to break out of the container environment, but it is necessary to simulate network or disk failures. I am fine with not bundling this in start-build-env.sh, but it is nicer not to have to hunt for developer dependencies before starting development.
- Can we move hadoop-ozone/dist/src/main/blockade into the integration-test project? That seems a more logical place to host the fault injection test suites.
- Do we want the tests to run under a profile, or is the default "mvn verify" good enough?
> Create a maven profile to run fault injection tests
> ---------------------------------------------------
>
>                 Key: HDDS-1458
>                 URL: https://issues.apache.org/jira/browse/HDDS-1458
>             Project: Hadoop Distributed Data Store
>          Issue Type: Test
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>         Attachments: HDDS-1458.001.patch
>
> Some fault injection tests have been written using blockade. It would be
> nice to have the ability to start docker compose and exercise the blockade
> test cases against Ozone docker containers, and generate reports. These are
> optional integration tests to catch race conditions and fault tolerance
> defects.
> We can introduce a profile with id: it (short for integration tests). This
> will launch docker compose via maven-exec-plugin and run blockade to simulate
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
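As a sketch of the read-only-disk idea discussed in the comment: simulating a disk fault amounts to mounting a datanode data directory into the container with docker's ":ro" volume suffix, so writes fail as they would on a faulted disk. The helper name and paths below are hypothetical.

```python
# Hypothetical helper that builds the docker -v flag for a read-only mount,
# used to simulate a disk fault on an Ozone datanode data directory.
def ro_volume_args(host_dir, container_dir=None):
    """Return docker CLI args mounting host_dir read-only in the container."""
    container_dir = container_dir or host_dir
    return ["-v", f"{host_dir}:{container_dir}:ro"]


# Example (illustrative): spliced into a docker run invocation, e.g.
#   docker run ... -v /tmp/hadoop:/data:ro <ozone-image>
print(ro_volume_args("/tmp/hadoop", "/data"))
```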