[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16837902#comment-16837902
 ] 

Eric Yang edited comment on HDDS-1458 at 5/11/19 6:04 PM:
----------------------------------------------------------

{quote}Let us drop the share and make it tests/blockade, and tests/smoketests 
etc. That way, all tests that we ship can be found easily. Otherwise I am +1 on 
this change.{quote}

It would be more proper to have an ozone command that wraps python -m pytest 
-s share/ozone/tests/blockade and can be started from any directory, rather 
than asking the user to type the exact location and commands.  [~elek] 
explained that making test functionality part of the release, so system admins 
can validate their environment, is not aligned with what the current tests can 
do.  Blockade tests are single-node integration tests because [Blockade does 
not support Docker Swarm|https://blockade.readthedocs.io/en/latest/install.html].  
The other smoketests, based on docker-compose, are also not easy to run with 
Docker Swarm for distributed testing, because the Docker documentation advises 
to [keep binaries in the Docker image and remove volume 
requirements|https://docs.docker.com/compose/production/] for a production 
docker-compose file to work with Docker Swarm.  Hence, carrying any of the 
integration tests in the release tarball would not achieve the discussed end 
goals, from my point of view.  I am in favor of deferring the decision on 
carrying tests in the release binary tarball, to prevent us from running in 
circular discussions.  For the moment, the long path required to start a 
single-node test is inconvenient for developers, but we are not leading 
general users to a dead end.
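To illustrate the idea, here is a minimal sketch of such a wrapper.  The 
subcommand shape and the share/ozone/tests layout are assumptions for the 
sketch, not an existing CLI; the point is only that the script resolves the 
install root from its own location, so it works from any working directory:

{code}
#!/usr/bin/env bash
# Hypothetical "ozone test-blockade" wrapper (name and layout are assumed).
set -euo pipefail

# Resolve the install root from the wrapper's own path (e.g. bin/ozone),
# so the tests can be launched from any working directory.
resolve_blockade_dir() {
  local script_path="$1"
  local home
  home="$(cd "$(dirname "$script_path")/.." && pwd)"
  echo "${home}/share/ozone/tests/blockade"
}

# When invoked with "run", hand off to pytest against the shipped tests.
if [ "${1:-}" = "run" ]; then
  exec python -m pytest -s "$(resolve_blockade_dir "$0")"
fi
{code}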

{quote}I think It's time to revisit the current structure of ozone distribution 
tar files. The original structure is inherited from HADOOP-6255 which is 
defined for a project which includes multiple subproject (hadoop, yarn,...) and 
should support the creation of different RPM package from the same tar. I think 
for ozone we can improve the usability with reorganizing the dirs to a better 
structure.{quote}

The reason for HADOOP-6255 was more than just RPM packaging.  The motivation 
behind the reorganization was to create a directory structure that works as a 
standalone tarball while also following the general guidelines of the 
[Filesystem Hierarchy 
Standard|https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard].  It was 
proposed because several people saw the need to make the structure more 
Unix-like, so dependencies could be shared and the larger eco-system could 
work together.  This is the reason Hadoop HDFS and MapReduce integrate well 
with each other and keep shell script bootstrap cost low.  Earlier versions of 
YARN did not follow this conventional wisdom and were hard to integrate with 
the rest of Hadoop; the YARN community struggled with classpath issues for at 
least two years, by which time the hype around the YARN framework had already 
passed.  Given the high probability that we want to make Ozone as universal as 
possible for applications to integrate with, we have even more incentive to 
make the structure as flexible as possible.  This is only advice from my own 
past scars.  There is no perfect solution, but conventional wisdom has usually 
endured the test of time and saves energy.
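For illustration only, an FHS-like tarball layout separates entry points, 
shared artifacts, and configuration.  The exact directory names below are 
assumptions for the sketch, not a committed proposal:

{code}
ozone-X.Y.Z/
  bin/          user-facing entry points
  sbin/         daemon start/stop scripts
  etc/ozone/    configuration files
  libexec/      bootstrap helpers sourced by the bin/ scripts
  share/ozone/
    lib/        jars, shareable across the larger eco-system
    tests/      shipped test suites (blockade, smoketests)
{code}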



> Create a maven profile to run fault injection tests
> ---------------------------------------------------
>
>                 Key: HDDS-1458
>                 URL: https://issues.apache.org/jira/browse/HDDS-1458
>             Project: Hadoop Distributed Data Store
>          Issue Type: Test
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
