I replied before seeing Reuven's comment. In that case, it makes sense to keep them as post-commit tests.
Are these tests flaky because they fail some performance metrics? If that is the case, it makes sense to separate them from functional tests.

On Mon, Aug 13, 2018 at 10:30 AM, Reuven Lax <[email protected]> wrote:

> Nexmark was primarily written as a set of functionality tests, not
> performance tests. In fact, some of the Nexmark queries are written in
> knowingly inefficient ways, in order to better exercise more features of
> Beam. We also use Nexmark as performance tests (mostly out of convenience,
> because we already have those tests), but I believe that is a secondary
> use.
>
> Reuven
>
> On Mon, Aug 13, 2018 at 9:36 AM Mikhail Gryzykhin <[email protected]>
> wrote:
>
>> Hi everyone,
>>
>> As I understand it, many of the tests in the Nexmark set are
>> performance tests. I suggest renaming (or splitting) the set into
>> performance tests.
>>
>> Performance tests are much less reliable than post-commit tests and
>> should have different requirements. They are also much more flaky.
>>
>> Splitting performance tests out into a separate set will allow us to
>> treat their failures with lower priority and tolerate more flakes than
>> what we have decided for post-commit tests
>> <https://beam.apache.org/contribute/postcommits-policies/>.
>>
>> It will also be more natural to use a different builder from
>> PostcommitJobBuilder
>> <https://github.com/apache/beam/tree/master/.test-infra/jenkins>, since
>> we will want different requirements for perf tests.
>>
>> I do not believe this is a problem in the current state, but I expect
>> it to become an issue in the future as the number of perf tests grows.
>>
>> Regards,
>> --Mikhail
>>
>> Have feedback <http://go/migryz-feedback>?
