URL: https://github.com/freeipa/freeipa/pull/3401
Author: abbra
Title: #3401: [Backport][ipa-4-8] Make use of Azure Pipeline slicing
Action: opened
PR body: """ This PR was opened automatically because PR #3385 was pushed to master and backport to ipa-4-8 is required. """ To pull the PR as Git branch: git remote add ghfreeipa https://github.com/freeipa/freeipa git fetch ghfreeipa pull/3401/head:pr3401 git checkout pr3401
From 5eee09b5f4e28b3c483cde468630d27532968660 Mon Sep 17 00:00:00 2001 From: Stanislav Levin <s...@altlinux.org> Date: Mon, 8 Jul 2019 11:35:23 +0300 Subject: [PATCH 1/3] Simplify ipa-run-tests script This is a sort of rollback to the pre #93c158b05 state with several improvements. For now, the nodeids calculation by ipa-run-tests is not stable, since it depends on current working directory. Nodeids (tests addresses) are utilized by the other plugins, for example. Unfortunately, the `pytest_load_initial_conftests` hook doesn't correctly work with pytest internal paths, since it is called after the calculation of rootdir was performed, for example. Eventually, it's simpler to follow the default convention for Python test discovery. There is at least one drawback of new "old" implementation. The ignore rules don't support globs, because pytest 4.3.0+ has the same facility via `--ignore-glob`: > Add the `--ignore-glob` parameter to exclude test-modules with > Unix shell-style wildcards. Add the collect_ignore_glob for > conftest.py to exclude test-modules with Unix shell-style > wildcards. Upon switching to pytest4 it will be possible to utilize this. Anyway, tests for checking current basic facilities of ipa-run-tests were added. Fixes: https://pagure.io/freeipa/issue/8007 Signed-off-by: Stanislav Levin <s...@altlinux.org> --- .travis.yml | 1 + Makefile.am | 3 +- ipatests/azure/azure-pipelines.yml | 1 + ipatests/conftest.py | 3 +- ipatests/ipa-run-tests | 113 ++++-------- ipatests/man/ipa-run-tests.1 | 34 ++-- ipatests/setup.py | 1 + ipatests/test_ipatests_plugins/__init__.py | 7 + .../test_ipa_run_tests.py | 162 ++++++++++++++++++ 9 files changed, 217 insertions(+), 108 deletions(-) create mode 100644 ipatests/test_ipatests_plugins/__init__.py create mode 100644 ipatests/test_ipatests_plugins/test_ipa_run_tests.py diff --git a/.travis.yml b/.travis.yml index 126fced23a..d9d26d9c6d 100644 --- a/.travis.yml +++ b/.travis.yml @@ -38,6 +38,7 @@ env: test_ipaplatform test_ipapython test_ipaserver + test_ipatests_plugins test_xmlrpc/test_[l-z]*.py" - TASK_TO_RUN="tox" TEST_RUNNER_CONFIG=".test_runner_config.yaml" diff --git a/Makefile.am b/Makefile.am index 7d84aa5325..31a7abaf50 100644 --- a/Makefile.am +++ b/Makefile.am @@ -207,7 +207,8 @@ fastcheck: fasttest: $(GENERATED_PYTHON_FILES) ipasetup.py @ # --ignore doubles speed of total test run compared to pytest.skip() @ # on module. 
- PYTHONPATH=$(abspath $(top_srcdir)) $(PYTHON) ipatests/ipa-run-tests \ + PATH=$(abspath ipatests):$$PATH PYTHONPATH=$(abspath $(top_srcdir)) \ + $(PYTHON) ipatests/ipa-run-tests \ --skip-ipaapi \ --ignore $(abspath $(top_srcdir))/ipatests/test_integration \ --ignore $(abspath $(top_srcdir))/ipatests/test_xmlrpc diff --git a/ipatests/azure/azure-pipelines.yml b/ipatests/azure/azure-pipelines.yml index 1c92f37b3e..89f81b87f3 100644 --- a/ipatests/azure/azure-pipelines.yml +++ b/ipatests/azure/azure-pipelines.yml @@ -137,6 +137,7 @@ jobs: - test_ipalib - test_ipaplatform - test_ipapython + - test_ipatests_plugins testsToIgnore: - test_integration - test_webui diff --git a/ipatests/conftest.py b/ipatests/conftest.py index c09df4aed6..ae523c88a1 100644 --- a/ipatests/conftest.py +++ b/ipatests/conftest.py @@ -28,7 +28,8 @@ 'ipatests.pytest_ipa.beakerlib', 'ipatests.pytest_ipa.declarative', 'ipatests.pytest_ipa.nose_compat', - 'ipatests.pytest_ipa.integration' + 'ipatests.pytest_ipa.integration', + 'pytester', ] diff --git a/ipatests/ipa-run-tests b/ipatests/ipa-run-tests index 4585e8e04b..3b3e462b15 100755 --- a/ipatests/ipa-run-tests +++ b/ipatests/ipa-run-tests @@ -22,15 +22,13 @@ """Pytest wrapper for running an installed (not in-tree) IPA test suite -Any command-line arguments are passed directly to py.test. -The current directory is changed to the locaition of the ipatests package, -so any relative paths given will be based on the ipatests module's path +Any command-line arguments are passed directly to pytest. +The current directory is changed to the location of the ipatests package, +so any relative paths given will be based on the ipatests module's path. """ import os -import copy import sys -import glob import pytest @@ -41,82 +39,29 @@ os.environ['IPATEST_XUNIT_PATH'] = os.path.join(os.getcwd(), 'nosetests.xml') HERE = os.path.dirname(os.path.abspath(ipatests.__file__)) - -class ArgsManglePlugin(object): - """Modify pytest arguments - - * Add confcutdir if hasn't been set already - * Mangle paths to support tests both relative to basedir and ipatests/ - * Default to ipatests/ if no tests are specified - """ - def pytest_load_initial_conftests(self, early_config, parser, args): - # During initial loading, parser supports only basic options - ns = early_config.known_args_namespace - if not ns.confcutdir: - # add config cut directory to only load fixtures from ipatests/ - args.insert(0, '--confcutdir={}'.format(HERE)) - - if not ns.file_or_dir: - # No file or directory found, run all tests - args.append(HERE) - else: - vargs = [] - for name in ns.file_or_dir: - idx = args.index(name) - # split on pytest separator - # ipatests/test_ipaplatform/test_importhook.py::test_override - filename, sep, suffix = name.partition('::') - # a file or directory relative to cwd or already absolute - if os.path.exists(filename): - continue - if '*' in filename: - # Expand a glob, we'll flatten the list later - paths = glob.glob(os.path.join(HERE, filename)) - vargs.append([idx, paths]) - else: - # a file or directory relative to ipatests package - args[idx] = sep.join((os.path.join(HERE, filename), suffix)) - # flatten and insert all expanded file names - base = 0 - for idx, items in vargs: - args.pop(base + idx) - for item in items: - args.insert(base + idx, item) - base += len(items) - - # replace ignores, e.g. 
"--ignore test_integration" is changed to - # "--ignore path/to/ipatests/test_integration" - if ns.ignore: - vargs = [] - for ignore in ns.ignore: - idx = args.index(ignore) - if os.path.exists(ignore): - continue - if '*' in ignore: - # expand a glob, we'll flatten the list later - paths = glob.glob(os.path.join(HERE, ignore)) - vargs.append([idx, paths]) - else: - args[idx] = os.path.join(HERE, ignore) - # flatten and insert all expanded file names - base = 0 - for idx, items in vargs: - # since we are expanding, remove old pair - # --ignore and old file name - args.pop(base + idx) - args.pop(base + idx) - for item in items: - # careful: we need to add a pair - # --ignore and new filename - args.insert(base + idx, '--ignore') - args.insert(base + idx, item) - base += len(items) * 2 - 1 - - # rebuild early_config's known args with new args. The known args - # are used for initial conftest.py from ipatests, which adds - # additional arguments. - early_config.known_args_namespace = parser.parse_known_args( - args, namespace=copy.copy(early_config.option)) - - -sys.exit(pytest.main(plugins=[ArgsManglePlugin()])) +def has_option(option): + if option in sys.argv: + return True + for arg in sys.argv: + for argi in arg.split("="): + if option in argi: + return True + return False + +# don't override specified command line options +if not has_option("confcutdir"): + sys.argv.insert(1, "--confcutdir={}".format(HERE)) +# for backward compatibility +if not has_option("cache_dir"): + sys.argv[1:1] = ["-o", 'cache_dir={}'.format(os.path.join(os.getcwd(), + ".pytest_cache"))] + +pyt_args = [sys.executable, "-c", + "import sys,pytest;sys.exit(pytest.main())"] + sys.argv[1:] +# shell is needed to perform globbing +sh_args = ["/bin/sh", "--norc", "--noprofile", "-c", "--"] +pyt_args_esc = ["'{}'".format(x) if " " in x else x for x in pyt_args] +args = sh_args + [" ".join(pyt_args_esc)] +os.chdir(HERE) +sys.stdout.flush() +os.execv(args[0], args) diff --git a/ipatests/man/ipa-run-tests.1 b/ipatests/man/ipa-run-tests.1 index 66c2a0b00f..08f2f00586 100644 --- a/ipatests/man/ipa-run-tests.1 +++ b/ipatests/man/ipa-run-tests.1 @@ -22,42 +22,32 @@ ipa\-run\-tests \- Run the FreeIPA test suite .SH "SYNOPSIS" ipa\-run\-tests [options] .SH "DESCRIPTION" -ipa\-run\-tests is a wrapper around nosetests that run the FreeIPA test suite. +ipa\-run\-tests is a wrapper around Pytest that run the FreeIPA test suite. It is intended to be used for developer testing and in continuous integration systems. -It loads IPA-internal Nose plugins ordered-tests and beakerlib. -The ordered-tests plugin is enabled automatically. - -The FreeIPA test suite installed system\-wide is selected via Nose's \-\-where -option. It is possible to select a subset of the entire test suite by specifying a test file relative to the ipatests package, for example: ipa-run-tests test_integration/test_simple_replication.py .SH "OPTIONS" -All command-line options are passed to the underlying Nose runner. -See nosetests(1) for a complete list. - -The internal IPA plugins add an extra option: - -.TP -\fB\-\-with-beakerlib\fR -Enable BeakerLib integration. -Test phases, failures and passes, and log messages are reported using -beakerlib(1) commands. -This option requires the beakerlib.sh script to be sourced. +All command-line options are passed to the underlying Pytest runner. +See "pytest --help" for a complete list. 
.SH "EXIT STATUS" -0 if the command was successful - -nonzero if any error or failure occurred +Running pytest can result in six different exit codes: +Exit code 0: All tests were collected and passed successfully +Exit code 1: Tests were collected and run but some of the tests failed +Exit code 2: Test execution was interrupted by the user +Exit code 3: Internal error happened while executing tests +Exit code 4: pytest command line usage error +Exit code 5: No tests were collected .SH "CONFIGURATION" Please see ipa-test-config(1) for a description of configuration environment variables. .SH "REFERENCES" -A full description of the FreeIPA integration testing framework is available at -http://www.freeipa.org/page/V3/Integration_testing +A full description of the FreeIPA testing is available at +https://www.freeipa.org/page/Testing diff --git a/ipatests/setup.py b/ipatests/setup.py index cd498b7870..8b89f90ca8 100644 --- a/ipatests/setup.py +++ b/ipatests/setup.py @@ -44,6 +44,7 @@ "ipatests.test_ipapython", "ipatests.test_ipaserver", "ipatests.test_ipaserver.test_install", + "ipatests.test_ipatests_plugins", "ipatests.test_webui", "ipatests.test_xmlrpc", "ipatests.test_xmlrpc.tracker" diff --git a/ipatests/test_ipatests_plugins/__init__.py b/ipatests/test_ipatests_plugins/__init__.py new file mode 100644 index 0000000000..4f35bc0422 --- /dev/null +++ b/ipatests/test_ipatests_plugins/__init__.py @@ -0,0 +1,7 @@ +# +# Copyright (C) 2019 FreeIPA Contributors see COPYING for license +# + +""" +Sub-package containing unit tests for IPA internal test plugins +""" diff --git a/ipatests/test_ipatests_plugins/test_ipa_run_tests.py b/ipatests/test_ipatests_plugins/test_ipa_run_tests.py new file mode 100644 index 0000000000..b1bc720a99 --- /dev/null +++ b/ipatests/test_ipatests_plugins/test_ipa_run_tests.py @@ -0,0 +1,162 @@ +# +# Copyright (C) 2019 FreeIPA Contributors see COPYING for license +# + +import os + +import pytest + +MOD_NAME = "test_module_{}" +FUNC_NAME = "test_func_{}" +MODS_NUM = 5 + + +@pytest.fixture +def ipatestdir(testdir, monkeypatch): + """ + Create MODS_NUM test modules within testdir/ipatests. + Each module contains 1 test function. 
+ Patch PYTHONPATH with created package path to override the system's + ipatests + """ + ipatests_dir = testdir.mkpydir("ipatests") + for i in range(MODS_NUM): + ipatests_dir.join("{}.py".format(MOD_NAME.format(i))).write( + "def {}(): pass".format(FUNC_NAME.format(i))) + + python_path = os.pathsep.join( + filter(None, [str(testdir.tmpdir), os.environ.get("PYTHONPATH", "")])) + monkeypatch.setenv("PYTHONPATH", python_path) + + def run_ipa_tests(*args): + cmdargs = ["ipa-run-tests", "-v"] + list(args) + return testdir.run(*cmdargs, timeout=60) + + testdir.run_ipa_tests = run_ipa_tests + return testdir + + +def test_ipa_run_tests_basic(ipatestdir): + """ + Run ipa-run-tests with default arguments + """ + result = ipatestdir.run_ipa_tests() + assert result.ret == 0 + result.assert_outcomes(passed=MODS_NUM) + for mod_num in range(MODS_NUM): + result.stdout.fnmatch_lines(["*{mod}.py::{func} PASSED*".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))]) + + +def test_ipa_run_tests_glob1(ipatestdir): + """ + Run ipa-run-tests using glob patterns to collect tests + """ + result = ipatestdir.run_ipa_tests("{mod}".format( + mod="test_modul[!E]?[0-5]*")) + assert result.ret == 0 + result.assert_outcomes(passed=MODS_NUM) + for mod_num in range(MODS_NUM): + result.stdout.fnmatch_lines(["*{mod}.py::{func} PASSED*".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))]) + + +def test_ipa_run_tests_glob2(ipatestdir): + """ + Run ipa-run-tests using glob patterns to collect tests + """ + result = ipatestdir.run_ipa_tests("{mod}".format( + mod="test_module_{0,1}*")) + assert result.ret == 0 + result.assert_outcomes(passed=2) + for mod_num in range(2): + result.stdout.fnmatch_lines(["*{mod}.py::{func} PASSED*".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))]) + + +def test_ipa_run_tests_specific_nodeid(ipatestdir): + """ + Run ipa-run-tests using nodeid to collect test + """ + mod_num = 0 + result = ipatestdir.run_ipa_tests("{mod}.py::{func}".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))) + assert result.ret == 0 + result.assert_outcomes(passed=1) + result.stdout.fnmatch_lines(["*{mod}.py::{func} PASSED*".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))]) + + +@pytest.mark.parametrize( + "expr", + [["-k", "not {func}".format(func=FUNC_NAME.format(0))], + ["-k not {func}".format(func=FUNC_NAME.format(0))]]) +def test_ipa_run_tests_expression(ipatestdir, expr): + """ + Run ipa-run-tests using expression + """ + result = ipatestdir.run_ipa_tests(*expr) + assert result.ret == 0 + result.assert_outcomes(passed=4) + for mod_num in range(1, MODS_NUM): + result.stdout.fnmatch_lines(["*{mod}.py::{func} PASSED*".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))]) + + +def test_ipa_run_tests_ignore_basic(ipatestdir): + """ + Run ipa-run-tests ignoring one test module + """ + result = ipatestdir.run_ipa_tests( + "--ignore", "{mod}.py".format(mod=MOD_NAME.format(0)), + "--ignore", "{mod}.py".format(mod=MOD_NAME.format(1)), + ) + assert result.ret == 0 + result.assert_outcomes(passed=MODS_NUM - 2) + for mod_num in range(2, MODS_NUM): + result.stdout.fnmatch_lines(["*{mod}.py::{func} PASSED*".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))]) + + +def test_ipa_run_tests_defaultargs(ipatestdir): + """ + Checking the ipa-run-tests defaults: + * cachedir + * rootdir + """ + mod_num = 0 + result = ipatestdir.run_ipa_tests("{mod}.py::{func}".format( + 
mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))) + assert result.ret == 0 + result.assert_outcomes(passed=1) + result.stdout.re_match_lines([ + "^cachedir: {cachedir}$".format( + cachedir=os.path.join(os.getcwd(), ".pytest_cache")), + "^rootdir: {rootdir}([,].*)?$".format( + rootdir=os.path.join(str(ipatestdir.tmpdir), "ipatests")) + ]) + + +def test_ipa_run_tests_confcutdir(ipatestdir): + """ + Checking the ipa-run-tests defaults: + * confcutdir + """ + mod_num = 0 + ipatestdir.makeconftest("import somenotexistedpackage") + result = ipatestdir.run_ipa_tests("{mod}.py::{func}".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))) + assert result.ret == 0 + result.assert_outcomes(passed=1) + result.stdout.fnmatch_lines(["*{mod}.py::{func} PASSED*".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))]) From 533bc18556b1291234ddde1463ca2ca0b1b88ccd Mon Sep 17 00:00:00 2001 From: Stanislav Levin <s...@altlinux.org> Date: Wed, 3 Jul 2019 12:56:32 +0300 Subject: [PATCH 2/3] Make use of Azure Pipeline slicing The unit tests execution time within Azure Pipelines(AP) is not balanced. One test job(Base) takes ~13min, while another(XMLRPC) ~28min. Fortunately, AP supports slicing: > An agent job can be used to run a suite of tests in parallel. For example, you can run a large suite of 1000 tests on a single agent. Or, you can use two agents and run 500 tests on each one in parallel. To leverage slicing, the tasks in the job should be smart enough to understand the slice they belong to. >The step that runs the tests in a job needs to know which test slice should be run. The variables System.JobPositionInPhase and System.TotalJobsInPhase can be used for this purpose. Thus, to support this pytest should know how to split the test suite into groups(slices). For this, a new internal pytest plugin was added. About plugin. - Tests within a slice are grouped by test modules because not all of the tests within the module are independent from each other. - Slices are balanced by the number of tests within test module. - To run some module within its own environment there is a dedicated slice option (could help with extremely slow tests) Examples. 
- To split `test_cmdline` tests into 2 slices and run the first one: ipa-run-tests --slices=2 --slice-num=1 test_cmdline - To split tests into 2 slices, then to move one module out to its own slice and run the second one: ipa-run-tests --slices=2 --slice-dedicated=test_cmdline/test_cli.py \ --slice-num=2 test_cmdline Fixes: https://pagure.io/freeipa/issue/8008 Signed-off-by: Stanislav Levin <s...@altlinux.org> --- ipatests/azure/azure-pipelines.yml | 18 +- ipatests/azure/azure-run-tests.sh | 9 +- ipatests/azure/templates/run-test.yml | 4 + ipatests/azure/templates/test-jobs.yml | 7 +- ipatests/conftest.py | 1 + ipatests/pytest_ipa/slicing.py | 200 ++++++++++++++++++ .../test_ipatests_plugins/test_slicing.py | 127 +++++++++++ 7 files changed, 351 insertions(+), 15 deletions(-) create mode 100644 ipatests/pytest_ipa/slicing.py create mode 100644 ipatests/test_ipatests_plugins/test_slicing.py diff --git a/ipatests/azure/azure-pipelines.yml b/ipatests/azure/azure-pipelines.yml index 89f81b87f3..6d75956bdb 100644 --- a/ipatests/azure/azure-pipelines.yml +++ b/ipatests/azure/azure-pipelines.yml @@ -128,8 +128,8 @@ jobs: - template: templates/test-jobs.yml parameters: - jobName: Base - jobTitle: Base tests + jobName: BASE_XMLRPC + jobTitle: BASE and XMLRPC tests testsToRun: - test_cmdline - test_install @@ -138,20 +138,12 @@ jobs: - test_ipaplatform - test_ipapython - test_ipatests_plugins - testsToIgnore: - - test_integration - - test_webui - - test_ipapython/test_keyring.py - taskToRun: run-tests - -- template: templates/test-jobs.yml - parameters: - jobName: XMLRPC - jobTitle: XMLRPC tests - testsToRun: - test_xmlrpc testsToIgnore: - test_integration - test_webui - test_ipapython/test_keyring.py + testsToDedicate: + - test_xmlrpc/test_dns_plugin.py taskToRun: run-tests + tasksParallel: 3 diff --git a/ipatests/azure/azure-run-tests.sh b/ipatests/azure/azure-run-tests.sh index aef13568ec..c66e590281 100755 --- a/ipatests/azure/azure-run-tests.sh +++ b/ipatests/azure/azure-run-tests.sh @@ -6,6 +6,9 @@ server_password=Secret123 # Normalize spacing and expand the list afterwards. Remove {} for the single list element case tests_to_run=$(eval "echo {$(echo $TESTS_TO_RUN | sed -e 's/[ \t]+*/,/g')}" | tr -d '{}') tests_to_ignore=$(eval "echo --ignore\ {$(echo $TESTS_TO_IGNORE | sed -e 's/[ \t]+*/,/g')}" | tr -d '{}') +tests_to_dedicate= +[[ -n "$TESTS_TO_DEDICATE" ]] && \ +tests_to_dedicate=$(eval "echo --slice-dedicated={$(echo $TESTS_TO_DEDICATE | sed -e 's/[ \t]+*/,/g')}" | tr -d '{}') systemctl --now enable firewalld echo "Installing FreeIPA master for the domain ${server_domain} and realm ${server_realm}" @@ -39,7 +42,11 @@ if [ "$install_result" -eq 0 ] ; then ipa-test-task --help ipa-run-tests --help - ipa-run-tests ${tests_to_ignore} --verbose --with-xunit '-k not test_dns_soa' ${tests_to_run} + ipa-run-tests ${tests_to_ignore} \ + ${tests_to_dedicate} \ + --slices=${SYSTEM_TOTALJOBSINPHASE:-1} \ + --slice-num=${SYSTEM_JOBPOSITIONINPHASE:-1} \ + --verbose --with-xunit '-k not test_dns_soa' ${tests_to_run} tests_result=$? 
else echo "ipa-server-install failed with code ${save_result}, skip IPA tests" diff --git a/ipatests/azure/templates/run-test.yml b/ipatests/azure/templates/run-test.yml index 1e44984194..301115f16a 100644 --- a/ipatests/azure/templates/run-test.yml +++ b/ipatests/azure/templates/run-test.yml @@ -5,6 +5,7 @@ parameters: taskToRun: 'run-tests' testsToRun: '' testsToIgnore: '' + testsToDedicate: '' steps: - script: | @@ -22,7 +23,10 @@ steps: set -e docker exec --env TESTS_TO_RUN="${{ parameters.testsToRun }}" \ --env TESTS_TO_IGNORE="${{ parameters.testsToIgnore }}" \ + --env TESTS_TO_DEDICATE="${{ parameters.testsToDedicate }}" \ --env CI_RUNNER_LOGS_DIR="${{ parameters.logsPath }}" \ + --env SYSTEM_TOTALJOBSINPHASE=$(System.TotalJobsInPhase) \ + --env SYSTEM_JOBPOSITIONINPHASE=$(System.JobPositionInPhase) \ --privileged -t \ $(createContainer.containerName) \ /bin/bash --noprofile --norc -x /freeipa/ipatests/azure/azure-${{parameters.taskToRun}}.sh diff --git a/ipatests/azure/templates/test-jobs.yml b/ipatests/azure/templates/test-jobs.yml index c79058c9e7..5b0173fdf3 100644 --- a/ipatests/azure/templates/test-jobs.yml +++ b/ipatests/azure/templates/test-jobs.yml @@ -3,7 +3,9 @@ parameters: jobTitle: '' testsToIgnore: [] testsToRun: [] + testsToDedicate: [] taskToRun: '' + tasksParallel: 1 jobs: - job: ${{ parameters.jobName }} @@ -12,6 +14,8 @@ jobs: condition: succeeded() pool: vmImage: 'Ubuntu-16.04' + strategy: + parallel: ${{ parameters.tasksParallel }} steps: - template: setup-test-environment.yml - template: run-test.yml @@ -21,6 +25,7 @@ jobs: taskToRun: ${{ parameters.taskToRun}} testsToRun: ${{ join(' ', parameters.testsToRun ) }} testsToIgnore: ${{ join(' ', parameters.testsToIgnore ) }} + testsToDedicate: ${{ join(' ', parameters.testsToDedicate ) }} - task: PublishTestResults@2 inputs: testResultsFiles: $(CI_RUNNER_LOGS_DIR)/nosetests.xml @@ -28,4 +33,4 @@ jobs: condition: succeededOrFailed() - template: save-test-artifacts.yml parameters: - logsArtifact: logs-${{parameters.jobName}}-$(Build.BuildId)-$(Agent.OS)-$(Agent.OSArchitecture) + logsArtifact: logs-${{parameters.jobName}}-$(Build.BuildId)-$(System.JobPositionInPhase)-$(Agent.OS)-$(Agent.OSArchitecture) diff --git a/ipatests/conftest.py b/ipatests/conftest.py index ae523c88a1..62045de319 100644 --- a/ipatests/conftest.py +++ b/ipatests/conftest.py @@ -25,6 +25,7 @@ pytest_plugins = [ 'ipatests.pytest_ipa.additional_config', + 'ipatests.pytest_ipa.slicing', 'ipatests.pytest_ipa.beakerlib', 'ipatests.pytest_ipa.declarative', 'ipatests.pytest_ipa.nose_compat', diff --git a/ipatests/pytest_ipa/slicing.py b/ipatests/pytest_ipa/slicing.py new file mode 100644 index 0000000000..1722085142 --- /dev/null +++ b/ipatests/pytest_ipa/slicing.py @@ -0,0 +1,200 @@ +# +# Copyright (C) 2019 FreeIPA Contributors see COPYING for license +# + +""" +The main purpose of this plugin is to slice a test suite into +several pieces to run each within its own test environment(for example, +an Agent of Azure Pipelines). + +Tests within a slice are grouped by test modules because not all of the tests +within the module are independent from each other. + +Slices are balanced by the number of tests within test module. +* Actually, tests should be grouped by the execution duration. +This could be achieved by the caching of tests results. Azure Pipelines +caching is in development. * +To workaround slow tests a dedicated slice is added. 
+ +:param slices: A total number of slices to split the test suite into +:param slice-num: A number of slice to run +:param slice-dedicated: A file path to the module to run in its own slice + +**Examples** + +Inputs: +ipa-run-tests test_cmdline --collectonly -qq +... +test_cmdline/test_cli.py: 39 +test_cmdline/test_help.py: 7 +test_cmdline/test_ipagetkeytab.py: 16 +... + +* Split tests into 2 slices and run the first one: + +ipa-run-tests --slices=2 --slice-num=1 test_cmdline + +The outcome would be: +... +Running slice: 1 (46 tests) +Modules: +test_cmdline/test_cli.py: 39 +test_cmdline/test_help.py: 7 +... + +* Split tests into 2 slices, move one module out to its own slice +and run the second one + +ipa-run-tests --slices=2 --slice-dedicated=test_cmdline/test_cli.py \ + --slice-num=2 test_cmdline + +The outcome would be: +... +Running slice: 2 (23 tests) +Modules: +test_cmdline/test_ipagetkeytab.py: 16 +test_cmdline/test_help.py: 7 +... + +""" +import pytest + + +def pytest_addoption(parser): + group = parser.getgroup("slicing") + group.addoption( + '--slices', dest='slices_num', type=int, + help='The number of slices to split the test suite into') + group.addoption( + '--slice-num', dest='slice_num', type=int, + help='The specific number of slice to run') + group.addoption( + '--slice-dedicated', action="append", dest='slices_dedicated', + help='The file path to the module to run in dedicated slice') + + +@pytest.hookimpl(hookwrapper=True) +def pytest_collection_modifyitems(session, config, items): + yield + slice_count = config.getoption('slices_num') + slice_id = config.getoption('slice_num') + modules_dedicated = config.getoption('slices_dedicated') + # deduplicate + if modules_dedicated: + modules_dedicated = list(set(modules_dedicated)) + + # sanity check + if not slice_count or not slice_id: + return + + # nothing to do + if slice_count == 1: + return + + if modules_dedicated and len(modules_dedicated) > slice_count: + raise ValueError( + "Dedicated slice number({}) shouldn't be greater than the number " + "of slices({})".format(len(modules_dedicated), slice_count)) + + if slice_id > slice_count: + raise ValueError( + "Slice number({}) shouldn't be greater than the number of slices" + "({})".format(slice_id, slice_count)) + + modules = [] + # Calculate modules within collection + # Note: modules within pytest collection could be placed in not consecutive + # order + for number, item in enumerate(items): + name = item.nodeid.split("::", 1)[0] + if not modules or name != modules[-1]["name"]: + modules.append({"name": name, "begin": number, "end": number}) + else: + modules[-1]["end"] = number + + if slice_count > len(modules): + raise ValueError( + "Total number of slices({}) shouldn't be greater than the number " + "of Python test modules({})".format(slice_count, len(modules))) + + slices_dedicated = [] + if modules_dedicated: + slices_dedicated = [ + [m] for m in modules for x in modules_dedicated if x in m["name"] + ] + if modules_dedicated and len(slices_dedicated) != len(modules_dedicated): + raise ValueError( + "The number of dedicated slices({}) should be equal to the " + "number of dedicated modules({})".format( + slices_dedicated, modules_dedicated)) + + if (slices_dedicated and len(slices_dedicated) == slice_count and + len(slices_dedicated) != len(modules)): + raise ValueError( + "The total number of slices({}) is not sufficient to run dedicated" + " modules({}) as well as usual ones({})".format( + slice_count, len(slices_dedicated), + len(modules) - 
len(slices_dedicated))) + + # remove dedicated modules from usual ones + for s in slices_dedicated: + for m in s: + if m in modules: + modules.remove(m) + + avail_slice_count = slice_count - len(slices_dedicated) + # initialize slices with empty lists + slices = [[] for i in range(slice_count)] + + # initialize slices with dedicated ones + for sn, s in enumerate(slices_dedicated): + slices[sn] = s + + # initial reverse sort by the number of tests in a test module + modules.sort(reverse=True, key=lambda x: x["end"] - x["begin"] + 1) + reverse = True + while modules: + for sslice_num, sslice in enumerate(sorted( + modules[:avail_slice_count], + reverse=reverse, key=lambda x: x["end"] - x["begin"] + 1)): + slices[len(slices_dedicated) + sslice_num].append(sslice) + + modules[:avail_slice_count] = [] + reverse = not reverse + + calc_ntests = sum(x["end"] - x["begin"] + 1 for s in slices for x in s) + assert calc_ntests == len(items) + assert len(slices) == slice_count + + # the range of the given argument `slice_id` begins with 1(one) + sslice = slices[slice_id - 1] + + new_items = [] + for m in sslice: + new_items += items[m["begin"]:m["end"] + 1] + items[:] = new_items + + tw = config.get_terminal_writer() + if tw: + tw.line() + tw.write( + "Running slice: {} ({} tests)\n".format( + slice_id, + len(items), + ), + cyan=True, + bold=True, + ) + tw.write( + "Modules:\n", + yellow=True, + bold=True, + ) + for module in sslice: + tw.write( + "{}: {}\n".format( + module["name"], + module["end"] - module["begin"] + 1), + yellow=True, + ) + tw.line() diff --git a/ipatests/test_ipatests_plugins/test_slicing.py b/ipatests/test_ipatests_plugins/test_slicing.py new file mode 100644 index 0000000000..e21af33a1d --- /dev/null +++ b/ipatests/test_ipatests_plugins/test_slicing.py @@ -0,0 +1,127 @@ +# +# Copyright (C) 2019 FreeIPA Contributors see COPYING for license +# + +import glob + +import pytest + +MOD_NAME = "test_module_{}" +FUNC_NAME = "test_func_{}" +PYTEST_INTERNAL_ERROR = 3 +MODS_NUM = 5 + + +@pytest.fixture +def ipatestdir(testdir): + """ + Create MODS_NUM test modules within testdir. + Each module contains 1 test function. + """ + testdir.makeconftest( + """ + pytest_plugins = ["ipatests.pytest_ipa.slicing"] + """ + ) + for i in range(MODS_NUM): + testdir.makepyfile( + **{MOD_NAME.format(i): + """ + def {func}(): + pass + """.format(func=FUNC_NAME.format(i)) + } + ) + return testdir + + +@pytest.mark.parametrize( + "nslices,nslices_d,groups", + [(2, 0, [[x for x in range(MODS_NUM) if x % 2 == 0], + [x for x in range(MODS_NUM) if x % 2 != 0]]), + (2, 1, [[0], [x for x in range(1, MODS_NUM)]]), + (1, 0, [[x for x in range(MODS_NUM)]]), + (1, 1, [[x for x in range(MODS_NUM)]]), + (MODS_NUM, MODS_NUM, [[x] for x in range(MODS_NUM)]), + ]) +def test_slicing(ipatestdir, nslices, nslices_d, groups): + """ + Positive tests. + + Run `nslices` slices, including `nslices_d` dedicated slices. + The `groups` is an expected result of slices grouping. + + For example, there are 5 test modules. If one runs them in + two slices (without dedicated ones) the expected result will + be [[0, 2, 4], [1, 3]]. This means, that first slice will run + modules 0, 2, 4, second one - 1 and 3. + + Another example, there are 5 test modules. We want to run them + in two slices. Also we specify module 0 as dedicated. + The expected result will be [[0], [1, 2, 3, 4]], which means, that + first slice will run module 0, second one - 1, 2, 3, 4. + + If the given slice count is one, then this plugin does nothing. 
+ """ + for sl in range(nslices): + args = [ + "-v", + "--slices={}".format(nslices), + "--slice-num={}".format(sl + 1) + ] + for dslice in range(nslices_d): + args.append( + "--slice-dedicated={}.py".format(MOD_NAME.format(dslice))) + result = ipatestdir.runpytest(*args) + assert result.ret == 0 + result.assert_outcomes(passed=len(groups[sl])) + for mod_num in groups[sl]: + result.stdout.fnmatch_lines(["*{mod}.py::{func} PASSED*".format( + mod=MOD_NAME.format(mod_num), + func=FUNC_NAME.format(mod_num))]) + + +@pytest.mark.parametrize( + "nslices,nslices_d,nslice,dmod,err_message", + [(2, 3, 1, None, + "Dedicated slice number({}) shouldn't be greater than" + " the number of slices({})".format(3, 2)), + (MODS_NUM, 0, MODS_NUM + 1, None, + "Slice number({}) shouldn't be greater than the number of slices" + "({})".format( + MODS_NUM + 1, MODS_NUM)), + (MODS_NUM + 1, 1, 1, None, + "Total number of slices({}) shouldn't be greater" + " than the number of Python test modules({})".format( + MODS_NUM + 1, MODS_NUM)), + (MODS_NUM, MODS_NUM, 1, "notexisted_module", + "The number of dedicated slices({}) should be equal to the " + "number of dedicated modules({})".format( + [], ["notexisted_module.py"])), + (MODS_NUM - 1, MODS_NUM - 1, 1, None, + "The total number of slices({}) is not sufficient to" + " run dedicated modules({}) as well as usual ones({})".format( + MODS_NUM - 1, MODS_NUM - 1, 1)), + ]) +def test_slicing_negative(ipatestdir, nslices, nslices_d, nslice, dmod, + err_message): + """ + Negative scenarios + """ + args = [ + "-v", + "--slices={}".format(nslices), + "--slice-num={}".format(nslice) + ] + if dmod is None: + for dslice in range(nslices_d): + args.append( + "--slice-dedicated={}.py".format(MOD_NAME.format(dslice))) + else: + args.append( + "--slice-dedicated={}.py".format(dmod)) + result = ipatestdir.runpytest(*args) + assert result.ret == PYTEST_INTERNAL_ERROR + result.assert_outcomes() + result.stdout.fnmatch_lines(["*ValueError: {err_message}*".format( + err_message=glob.escape(err_message))]) From 4e7dc6cc811c4e98611b435c303c44e1b6eb893f Mon Sep 17 00:00:00 2001 From: Stanislav Levin <s...@altlinux.org> Date: Thu, 11 Jul 2019 22:12:54 +0300 Subject: [PATCH 3/3] Avoid use of '/tmp' for pip operations `ipa-run-tests` is not an entry_point script, so pip during an installation of ipatests package checks if the file path is executable. If not - just don't set the executable permission bits. pip's working directory defaults to /tmp/xxx. Thus, if /tmp is mounted with noexec such scripts lose their executable ability after an installation into virtualenv. This was found on Travis + freeipa/freeipa-test-runner:master-latest docker image. Build directory of pip could be changed via env variable PIP_BUILD, for example. Fixes: https://pagure.io/freeipa/issue/8009 Signed-off-by: Stanislav Levin <s...@altlinux.org> --- .tox-install.sh | 16 ++++++++++++++-- tox.ini | 8 ++++---- 2 files changed, 18 insertions(+), 6 deletions(-) diff --git a/.tox-install.sh b/.tox-install.sh index d9c44602b2..95f5c1e70a 100755 --- a/.tox-install.sh +++ b/.tox-install.sh @@ -4,8 +4,9 @@ set -ex FLAVOR="$1" ENVPYTHON="$(realpath "$2")" ENVSITEPACKAGESDIR="$(realpath "$3")" -# 3...end are package requirements -shift 3 +ENVDIR="$4" +# 4...end are package requirements +shift 4 TOXINIDIR="$(cd "$(dirname "$0")" && pwd)" @@ -25,10 +26,21 @@ if [ ! -f "${TOXINIDIR}/tox.ini" ]; then exit 3 fi +if [ ! 
-d "${ENVDIR}" ]; then + echo "${ENVDIR}: no such directory" + exit 4 +fi + # https://pip.pypa.io/en/stable/user_guide/#environment-variables export PIP_CACHE_DIR="${TOXINIDIR}/.tox/cache" mkdir -p "${PIP_CACHE_DIR}" +# /tmp could be mounted with noexec option. +# pip checks if path is executable and if not then doesn't set such +# permission bits +export PIP_BUILD="${ENVDIR}/pip_build" +rm -rf "${PIP_BUILD}" + DISTBUNDLE="${TOXINIDIR}/dist/bundle" mkdir -p "${DISTBUNDLE}" diff --git a/tox.ini b/tox.ini index 1905be6523..19abffbc80 100644 --- a/tox.ini +++ b/tox.ini @@ -8,7 +8,7 @@ skipsdist=true # always re-create virtual env. A special install helper is used to configure, # build and install packages. recreate=True -install_command={toxinidir}/.tox-install.sh wheel_bundle {envpython} {envsitepackagesdir} {packages} +install_command={toxinidir}/.tox-install.sh wheel_bundle {envpython} {envsitepackagesdir} {envdir} {packages} changedir={envdir} setenv= HOME={envtmpdir} @@ -17,7 +17,7 @@ deps= ipatests commands= {envbindir}/ipa --help - {envpython} -bb {envbindir}/ipa-run-tests --ipaclient-unittests --junitxml={envdir}/junit-{envname}.xml + {envbindir}/ipa-run-tests --junitxml={envdir}/junit-{envname}.xml {posargs:--ipaclient-unittests} [testenv:pylint3] basepython=python3 @@ -34,7 +34,7 @@ commands= [testenv:pypi] recreate=True -install_command={toxinidir}/.tox-install.sh pypi_packages {envpython} {envsitepackagesdir} {packages} +install_command={toxinidir}/.tox-install.sh pypi_packages {envpython} {envsitepackagesdir} {envdir} {packages} changedir={envdir} setenv= HOME={envtmpdir} @@ -46,7 +46,7 @@ deps= ipaserver ipatests commands= - {envpython} -m pytest {toxinidir}/pypi/test_placeholder.py + {envpython} -m pytest {posargs:{toxinidir}/pypi/test_placeholder.py} [pycodestyle] # E402 module level import not at top of file