Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-testflo for openSUSE:Factory checked in at 2022-10-12 18:24:24
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-testflo (Old)
 and      /work/SRC/openSUSE:Factory/.python-testflo.new.2275 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-testflo"

Wed Oct 12 18:24:24 2022 rev:8 rq:1009837 version:1.4.9

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-testflo/python-testflo.changes    2021-07-14 23:59:27.213309046 +0200
+++ /work/SRC/openSUSE:Factory/.python-testflo.new.2275/python-testflo.changes  2022-10-12 18:25:55.473849307 +0200
@@ -1,0 +2,9 @@
+Tue Oct 11 15:53:25 UTC 2022 - Yogalakshmi Arunachalam <yarunacha...@suse.com>
+
+- Update to version 1.4.9
+  * added --durations option and fixed some config file issues #71
+  * added --durations option that prints the n longest running tests (similar to pytest)
+  * fixed the way the .testflo config file is processed so it should support setting any, or at least most of the command line options
+  * added the --skip_dirs command line option (skip_dirs could be defined in the config file but it wasn't a valid command line option)
+
+-------------------------------------------------------------------
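The changelog's config-file fix means options in a `.testflo` file are parsed much like their command-line counterparts. A minimal sketch of reading such a file with the stdlib `configparser` (the option values here are hypothetical, chosen for illustration):

```python
import configparser
import os
import tempfile

# a minimal .testflo-style config; option names mirror the CLI dests
cfg_text = """\
[testflo]
skip_dirs = site-packages, build
num_procs = 1
"""

with tempfile.NamedTemporaryFile('w', suffix='.testflo', delete=False) as f:
    f.write(cfg_text)
    path = f.name

config = configparser.ConfigParser()
with open(path) as fh:
    config.read_file(fh, source=path)

# split the comma-separated dir list and coerce the int option
skip_dirs = [s.strip() for s in config.get('testflo', 'skip_dirs').split(',')]
num_procs = config.getint('testflo', 'num_procs')
os.unlink(path)
```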

Old:
----
  testflo-1.4.2.tar.gz

New:
----
  testflo-1.4.9.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-testflo.spec ++++++
--- /var/tmp/diff_new_pack.q0rXRi/_old  2022-10-12 18:25:56.749852495 +0200
+++ /var/tmp/diff_new_pack.q0rXRi/_new  2022-10-12 18:25:56.753852505 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package python-testflo
 #
-# Copyright (c) 2021 SUSE LLC
+# Copyright (c) 2022 SUSE LLC
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -19,7 +19,7 @@
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 %define skip_python36 1
 Name:           python-testflo
-Version:        1.4.2
+Version:        1.4.9
 Release:        0
 Summary:        A flow-based testing framework
 License:        Apache-2.0

++++++ testflo-1.4.2.tar.gz -> testflo-1.4.9.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/PKG-INFO new/testflo-1.4.9/PKG-INFO
--- old/testflo-1.4.2/PKG-INFO  2020-06-10 18:17:38.000000000 +0200
+++ new/testflo-1.4.9/PKG-INFO  2022-07-25 20:09:09.121010300 +0200
@@ -1,73 +1,8 @@
-Metadata-Version: 1.1
+Metadata-Version: 2.1
 Name: testflo
-Version: 1.4.2
+Version: 1.4.9
 Summary: A simple flow-based testing framework
-Home-page: UNKNOWN
-Author: UNKNOWN
-Author-email: UNKNOWN
 License: Apache 2.0
-Description: 
-                usage: testflo [options]
-
-                positional arguments:
-                  test                  A test method, test case, module, or directory to run.
-
-                optional arguments:
-                  -h, --help            show this help message and exit
-                  -c FILE, --config FILE
-                                        Path of config file where preferences are specified.
-                  -t FILE, --testfile FILE
-                                        Path to a file containing one testspec per line.
-                  --maxtime TIME_LIMIT  Specifies a time limit in seconds for tests to be
-                                        saved to the quicktests.in file.
-                  -n NUM_PROCS, --numprocs NUM_PROCS
-                                        Number of processes to run. By default, this will use
-                                        the number of CPUs available. To force serial
-                                        execution, specify a value of 1.
-                  -o FILE, --outfile FILE
-                                        Name of test report file. Default is
-                                        testflo_report.out.
-                  -v, --verbose         Include testspec and elapsed time in screen output.
-                                        Also shows all stderr output, even if test doesn't
-                                        fail
-                  --compact             Limit output to a single character for each test.
-                  --dryrun              Don't actually run tests, but print which tests would
-                                        have been run.
-                  --pre_announce        Announce the name of each test before it runs. This
-                                        can help track down a hanging test. This automatically
-                                        sets -n 1.
-                  -f, --fail            Save failed tests to failtests.in file.
-                  --full_path           Display full test specs instead of shortened names.
-                  -i, --isolated        Run each test in a separate subprocess.
-                  --nompi               Force all tests to run without MPI. This can be useful
-                                        for debugging.
-                  -x, --stop            Stop after the first test failure, or as soon as
-                                        possible when running concurrent tests.
-                  -s, --nocapture       Standard output (stdout) will not be captured and will
-                                        be written to the screen immediately.
-                  --coverage            Perform coverage analysis and display results on
-                                        stdout
-                  --coverage-html       Perform coverage analysis and display results in
-                                        browser
-                  --coverpkg PKG        Add the given package to the coverage list. You can
-                                        use this option multiple times to cover multiple
-                                        packages.
-                  --cover-omit FILE     Add a file name pattern to remove it from coverage.
-                  -b, --benchmark       Specifies that benchmarks are to be run rather than
-                                        tests, so only files starting with "benchmark\_" will
-                                        be executed.
-                  -d FILE, --datafile FILE
-                                        Name of benchmark data file. Default is
-                                        benchmark_data.csv.
-                  --noreport            Don't create a test results file.
-                  -m GLOB, --match GLOB, --testmatch GLOB
-                                        Pattern to use for test discovery. Multiple patterns
-                                        are allowed.
-                  --timeout TIMEOUT     Timeout in seconds. Test will be terminated if it
-                                        takes longer than timeout. Only works for tests
-                                        running in a subprocess (MPI and isolated).
-              
-Platform: UNKNOWN
 Classifier: Development Status :: 4 - Beta
 Classifier: License :: OSI Approved :: Apache Software License
 Classifier: Natural Language :: English
@@ -79,3 +14,66 @@
 Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: 3.8
 Classifier: Programming Language :: Python :: Implementation :: CPython
+License-File: LICENSE.txt
+
+
+        usage: testflo [options]
+
+        positional arguments:
+          test                  A test method, test case, module, or directory to run.
+
+        optional arguments:
+          -h, --help            show this help message and exit
+          -c FILE, --config FILE
+                                Path of config file where preferences are specified.
+          -t FILE, --testfile FILE
+                                Path to a file containing one testspec per line.
+          --maxtime TIME_LIMIT  Specifies a time limit in seconds for tests to be
+                                saved to the quicktests.in file.
+          -n NUM_PROCS, --numprocs NUM_PROCS
+                                Number of processes to run. By default, this will use
+                                the number of CPUs available. To force serial
+                                execution, specify a value of 1.
+          -o FILE, --outfile FILE
+                                Name of test report file. Default is
+                                testflo_report.out.
+          -v, --verbose         Include testspec and elapsed time in screen output.
+                                Also shows all stderr output, even if test doesn't
+                                fail
+          --compact             Limit output to a single character for each test.
+          --dryrun              Don't actually run tests, but print which tests would
+                                have been run.
+          --pre_announce        Announce the name of each test before it runs. This
+                                can help track down a hanging test. This automatically
+                                sets -n 1.
+          -f, --fail            Save failed tests to failtests.in file.
+          --full_path           Display full test specs instead of shortened names.
+          -i, --isolated        Run each test in a separate subprocess.
+          --nompi               Force all tests to run without MPI. This can be useful
+                                for debugging.
+          -x, --stop            Stop after the first test failure, or as soon as
+                                possible when running concurrent tests.
+          -s, --nocapture       Standard output (stdout) will not be captured and will
+                                be written to the screen immediately.
+          --coverage            Perform coverage analysis and display results on
+                                stdout
+          --coverage-html       Perform coverage analysis and display results in
+                                browser
+          --coverpkg PKG        Add the given package to the coverage list. You can
+                                use this option multiple times to cover multiple
+                                packages.
+          --cover-omit FILE     Add a file name pattern to remove it from coverage.
+          -b, --benchmark       Specifies that benchmarks are to be run rather than
+                                tests, so only files starting with "benchmark\_" will
+                                be executed.
+          -d FILE, --datafile FILE
+                                Name of benchmark data file. Default is
+                                benchmark_data.csv.
+          --noreport            Don't create a test results file.
+          -m GLOB, --match GLOB, --testmatch GLOB
+                                Pattern to use for test discovery. Multiple patterns
+                                are allowed.
+          --timeout TIMEOUT     Timeout in seconds. Test will be terminated if it
+                                takes longer than timeout. Only works for tests
+                                running in a subprocess (MPI and isolated).
+      
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/setup.py new/testflo-1.4.9/setup.py
--- old/testflo-1.4.2/setup.py  2020-06-10 17:14:57.000000000 +0200
+++ new/testflo-1.4.9/setup.py  2022-07-25 18:54:24.000000000 +0200
@@ -3,7 +3,7 @@
 import re
 
 __version__ = re.findall(
-    r"""__version__ = ["']+([0-9\.]*)["']+""",
+    r"""__version__ = ["']+([0-9\.\-dev]*)["']+""",
     open('testflo/__init__.py').read(),
 )[0]
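The widened character class in the version regex lets setup.py pick up dev-suffixed versions as well; a quick check of both patterns against a hypothetical dev version string:

```python
import re

src = "__version__ = '1.4.9-dev'"

old_pat = r"""__version__ = ["']+([0-9\.]*)["']+"""
new_pat = r"""__version__ = ["']+([0-9\.\-dev]*)["']+"""

old_match = re.findall(old_pat, src)  # '-dev' is outside the old class, so no match
new_match = re.findall(new_pat, src)  # the new class admits '-', 'd', 'e', 'v'
```

Note the class is permissive (it matches the letters d, e, v in any order), which is good enough for version strings in practice.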
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo/__init__.py new/testflo-1.4.9/testflo/__init__.py
--- old/testflo-1.4.2/testflo/__init__.py       2020-06-10 18:14:32.000000000 +0200
+++ new/testflo-1.4.9/testflo/__init__.py       2022-07-25 20:08:14.000000000 +0200
@@ -1 +1 @@
-__version__ = '1.4.2'
+__version__ = '1.4.9'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo/discover.py new/testflo-1.4.9/testflo/discover.py
--- old/testflo-1.4.2/testflo/discover.py       2020-06-10 17:14:57.000000000 +0200
+++ new/testflo-1.4.9/testflo/discover.py       2021-12-12 23:38:56.000000000 +0100
@@ -6,7 +6,7 @@
 
 from os.path import basename, dirname, isdir
 
-from testflo.util import find_files, get_module, ismethod
+from testflo.util import find_files, get_module, get_testpath, ismethod
 from testflo.test import Test
 
 def _has_class_fixture(tcase):
@@ -174,7 +174,7 @@
         file system path to the .py file.
         """
 
-        module, _, rest = testspec.partition(':')
+        module, rest = get_testpath(testspec)
         if rest:
             tcasename, _, method = rest.partition('.')
             if method:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo/duration.py new/testflo-1.4.9/testflo/duration.py
--- old/testflo-1.4.2/testflo/duration.py       1970-01-01 01:00:00.000000000 +0100
+++ new/testflo-1.4.9/testflo/duration.py       2022-07-21 20:19:19.000000000 +0200
@@ -0,0 +1,45 @@
+import sys
+import os
+
+class DurationSummary(object):
+    """Writes a summary of the tests taking the longest time."""
+
+    def __init__(self, options, stream=sys.stdout):
+        self.stream = stream
+        self.options = options
+        self.startdir = os.getcwd()
+
+    def get_iter(self, input_iter):
+        durations = []
+
+        for test in input_iter:
+            durations.append((test.spec, test.end_time - test.start_time))
+            yield test
+
+        write = self.stream.write
+        mintime = self.options.durations_min
+
+        if mintime > 0.:
+            title = " Max duration tests with duration >= {} sec ".format(mintime)
+        else:
+            title = " Max duration tests "
+
+        eqs = "=" * 16
+
+        write("\n\n{}{}{}\n\n".format(eqs, title, eqs))
+        count = self.options.durations
+
+        for spec, duration in sorted(durations, key=lambda t: t[1], reverse=True):
+            if duration < mintime:
+                break
+
+            if spec.startswith(self.startdir):
+                spec = spec[len(self.startdir):]
+
+            write("{:8.3f} sec - {}\n".format(duration, spec))
+
+            count -= 1
+            if count <= 0:
+                break
+
+        write("\n" + "=" * (len(title) + 2 * len(eqs)) + "\n")
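The reporting loop in the new duration.py boils down to: sort the collected (spec, duration) pairs longest-first, stop at the duration floor, and stop after `--durations` entries. A standalone sketch of that selection (the function name is hypothetical):

```python
def top_durations(durations, count, mintime=0.0):
    """Return up to `count` (spec, duration) pairs, longest first,
    skipping anything faster than `mintime`."""
    out = []
    for spec, duration in sorted(durations, key=lambda t: t[1], reverse=True):
        # the list is sorted, so the first too-fast entry ends the report
        if duration < mintime or len(out) >= count:
            break
        out.append((spec, duration))
    return out
```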
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo/main.py new/testflo-1.4.9/testflo/main.py
--- old/testflo-1.4.2/testflo/main.py   2020-06-10 17:14:57.000000000 +0200
+++ new/testflo-1.4.9/testflo/main.py   2022-07-25 18:54:24.000000000 +0200
@@ -32,14 +32,16 @@
 
 from fnmatch import fnmatch, fnmatchcase
 
+import testflo
 from testflo.runner import ConcurrentTestRunner
 from testflo.printer import ResultPrinter
 from testflo.benchmark import BenchmarkWriter
 from testflo.summary import ResultSummary
+from testflo.duration import DurationSummary
 from testflo.discover import TestDiscoverer
 from testflo.filters import TimeFilter, FailFilter
 
-from testflo.util import read_config_file, read_test_file
+from testflo.util import read_config_file, read_test_file, _get_parser
 from testflo.cover import setup_coverage, finalize_coverage
 from testflo.options import get_options
 from testflo.qman import get_server_queue
@@ -59,7 +61,7 @@
             yield test
 
 
-def run_pipeline(source, pipe):
+def run_pipeline(source, pipe, disallow_skipped):
     """Run a pipeline of test iteration objects."""
 
     global _start_time
@@ -71,12 +73,21 @@
     for i,p in enumerate(pipe):
         iters.append(p(iters[i]))
 
-    return_code = 0
-
+    n_failed = 0
+    n_skipped = 0
     # iterate over the last iter in the pipeline and we're done
     for result in iters[-1]:
         if result.status == 'FAIL' and not result.expected_fail:
-            return_code = 1
+            n_failed += 1
+        elif result.status == 'SKIP':
+            n_skipped += 1
+
+    if n_failed > 0:
+        return_code = 1
+    elif n_skipped > 1 and disallow_skipped:
+        return_code = 2
+    else:
+        return_code = 0
 
     return return_code
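The new return-code logic can be isolated as below; note that as written upstream, the check is `n_skipped > 1`, so a single skipped test does not trigger the skip exit code even with `--disallow_skipped`:

```python
def pipeline_return_code(n_failed, n_skipped, disallow_skipped):
    # mirrors run_pipeline: failures win, then (multiple) skips, then success
    if n_failed > 0:
        return 1
    elif n_skipped > 1 and disallow_skipped:
        return 2
    return 0
```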
 
@@ -86,6 +97,11 @@
         args = sys.argv[1:]
 
     options = get_options(args)
+
+    if options.version:
+        print("testflo version %s" % testflo.__version__)
+        return 0
+
     nprocs = options.num_procs
 
     options.skip_dirs = []
@@ -100,6 +116,7 @@
 skip_dirs=site-packages,
     dist-packages,
     build,
+    _build,
     contrib
 """)
     read_config_file(rcfile, options)
@@ -124,11 +141,15 @@
         tests = [os.getcwd()]
 
     def dir_exclude(d):
+        base = os.path.basename(d)
         for skip in options.skip_dirs:
-            if fnmatch(os.path.basename(d), skip):
+            if fnmatch(base, skip):
                 return True
         return False
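Because the refactored `dir_exclude` matches patterns against the basename only, a pattern like `build` skips any directory named exactly `build` at any depth, but not, say, `buildings`. A self-contained sketch of the same check:

```python
import os
from fnmatch import fnmatch

def dir_exclude(d, skip_dirs):
    # match skip patterns against the local directory name, not the full path
    base = os.path.basename(d)
    return any(fnmatch(base, skip) for skip in skip_dirs)
```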
 
+    # set this so code will know when it's running under testflo
+    os.environ['TESTFLO_RUNNING'] = '1'
+
     setup_coverage(options)
 
     if options.noreport:
@@ -188,6 +209,9 @@
             if options.benchmark:
                 pipeline.append(BenchmarkWriter(stream=bdata).get_iter)
 
+            if options.durations:
+                pipeline.append(DurationSummary(options).get_iter)
+
             if options.compact:
                 verbose = -1
             else:
@@ -199,6 +223,9 @@
             ])
             if not options.noreport:
                 # print verbose results and summary to a report file
+                if options.durations:
+                    pipeline.append(DurationSummary(options, stream=report).get_iter)
+
                 pipeline.extend([
                     ResultPrinter(options, report, verbose=1).get_iter,
                     ResultSummary(options, stream=report).get_iter,
@@ -210,7 +237,7 @@
         if options.save_fails:
             pipeline.append(FailFilter().get_iter)
 
-        retval = run_pipeline(tests, pipeline)
+        retval = run_pipeline(tests, pipeline, options.disallow_skipped)
 
         finalize_coverage(options)
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo/mpirun.py new/testflo-1.4.9/testflo/mpirun.py
--- old/testflo-1.4.2/testflo/mpirun.py 2020-06-10 17:14:57.000000000 +0200
+++ new/testflo-1.4.9/testflo/mpirun.py 2022-07-25 18:54:24.000000000 +0200
@@ -10,6 +10,8 @@
     import os
     import traceback
 
+    os.environ['OPENMDAO_USE_MPI'] = '1'
+
     from mpi4py import MPI
     from testflo.test import Test
     from testflo.cover import setup_coverage, save_coverage
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo/test.py new/testflo-1.4.9/testflo/test.py
--- old/testflo-1.4.2/testflo/test.py   2020-06-10 17:14:57.000000000 +0200
+++ new/testflo-1.4.9/testflo/test.py   2022-07-21 20:16:33.000000000 +0200
@@ -8,6 +8,7 @@
 import subprocess
 from tempfile import mkstemp
 from importlib import import_module
+from contextlib import contextmanager
 
 from types import FunctionType, ModuleType
 from io import StringIO
@@ -18,7 +19,7 @@
 from testflo.cover import start_coverage, stop_coverage
 
 from testflo.util import get_module, ismethod, get_memory_usage, \
-                         _options2args
+                         get_testpath, _options2args
 from testflo.devnull import DevNull
 
 
@@ -51,22 +52,21 @@
 _testing_path = ['.'] + sys.path
 
 
-class TestContext(object):
-    """Supports using the 'with' statement in place of try-finally to
-    set sys.path for a test.
-    """
+@contextmanager
+def testcontext(test):
+    global _testing_path
+    old_sys_path = sys.path
 
-    def __init__(self, test):
-        self.test = test
-        self.old_sys_path = sys.path
-
-    def __enter__(self):
-        global _testing_path
-        _testing_path[0] = self.test.test_dir
-        sys.path = _testing_path
+    _testing_path[0] = test.test_dir
+    sys.path = _testing_path
 
-    def __exit__(self, exc_type, exc_val, exc_tb):
-        sys.path = self.old_sys_path
+    try:
+        yield
+    except Exception:
+        test.status = 'FAIL'
+        test.err_msg = traceback.format_exc()
+    finally:
+        sys.path = old_sys_path
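The `@contextmanager` rewrite above replaces the old class's `__enter__`/`__exit__` pair with a generator. The save-and-restore pattern it relies on can be sketched independently (the name `pathcontext` is hypothetical, and this sketch omits the failure-recording that the real `testcontext` does on the test object):

```python
import sys
from contextlib import contextmanager

@contextmanager
def pathcontext(test_dir):
    """Temporarily put `test_dir` at the front of sys.path."""
    old_sys_path = sys.path
    sys.path = [test_dir] + old_sys_path
    try:
        yield
    finally:
        # always restore, even if the body raised
        sys.path = old_sys_path
```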
 
 
 class Test(object):
@@ -78,7 +78,9 @@
     def __init__(self, testspec, options):
         self.spec = testspec
         self.options = options
-        self.test_dir = os.path.dirname(testspec.split(':',1)[0])
+
+        testpath, rest = get_testpath(testspec)
+        self.test_dir = os.path.dirname(testpath)
 
         self.status = None
         self.err_msg = ''
@@ -111,7 +113,7 @@
         """Get the test's module, testcase (if any), function name,
         N_PROCS (for mpi tests) and ISOLATED and set our attributes.
         """
-        with TestContext(self):
+        with testcontext(self):
             try:
                 mod, self.tcasename, self.funcname = _parse_test_path(self.spec)
                 self.modpath = mod.__name__
@@ -223,18 +225,20 @@
             return self
 
         MPI = None
-        if queue is not None and self.nprocs > 0 and not self.options.nompi:
+        if self.nprocs > 0 and not self.options.nompi:
             try:
                 from mpi4py import MPI
             except ImportError:
                 pass
             else:
-                return self._run_mpi(queue)
+                if queue is not None:
+                    return self._run_mpi(queue)
         elif self.options.isolated:
             return self._run_isolated(queue)
 
-        with TestContext(self):
-            mod = import_module(self.modpath)
+        with testcontext(self):
+            testpath, _ = get_testpath(self.spec)
+            _, mod = get_module(testpath)
 
             testcase = getattr(mod, self.tcasename) if self.tcasename is not None else None
             funcname, nprocs = (self.funcname, self.nprocs)
@@ -353,9 +357,9 @@
         """Returns the testspec with only the file's basename instead
         of its full path.
         """
-        parts = self.spec.split(':', 1)
-        fname = os.path.basename(parts[0])
-        return ':'.join((fname, parts[-1]))
+        testpath, rest = get_testpath(self.spec)
+        fname = os.path.basename(testpath)
+        return ':'.join((fname, rest))
 
     def __str__(self):
         if self.err_msg:
@@ -378,27 +382,16 @@
     file system path to the .py file.  A value of None in the tuple
     indicates that that part of the testspec was not present.
     """
+    testpath, rest = get_testpath(testspec)
+    _, mod = get_module(testpath)
 
-    testcase = funcname = tcasename = None
-    testspec = testspec.strip()
-    parts = testspec.split(':')
-    if len(parts) > 1 and parts[1].startswith('\\'):  # windows abs path
-        module = ':'.join(parts[:2])
-        if len(parts) == 3:
-            rest = parts[2]
-        else:
-            rest = ''
-    else:
-        module, _, rest = testspec.partition(':')
-
-    _, mod = get_module(module)
+    funcname = tcasename = None
 
     if rest:
         objname, _, funcname = rest.partition('.')
         obj = getattr(mod, objname)
         if isclass(obj) and issubclass(obj, TestCase):
             tcasename = objname
-            testcase = obj
             if funcname:
                 meth = getattr(obj, funcname)
                 if not ismethod(meth):
@@ -429,7 +422,7 @@
     except _UnexpectedSuccess:
         status = 'OK'
         expected = True
-    except Exception as err:
+    except Exception:
         status = 'FAIL'
         sys.stderr.write(traceback.format_exc())
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo/util.py new/testflo-1.4.9/testflo/util.py
--- old/testflo-1.4.2/testflo/util.py   2020-06-10 17:14:57.000000000 +0200
+++ new/testflo-1.4.9/testflo/util.py   2022-07-21 20:19:19.000000000 +0200
@@ -6,6 +6,7 @@
 import sys
 import itertools
 import inspect
+import importlib
 import warnings
 from importlib import import_module
 
@@ -14,7 +15,7 @@
 from fnmatch import fnmatch
 from os.path import join, dirname, basename, isfile,  abspath, split, splitext
 
-from argparse import ArgumentParser
+from argparse import ArgumentParser, _AppendAction
 
 from testflo.cover import start_coverage, stop_coverage
 
@@ -26,6 +27,8 @@
 
     parser = ArgumentParser()
     parser.usage = "testflo [options]"
+    parser.add_argument('--version', action='store_true', dest='version',
+                        help="Display the version number and exit.")
     parser.add_argument('-c', '--config', action='store', dest='cfg',
                         metavar='FILE',
                        help='Path of config file where preferences are specified.')
@@ -38,10 +41,10 @@
                              'the quicktests.in file.')
 
     parser.add_argument('-n', '--numprocs', type=int, action='store',
-                        dest='num_procs', metavar='NUM_PROCS',
-                        help='Number of processes to run. By default, this will '
-                             'use the number of CPUs available.  To force serial'
-                             ' execution, specify a value of 1.')
+                        dest='num_procs', metavar='NUM_TEST_PROCS',
+                        help='Number of concurrent test processes to run. By default, this will '
+                             'use the number of virtual processors available.  To force tests to '
+                             'run consecutively, specify a value of 1.')
     parser.add_argument('-o', '--outfile', action='store', dest='outfile',
                         metavar='FILE', default='testflo_report.out',
                        help='Name of test report file.  Default is testflo_report.out.')
@@ -92,14 +95,26 @@
                         metavar='FILE', default='benchmark_data.csv',
                        help='Name of benchmark data file.  Default is benchmark_data.csv.')
 
+    parser.add_argument('--durations', action='store', type=int, dest='durations', default=0,
+                        metavar='NUM',
+                        help="Display 'NUM' tests with longest durations.")
+
+    parser.add_argument('--durations-min', action='store', type=float, dest='durations_min',
+                        default=0.005, metavar='MIN_TIME',
+                        help='Specify the minimum duration test to include in the durations list.')
+
     parser.add_argument('--noreport', action='store_true', dest='noreport',
                         help="Don't create a test results file.")
 
     parser.add_argument('--show_skipped', action='store_true', dest='show_skipped',
                         help="Display a list of any skipped tests in the summary.")
 
+    parser.add_argument('--disallow_skipped', action='store_true', dest='disallow_skipped',
+                        help="Return exit code 2 if no tests failed but some tests are skipped.")
+
     parser.add_argument('tests', metavar='test', nargs='*',
-                        help='A test method, test case, module, or directory to run.')
+                        help='A test method, test case, module, or directory to run. If not '
+                             'supplied, the current working directory is assumed.')
 
     parser.add_argument('-m', '--match', '--testmatch', action='append', dest='test_glob',
                         metavar='GLOB',
@@ -109,12 +124,18 @@
     parser.add_argument('--exclude', action='append', dest='excludes', metavar='GLOB', default=[],
                         help="Pattern to exclude test functions. Multiple patterns are allowed.")
 
-    parser.add_argument('--timeout', action='store', dest='timeout', type=float,
-                        help='Timeout in seconds. Test will be terminated if it takes longer than timeout. Only'
-                             ' works for tests running in a subprocess (MPI and isolated).')
+    parser.add_argument('--skip_dir', action='append', dest='skip_dirs', metavar='GLOB', default=[],
+                        help="Pattern to skip directories. Multiple patterns are allowed. Patterns "
+                        "are applied only to local dir names, not full paths.")
+
+    parser.add_argument('--timeout', action='store', dest='timeout', type=float, metavar='TIME_LIMIT',
+                        help="Timeout in seconds. A test will be terminated if it takes longer than "
+                             "'TIME_LIMIT'. Only works for tests running in a subprocess "
+                             "(MPI or isolated).")
 
     return parser
 
+
 def _options2args():
     """Gets the testflo args that should be used in subprocesses."""
 
@@ -250,10 +271,13 @@
         pnames = []
     else:
         pnames = [splitext(basename(fpath))[0]]
+
     path = dirname(abspath(fpath))
+
     while isfile(join(path, '__init__.py')):
-            path, pname = split(path)
-            pnames.append(pname)
+        path, pname = split(path)
+        pnames.append(pname)
+
     return '.'.join(pnames[::-1])
 
 
@@ -268,44 +292,40 @@
     return pdirs[::-1]
 
 
+def get_testpath(testspec):
+    """Return the path to the test module separated from
+    the rest of the test spec.
+    """
+    testspec = testspec.strip()
+    parts = testspec.split(':')
+    if len(parts) > 1 and parts[1].startswith('\\'):  # windows abs path
+        path = ':'.join(parts[:2])
+        if len(parts) == 3:
+            rest = parts[2]
+        else:
+            rest = ''
+    else:
+        path, _, rest = testspec.partition(':')
+    return path, rest
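The new `get_testpath` exists mainly so a Windows drive letter (`C:\...`) is not mistaken for the `path:TestCase.method` separator. The same logic, exercised standalone:

```python
def get_testpath(testspec):
    """Split a testspec into (module path, remainder), tolerating
    a Windows drive letter inside the path."""
    testspec = testspec.strip()
    parts = testspec.split(':')
    if len(parts) > 1 and parts[1].startswith('\\'):  # windows abs path
        # rejoin the drive letter with the path portion
        path = ':'.join(parts[:2])
        rest = parts[2] if len(parts) == 3 else ''
    else:
        path, _, rest = testspec.partition(':')
    return path, rest
```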
+
+
 def find_module(name):
     """Return the pathname of the Python file corresponding to the
     given module name, or None if it can't be found. The
     file must be an uncompiled Python (.py) file.
     """
+    try:
+        info = importlib.util.find_spec(name)
+    except ImportError:
+        info = None
+    if info is not None:
+        return info.origin
 
-    nameparts = name.split('.')
-
-    endings = [join(*nameparts)]
-    endings.append(join(endings[0], '__init__.py'))
-    endings[0] += '.py'
-
-    for entry in sys.path:
-        for ending in endings:
-            f = join(entry, ending)
-            if isfile(f):
-                return f
-    return None
-
-
-def get_module(fname):
-    """Given a filename or module path name, return a tuple
-    of the form (filename, module).
-    """
-
-    if fname.endswith('.py'):
-        modpath = fpath2modpath(fname)
-        if not modpath:
-            raise RuntimeError("can't find module %s" % fname)
-    else:
-        modpath = fname
-        fname = find_module(modpath)
 
-        if not fname:
-            raise ImportError("can't import %s" % modpath)
+_mod2file = {}  # keep track of non-pkg files to detect and flag dups
 
-    start_coverage()
 
+def try_import(fname, modpath):
     try:
         mod = import_module(modpath)
     except ImportError:
@@ -325,6 +345,48 @@
             del sys.modules[modpath]
         finally:
             sys.path = oldpath
+
+    return mod
+
+
+def get_module(fname):
+    """Given a filename or module path name, return a tuple
+    of the form (filename, module).
+    """
+
+    if fname.endswith('.py'):
+        modpath = fpath2modpath(fname)
+        if not modpath:
+            raise RuntimeError("can't find module %s" % fname)
+
+        if modpath in _mod2file:
+            old = _mod2file[modpath]
+            if old != fname:
+                raise RuntimeError("module '%s' was already imported earlier from file '%s' so "
+                                    "it can't be imported from file '%s'. To fix this problem, "
+                                    "either rename the file or add the file to a python package "
+                                    "so the resulting module path will be unique." %
+                                    (modpath, old, fname))
+        else:
+            _mod2file[modpath] = fname
+
+    else:
+        modpath = fname
+        fname = find_module(modpath)
+
+        if fname:
+            _mod2file[modpath] = fname
+        else:
+            # check for a non-pkg module
+            if modpath in _mod2file:
+                fname = _mod2file[modpath]
+            else:
+                raise ImportError("can't import %s" % modpath)
+
+    start_coverage()
+
+    try:
+        mod = try_import(fname, modpath)
     finally:
         stop_coverage()
 
@@ -344,19 +406,43 @@
                 yield line
 
 
+_parser_types = None
+
+
+def _get_parser_action_map():
+    global _parser_types
+
+    if _parser_types is None:
+        _parser_types = {}
+        p = _get_parser()
+        for action in p._actions:
+            _parser_types[action.dest] = action
+
+    return _parser_types
+
+
 def read_config_file(cfgfile, options):
     config = ConfigParser()
-    config.readfp(open(cfgfile))
-
-    if config.has_option('testflo', 'skip_dirs'):
-        skips = config.get('testflo', 'skip_dirs')
-        options.skip_dirs = [s.strip() for s in skips.split(',') if s.strip()]
+    config.read_file(open(cfgfile), source=cfgfile)
 
-    if config.has_option('testflo', 'num_procs'):
-        options.num_procs = int(config.get('testflo', 'num_procs'))
+    if 'testflo' in config:
+        parser_map = _get_parser_action_map()
 
-    if config.has_option('testflo', 'noreport'):
-        options.noreport = bool(config.get('testflo', 'noreport'))
+        for name, optstr in config['testflo'].items():
+            if name not in parser_map:
+                warnings.warn("Unknown option '{}' in testflo config file '{}'.".format(name, cfgfile))
+                continue
+
+            action = parser_map[name]
+            typ = action.type
+            if typ is None:
+                typ = lambda x: x
+
+            if isinstance(action, _AppendAction):
+                setattr(options, name, [typ(s.strip()) for s in optstr.split(',') if s.strip()])
+            else:
+                setattr(options, name, typ(optstr))
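The rewritten `read_config_file` above stops hard-coding individual options: it builds a dest-to-action map from the argparse parser and converts each `[testflo]` config value with the matching action's `type`, treating append actions as comma-separated lists. A runnable sketch of that pattern, using a stand-in parser rather than testflo's real `_get_parser()`:

```python
import argparse
import configparser
import io
import warnings

# Stand-in parser; testflo builds its real one in _get_parser().
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--numprocs', dest='num_procs', type=int)
parser.add_argument('--skip_dirs', dest='skip_dirs', action='append')

# dest -> argparse action, as in _get_parser_action_map() above
action_map = {a.dest: a for a in parser._actions}

def read_config(text, options):
    cfg = configparser.ConfigParser()
    cfg.read_file(io.StringIO(text), source='.testflo')
    for name, optstr in cfg['testflo'].items():
        if name not in action_map:
            warnings.warn("Unknown option '%s' in testflo config file." % name)
            continue
        action = action_map[name]
        typ = action.type or (lambda x: x)
        if isinstance(action, argparse._AppendAction):
            # append actions accept comma-separated lists in the config file
            setattr(options, name, [typ(s.strip()) for s in optstr.split(',') if s.strip()])
        else:
            setattr(options, name, typ(optstr))

opts = argparse.Namespace()
read_config("[testflo]\nnum_procs = 4\nskip_dirs = build, dist\n", opts)
print(opts.num_procs, opts.skip_dirs)  # 4 ['build', 'dist']
```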
 
 
 def get_memory_usage():
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo.egg-info/PKG-INFO new/testflo-1.4.9/testflo.egg-info/PKG-INFO
--- old/testflo-1.4.2/testflo.egg-info/PKG-INFO 2020-06-10 18:17:37.000000000 +0200
+++ new/testflo-1.4.9/testflo.egg-info/PKG-INFO 2022-07-25 20:09:09.000000000 +0200
@@ -1,73 +1,8 @@
-Metadata-Version: 1.1
+Metadata-Version: 2.1
 Name: testflo
-Version: 1.4.2
+Version: 1.4.9
 Summary: A simple flow-based testing framework
-Home-page: UNKNOWN
-Author: UNKNOWN
-Author-email: UNKNOWN
 License: Apache 2.0
-Description: 
-                usage: testflo [options]
-        
-                positional arguments:
-                  test                  A test method, test case, module, or directory to run.
-        
-                optional arguments:
-                  -h, --help            show this help message and exit
-                  -c FILE, --config FILE
-                                        Path of config file where preferences are specified.
-                  -t FILE, --testfile FILE
-                                        Path to a file containing one testspec per line.
-                  --maxtime TIME_LIMIT  Specifies a time limit in seconds for tests to be
-                                        saved to the quicktests.in file.
-                  -n NUM_PROCS, --numprocs NUM_PROCS
-                                        Number of processes to run. By default, this will use
-                                        the number of CPUs available. To force serial
-                                        execution, specify a value of 1.
-                  -o FILE, --outfile FILE
-                                        Name of test report file. Default is
-                                        testflo_report.out.
-                  -v, --verbose         Include testspec and elapsed time in screen output.
-                                        Also shows all stderr output, even if test doesn't
-                                        fail
-                  --compact             Limit output to a single character for each test.
-                  --dryrun              Don't actually run tests, but print which tests would
-                                        have been run.
-                  --pre_announce        Announce the name of each test before it runs. This
-                                        can help track down a hanging test. This automatically
-                                        sets -n 1.
-                  -f, --fail            Save failed tests to failtests.in file.
-                  --full_path           Display full test specs instead of shortened names.
-                  -i, --isolated        Run each test in a separate subprocess.
-                  --nompi               Force all tests to run without MPI. This can be useful
-                                        for debugging.
-                  -x, --stop            Stop after the first test failure, or as soon as
-                                        possible when running concurrent tests.
-                  -s, --nocapture       Standard output (stdout) will not be captured and will
-                                        be written to the screen immediately.
-                  --coverage            Perform coverage analysis and display results on
-                                        stdout
-                  --coverage-html       Perform coverage analysis and display results in
-                                        browser
-                  --coverpkg PKG        Add the given package to the coverage list. You can
-                                        use this option multiple times to cover multiple
-                                        packages.
-                  --cover-omit FILE     Add a file name pattern to remove it from coverage.
-                  -b, --benchmark       Specifies that benchmarks are to be run rather than
-                                        tests, so only files starting with "benchmark\_" will
-                                        be executed.
-                  -d FILE, --datafile FILE
-                                        Name of benchmark data file. Default is
-                                        benchmark_data.csv.
-                  --noreport            Don't create a test results file.
-                  -m GLOB, --match GLOB, --testmatch GLOB
-                                        Pattern to use for test discovery. Multiple patterns
-                                        are allowed.
-                  --timeout TIMEOUT     Timeout in seconds. Test will be terminated if it
-                                        takes longer than timeout. Only works for tests
-                                        running in a subprocess (MPI and isolated).
-              
-Platform: UNKNOWN
 Classifier: Development Status :: 4 - Beta
 Classifier: License :: OSI Approved :: Apache Software License
 Classifier: Natural Language :: English
@@ -79,3 +14,66 @@
 Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: 3.8
 Classifier: Programming Language :: Python :: Implementation :: CPython
+License-File: LICENSE.txt
+
+
+        usage: testflo [options]
+
+        positional arguments:
+          test                  A test method, test case, module, or directory to run.
+
+          optional arguments:
+          -h, --help            show this help message and exit
+          -c FILE, --config FILE
+                                Path of config file where preferences are specified.
+          -t FILE, --testfile FILE
+                                Path to a file containing one testspec per line.
+          --maxtime TIME_LIMIT  Specifies a time limit in seconds for tests to be
+                                saved to the quicktests.in file.
+          -n NUM_PROCS, --numprocs NUM_PROCS
+                                Number of processes to run. By default, this will use
+                                the number of CPUs available. To force serial
+                                execution, specify a value of 1.
+          -o FILE, --outfile FILE
+                                Name of test report file. Default is
+                                testflo_report.out.
+          -v, --verbose         Include testspec and elapsed time in screen output.
+                                Also shows all stderr output, even if test doesn't
+                                fail
+          --compact             Limit output to a single character for each test.
+          --dryrun              Don't actually run tests, but print which tests would
+                                have been run.
+          --pre_announce        Announce the name of each test before it runs. This
+                                can help track down a hanging test. This automatically
+                                sets -n 1.
+          -f, --fail            Save failed tests to failtests.in file.
+          --full_path           Display full test specs instead of shortened names.
+          -i, --isolated        Run each test in a separate subprocess.
+          --nompi               Force all tests to run without MPI. This can be useful
+                                for debugging.
+          -x, --stop            Stop after the first test failure, or as soon as
+                                possible when running concurrent tests.
+          -s, --nocapture       Standard output (stdout) will not be captured and will
+                                be written to the screen immediately.
+          --coverage            Perform coverage analysis and display results on
+                                stdout
+          --coverage-html       Perform coverage analysis and display results in
+                                browser
+          --coverpkg PKG        Add the given package to the coverage list. You can
+                                use this option multiple times to cover multiple
+                                packages.
+          --cover-omit FILE     Add a file name pattern to remove it from coverage.
+          -b, --benchmark       Specifies that benchmarks are to be run rather than
+                                tests, so only files starting with "benchmark\_" will
+                                be executed.
+          -d FILE, --datafile FILE
+                                Name of benchmark data file. Default is
+                                benchmark_data.csv.
+          --noreport            Don't create a test results file.
+          -m GLOB, --match GLOB, --testmatch GLOB
+                                Pattern to use for test discovery. Multiple patterns
+                                are allowed.
+          --timeout TIMEOUT     Timeout in seconds. Test will be terminated if it
+                                takes longer than timeout. Only works for tests
+                                running in a subprocess (MPI and isolated).
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo.egg-info/SOURCES.txt new/testflo-1.4.9/testflo.egg-info/SOURCES.txt
--- old/testflo-1.4.2/testflo.egg-info/SOURCES.txt      2020-06-10 18:17:37.000000000 +0200
+++ new/testflo-1.4.9/testflo.egg-info/SOURCES.txt      2022-07-25 20:09:09.000000000 +0200
@@ -9,6 +9,7 @@
 testflo/cover.py
 testflo/devnull.py
 testflo/discover.py
+testflo/duration.py
 testflo/filters.py
 testflo/isolatedrun.py
 testflo/main.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.4.2/testflo.egg-info/entry_points.txt new/testflo-1.4.9/testflo.egg-info/entry_points.txt
--- old/testflo-1.4.2/testflo.egg-info/entry_points.txt 2020-06-10 18:17:37.000000000 +0200
+++ new/testflo-1.4.9/testflo.egg-info/entry_points.txt 2022-07-25 20:09:09.000000000 +0200
@@ -1,4 +1,2 @@
-
-          [console_scripts]
-          testflo=testflo.main:main
-      
\ No newline at end of file
+[console_scripts]
+testflo = testflo.main:main
