Hello community,

here is the log from the commit of package python-testflo for openSUSE:Factory checked in at 2019-01-11 14:03:57
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-testflo (Old)
 and      /work/SRC/openSUSE:Factory/.python-testflo.new.28833 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-testflo"

Fri Jan 11 14:03:57 2019 rev:4 rq:662894 version:1.3.4

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-testflo/python-testflo.changes    2018-12-24 11:45:06.609254375 +0100
+++ /work/SRC/openSUSE:Factory/.python-testflo.new.28833/python-testflo.changes 2019-01-11 14:04:33.827855914 +0100
@@ -1,0 +2,18 @@
+Fri Jan  4 18:21:36 UTC 2019 - Todd R <toddrme2...@gmail.com>
+
+- Update to testflo version 1.3.4
+  * bug fix
+- Update to testflo version 1.3.3
+  * bug fix
+- Update to testflo version 1.3.2
+  * added support for ISOLATED attribute
+- Update to testflo version 1.3.1
+Aug 17, 2018
+  * output from --pre_announce now looks better, with the result ('.', 'S', or 'F') showing on the same line as the
+    "about to run ..." instead of on the following line
+  * comments are now allowed inside of a test list file
+  * added a --full_path option so that full testspec paths will be displayed. Having the full path make it easier to
+    copy and paste the testspec to run testflo on just that single test.
+  * updated the long_description in setup.py for pypi.
+
+-------------------------------------------------------------------

Old:
----
  LICENSE.txt
  testflo-1.2.tar.gz

New:
----
  testflo-1.3.4.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-testflo.spec ++++++
--- /var/tmp/diff_new_pack.Cj5vHD/_old  2019-01-11 14:04:34.439855298 +0100
+++ /var/tmp/diff_new_pack.Cj5vHD/_new  2019-01-11 14:04:34.439855298 +0100
@@ -1,7 +1,7 @@
 #
 # spec file for package python-testflo
 #
-# Copyright (c) 2018 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -18,14 +18,13 @@
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-testflo
-Version:        1.2
+Version:        1.3.4
 Release:        0
 Summary:        A flow-based testing framework
 License:        Apache-2.0
 Group:          Development/Languages/Python
 Url:            https://github.com/OpenMDAO/testflo
 Source:         https://files.pythonhosted.org/packages/source/t/testflo/testflo-%{version}.tar.gz
-Source10:       https://raw.githubusercontent.com/OpenMDAO/testflo/%{version}/LICENSE.txt
 # PATCH-FIX-OPENSUSE use_setuptools.patch -- some of the optional features we want need setuptools
 Patch0:         use_setuptools.patch
 BuildRequires:  %{python_module setuptools}
@@ -33,10 +32,14 @@
 BuildRequires:  fdupes
 BuildRequires:  python-rpm-macros
 # SECTION test requirements
+BuildRequires:  %{python_module coverage}
 BuildRequires:  %{python_module mpi4py}
+BuildRequires:  %{python_module psutil}
 # /SECTION
 Requires:       python-six
+Recommends:     python-coverage
 Recommends:     python-mpi4py
+Recommends:     python-psutil
 BuildArch:      noarch
 Requires(post):   update-alternatives
 Requires(preun):  update-alternatives
@@ -52,7 +55,6 @@
 
 %prep
 %setup -q -n testflo-%{version}
-cp %{SOURCE10} .
 %patch0 -p1
 
 %build
@@ -63,8 +65,9 @@
 %python_expand %fdupes %{buildroot}%{$python_sitelib}
 %python_clone -a %{buildroot}%{_bindir}/testflo
 
-%check
-%python_expand $python -m unittest testflo.test
+# Tests not included in sdists
+# %%check
+# %%python_expand $python -B -m unittest testflo.test
 
 %post
 %python_install_alternative testflo
@@ -74,6 +77,7 @@
 
 %files %{python_files}
 %license LICENSE.txt
+%doc README.md RELEASE_NOTES.txt
 %python_alternative %{_bindir}/testflo
 %{python_sitelib}/*
 

++++++ testflo-1.2.tar.gz -> testflo-1.3.4.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/DESIGN.txt new/testflo-1.3.4/DESIGN.txt
--- old/testflo-1.2/DESIGN.txt  1970-01-01 01:00:00.000000000 +0100
+++ new/testflo-1.3.4/DESIGN.txt        2017-05-30 16:08:07.000000000 +0200
@@ -0,0 +1,43 @@
+testflo is a python testing framework that uses a pipeline of
+iterators to process test specifications, run the tests, and process the
+results.
+
+The testflo API consists of a single callable that takes
+an input iterator of Test objects as an argument and returns an
+output iterator of Test objects.  The source of the pipeline is a plain
+python iterator since it doesn't need an input iterator. By simply adding
+members to the testflo pipeline, it's easy to add new features.
+
+The pipeline starts with an iterator of strings that I'll call
+'general test specifiers'.  These can have any of the following forms:
+
+<module or file path>
+<module or file path>:<TestCase class name>.<method name>
+<module or file path>:<function name>
+<directory path>
+
+where <module or file path> is either the filesystem pathname of the
+python file containing the test(s) or the python module path, e.g.,
+'foo.bar.baz'.
+
+The general test specifiers are iterated over by the TestDiscoverer, who
+generates an output iterator of Test objects. There is a Test object for each
+individual test.  As of version 1.1, the objects in the TestDiscoverer's
+output iterator can be either individual Test objects or lists of Test
+objects. This change was necessary to support module level and TestCase
+class level setup and teardown functions.  The thought was that all tests
+under either a module level setup/teardown or a TestCase class level
+setup/teardown should be grouped and executed in the same process, so
+when these functions are present, the Test objects are grouped into a list
+and sent together to the ConcurrentTestRunner.  After execution, the rest
+of the pipeline sees only individual Test objects.
+
+The ConcurrentTestRunner
+executes each test and passes an iterator of those to the ResultPrinter,
+who then passes them on to the ResultSummary.
+
+The multiprocessing library is used in the ConcurrentTestRunner to support concurrent
+execution of tests.  It adds Test objects to a shared Queue that the
+worker processes pull from. Then the workers place the finished Test objects in
+a 'done' Queue that the ConcurrentTestRunner pulls from and passes downstream for
+display, summary, or whatever.
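The DESIGN.txt description above can be sketched in a few lines of plain Python. This is a minimal sketch with made-up stand-ins for `Test` and the pipeline stages, not testflo's actual classes:

```python
# Minimal sketch of testflo's iterator pipeline (hypothetical stand-ins,
# not the real testflo classes): each stage is a callable that takes an
# input iterator of Test objects and returns an output iterator.

class Test(object):
    def __init__(self, spec):
        self.spec = spec
        self.status = None

def discover(specs):
    # source stage: a plain iterator, since it needs no input iterator
    for spec in specs:
        yield Test(spec)

def run_tests(test_iter):
    for test in test_iter:
        test.status = 'OK'  # pretend every test passes
        yield test

def print_results(test_iter):
    for test in test_iter:
        print('%s ... %s' % (test.spec, test.status))
        yield test

def build_pipeline(specs, stages):
    it = discover(specs)
    for stage in stages:
        it = stage(it)  # adding a member to the list adds a feature
    return it

# driving the last iterator runs the whole pipeline
results = list(build_pipeline(['mod:TC.test_a', 'mod:TC.test_b'],
                              [run_tests, print_results]))
```

Because every stage shares the same iterator-in, iterator-out shape, a new feature (a filter, a benchmark writer) slots in by appending one more callable to the stage list.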
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/LICENSE.txt new/testflo-1.3.4/LICENSE.txt
--- old/testflo-1.2/LICENSE.txt 1970-01-01 01:00:00.000000000 +0100
+++ new/testflo-1.3.4/LICENSE.txt       2017-05-30 16:08:07.000000000 +0200
@@ -0,0 +1,13 @@
+testflo Open Source License:
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/PKG-INFO new/testflo-1.3.4/PKG-INFO
--- old/testflo-1.2/PKG-INFO    2017-01-30 22:00:28.000000000 +0100
+++ new/testflo-1.3.4/PKG-INFO  2018-12-06 16:34:08.000000000 +0100
@@ -1,10 +1,70 @@
 Metadata-Version: 1.0
 Name: testflo
-Version: 1.2
-Summary: A simple flow based testing framework
+Version: 1.3.4
+Summary: A simple flow-based testing framework
 Home-page: UNKNOWN
 Author: UNKNOWN
 Author-email: UNKNOWN
 License: Apache 2.0
-Description: UNKNOWN
+Description: 
+                usage: testflo [options]
+        
+                positional arguments:
+                  test                  A test method, test case, module, or directory to run.
+        
+                optional arguments:
+                  -h, --help            show this help message and exit
+                  -c FILE, --config FILE
+                                        Path of config file where preferences are specified.
+                  -t FILE, --testfile FILE
+                                        Path to a file containing one testspec per line.
+                  --maxtime TIME_LIMIT  Specifies a time limit in seconds for tests to be
+                                        saved to the quicktests.in file.
+                  -n NUM_PROCS, --numprocs NUM_PROCS
+                                        Number of processes to run. By default, this will use
+                                        the number of CPUs available. To force serial
+                                        execution, specify a value of 1.
+                  -o FILE, --outfile FILE
+                                        Name of test report file. Default is
+                                        testflo_report.out.
+                  -v, --verbose         Include testspec and elapsed time in screen output.
+                                        Also shows all stderr output, even if test doesn't
+                                        fail
+                  --compact             Limit output to a single character for each test.
+                  --dryrun              Don't actually run tests, but print which tests would
+                                        have been run.
+                  --pre_announce        Announce the name of each test before it runs. This
+                                        can help track down a hanging test. This automatically
+                                        sets -n 1.
+                  -f, --fail            Save failed tests to failtests.in file.
+                  --full_path           Display full test specs instead of shortened names.
+                  -i, --isolated        Run each test in a separate subprocess.
+                  --nompi               Force all tests to run without MPI. This can be useful
+                                        for debugging.
+                  -x, --stop            Stop after the first test failure, or as soon as
+                                        possible when running concurrent tests.
+                  -s, --nocapture       Standard output (stdout) will not be captured and will
+                                        be written to the screen immediately.
+                  --coverage            Perform coverage analysis and display results on
+                                        stdout
+                  --coverage-html       Perform coverage analysis and display results in
+                                        browser
+                  --coverpkg PKG        Add the given package to the coverage list. You can
+                                        use this option multiple times to cover multiple
+                                        packages.
+                  --cover-omit FILE     Add a file name pattern to remove it from coverage.
+                  -b, --benchmark       Specifies that benchmarks are to be run rather than
+                                        tests, so only files starting with "benchmark\_" will
+                                        be executed.
+                  -d FILE, --datafile FILE
+                                        Name of benchmark data file. Default is
+                                        benchmark_data.csv.
+                  --noreport            Don't create a test results file.
+                  -m GLOB, --match GLOB, --testmatch GLOB
+                                        Pattern to use for test discovery. Multiple patterns
+                                        are allowed.
+                  --timeout TIMEOUT     Timeout in seconds. Test will be terminated if it
+                                        takes longer than timeout. Only works for tests
+                                        running in a subprocess (MPI and isolated).
+              
 Platform: UNKNOWN
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/README.md new/testflo-1.3.4/README.md
--- old/testflo-1.2/README.md   1970-01-01 01:00:00.000000000 +0100
+++ new/testflo-1.3.4/README.md 2018-11-16 20:41:08.000000000 +0100
@@ -0,0 +1,181 @@
+testflo
+=======
+
+testflo is a python testing framework that uses a pipeline of
+iterators to process test specifications, run the tests, and process the
+results.
+
+Why write another testing framework?
+------------------------------------
+
+testflo was written to support testing of the OpenMDAO framework.
+Some OpenMDAO features require execution under MPI while some others don't,
+so we wanted a testing framework that could run all of our tests in the same
+way and would allow us to build all of our tests using unittest.TestCase
+objects that we were already familiar with.  The MPI testing functionality
+was originally implemented using the nose testing framework.  It worked, but
+was always buggy, and the size and complexity of the nose framework made it
+difficult to know exactly what was going on.
+
+Enter testflo, an attempt to build a simpler testing framework that would have
+the basic functionality of other test frameworks, with the additional
+ability to run MPI unit tests that are very similar to regular unit tests.
+
+
+Some testflo features
+---------------------
+
+*    MPI unit testing
+*    *pre_announce* option to print test name before running in order to
+     quickly identify hanging MPI tests
+*    concurrent testing  (on by default, use '-n 1' to turn it off)
+*    test coverage
+*    flexible execution - can be given a directory, a file, a module path,
+     *file:testcase.method*, *module:testcase.method*, or a file containing
+     a list of any of the above. Has options to generate test list files
+     containing all failed tests or all tests that execute within a certain
+     time limit.
+*    end of testing summary
+
+
+Usage
+-----
+
+For a full list of testflo options, execute the following:
+
+`testflo -h`
+
+
+NOTE: Because testflo runs tests concurrently by default, your tests must be
+written with concurrency in mind or they may fail.  For example, if multiple
+tests write output to a file with the same name, you have to make sure that those
+tests are executed in different directories to prevent that file from being
+corrupted.  If your tests are not written to run concurrently, you can always
+just run them with `testflo -n 1` and run them in serial instead.
+
+The following is an example of what an MPI unit test looks like.  To tell
+testflo that a TestCase is an MPI TestCase, you add a class attribute
+called N_PROCS to it and set it to the number of MPI processes to use for the
+test.  That's all there is to it. Of course, depending on what sort of MPI code
+you're testing, it's up to you to potentially test for different things on
+different ranks.
+
+
+```python
+
+class MyMPI_TestCase(TestCase):
+
+    N_PROCS = 4  # this is how many MPI processes to use for this TestCase.
+
+    def test_foo(self):
+
+        # do your MPI testing here, e.g.,
+
+        if self.comm.rank == 0:
+            pass  # some test only valid on rank 0...
+
+
+```
+
+
+Here's an example of testflo output for openmdao.core:
+
+
+```
+
+openmdao$ testflo openmdao.core
+............................................................................
+............................................................................
+............................................................................
+..............................
+
+OK
+
+Passed:  258
+Failed:  0
+Skipped: 0
+
+
+Ran 258 tests using 8 processes
+Sum of test times: 00:00:6.09
+Wall clock time:   00:00:1.82
+Speedup: 3.347731
+
+```
+
+Running testflo in verbose mode on openmdao.core.test.test_problem is shown
+below. The verbose output contains the full test name as well as the elapsed
+time and memory usage.
+
+
+```
+
+openmdao$ testflo openmdao.core.test.test_problem -v
+openmdao.core.test.test_problem:TestCheckSetup.test_pbo_messages ... OK (00:00:0.02, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_check_promotes ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_conflicting_connections ... OK (00:00:0.02, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_conflicting_promoted_state_vars ... OK (00:00:0.00, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_conflicting_promotions ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestCheckSetup.test_out_of_order ... OK (00:00:0.02, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_explicit_connection_errors ... OK (00:00:0.02, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_find_subsystem ... OK (00:00:0.00, 69 MB)
+openmdao.core.test.test_problem:TestCheckSetup.test_cycle ... OK (00:00:0.06, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_input_input_explicit_conns_no_conn ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_illegal_desvar ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_input_input_explicit_conns_w_conn ... OK (00:00:0.02, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_check_connections ... OK (00:00:0.06, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_mode_auto ... OK (00:00:0.03, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_check_parallel_derivs ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_simplest_run ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_basic_run ... OK (00:00:0.03, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_change_solver_after_setup ... OK (00:00:0.04, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_no_vecs ... OK (00:00:0.08, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_src_idx_gt_src_size ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_src_idx_neg ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_simplest_run_w_promote ... OK (00:00:0.02, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_unconnected_param_access ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_variable_access_before_setup ... OK (00:00:0.00, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_scalar_sizes ... OK (00:00:0.07, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_byobj_run ... OK (00:00:0.01, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_error_change_after_setup ... OK (00:00:0.31, 70 MB)
+openmdao.core.test.test_problem:TestProblem.test_unconnected_param_access_with_promotes ... OK (00:00:0.04, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_variable_access ... OK (00:00:0.06, 69 MB)
+openmdao.core.test.test_problem:TestProblem.test_iprint ... OK (00:00:0.25, 73 MB)
+
+
+OK
+
+Passed:  30
+Failed:  0
+Skipped: 0
+
+
+Ran 30 tests using 8 processes
+Sum of test times: 00:00:1.24
+Wall clock time:   00:00:1.17
+Speedup: 1.054168
+
+```
+
+Operating Systems and Python Versions
+-------------------------------------
+
+testflo is used to test OpenMDAO as part of its CI process,
+so we run it nearly every day on linux, Windows and OS X under
+python 2.7 and 3.5.
+
+
+You can install testflo directly from github using the following command:
+
+`pip install git+https://github.com/OpenMDAO/testflo.git`
+
+
+or install from PYPI using:
+
+
+`pip install testflo`
+
+
+
+If you try it out and find any problems, submit them as issues on github at
+https://github.com/OpenMDAO/testflo.
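The README's concurrency note above boils down to giving each test its own working directory. A generic unittest sketch of that pattern (plain stdlib code, not a testflo-specific API) might look like:

```python
import os
import tempfile
import unittest

class ConcurrencySafeTest(unittest.TestCase):
    """Sketch of the README's concurrency advice: each test writes
    'output.dat' inside its own temporary directory, so concurrently
    running test processes never collide on the same file name."""

    def setUp(self):
        # remember where we started, then work in a fresh temp dir
        self.startdir = os.getcwd()
        self.tempdir = tempfile.mkdtemp()
        os.chdir(self.tempdir)

    def tearDown(self):
        os.chdir(self.startdir)

    def test_write_output(self):
        with open('output.dat', 'w') as f:
            f.write('results\n')
        self.assertTrue(os.path.isfile('output.dat'))

# run the case once so the sketch is self-checking
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ConcurrencySafeTest))
```

Tests written this way pass unchanged whether run serially (`testflo -n 1`) or with the default concurrent workers.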
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/RELEASE_NOTES.txt new/testflo-1.3.4/RELEASE_NOTES.txt
--- old/testflo-1.2/RELEASE_NOTES.txt   1970-01-01 01:00:00.000000000 +0100
+++ new/testflo-1.3.4/RELEASE_NOTES.txt 2018-12-06 16:29:30.000000000 +0100
@@ -0,0 +1,38 @@
+
+testflo version 1.3.4 Release Notes
+Dec 6, 2018
+
+* bug fix
+
+testflo version 1.3.3 Release Notes
+Dec 3, 2018
+
+* bug fix
+
+testflo version 1.3.2 Release Notes
+Nov 17, 2018
+
+Features:
+* added support for ISOLATED attribute
+
+testflo version 1.3.1 Release Notes
+Aug 17, 2018
+
+Updates:
+* output from --pre_announce now looks better, with the result ('.', 'S', or 'F') showing on the same line as the
+    "about to run ..." instead of on the following line
+* comments are now allowed inside of a test list file
+* added a --full_path option so that full testspec paths will be displayed. Having the full path make it easier to
+    copy and paste the testspec to run testflo on just that single test.
+* updated the long_description in setup.py for pypi.
+
+testflo version 1.1 Release Notes
+September 27, 2016
+
+Features:
+* supports setUpModule/tearDownModule
+* supports setUpClass/tearDownClass
+* supports expected failures
+* supports unittest.skip class decorator
+* added --compact option to print only single character test results without
+  showing error or skip messages
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/setup.py new/testflo-1.3.4/setup.py
--- old/testflo-1.2/setup.py    2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/setup.py  2018-11-30 18:53:36.000000000 +0100
@@ -1,12 +1,80 @@
-
 from distutils.core import setup
 
+import re
+
+__version__ = re.findall(
+    r"""__version__ = ["']+([0-9\.]*)["']+""",
+    open('testflo/__init__.py').read(),
+)[0]
+
 setup(name='testflo',
-      version='1.2',
-      description="A simple flow based testing framework",
+      version=__version__,
+      description="A simple flow-based testing framework",
+      long_description="""
+        usage: testflo [options]
+
+        positional arguments:
+          test                  A test method, test case, module, or directory to run.
+
+        optional arguments:
+          -h, --help            show this help message and exit
+          -c FILE, --config FILE
+                                Path of config file where preferences are specified.
+          -t FILE, --testfile FILE
+                                Path to a file containing one testspec per line.
+          --maxtime TIME_LIMIT  Specifies a time limit in seconds for tests to be
+                                saved to the quicktests.in file.
+          -n NUM_PROCS, --numprocs NUM_PROCS
+                                Number of processes to run. By default, this will use
+                                the number of CPUs available. To force serial
+                                execution, specify a value of 1.
+          -o FILE, --outfile FILE
+                                Name of test report file. Default is
+                                testflo_report.out.
+          -v, --verbose         Include testspec and elapsed time in screen output.
+                                Also shows all stderr output, even if test doesn't
+                                fail
+          --compact             Limit output to a single character for each test.
+          --dryrun              Don't actually run tests, but print which tests would
+                                have been run.
+          --pre_announce        Announce the name of each test before it runs. This
+                                can help track down a hanging test. This automatically
+                                sets -n 1.
+          -f, --fail            Save failed tests to failtests.in file.
+          --full_path           Display full test specs instead of shortened names.
+          -i, --isolated        Run each test in a separate subprocess.
+          --nompi               Force all tests to run without MPI. This can be useful
+                                for debugging.
+          -x, --stop            Stop after the first test failure, or as soon as
+                                possible when running concurrent tests.
+          -s, --nocapture       Standard output (stdout) will not be captured and will
+                                be written to the screen immediately.
+          --coverage            Perform coverage analysis and display results on
+                                stdout
+          --coverage-html       Perform coverage analysis and display results in
+                                browser
+          --coverpkg PKG        Add the given package to the coverage list. You can
+                                use this option multiple times to cover multiple
+                                packages.
+          --cover-omit FILE     Add a file name pattern to remove it from coverage.
+          -b, --benchmark       Specifies that benchmarks are to be run rather than
+                                tests, so only files starting with "benchmark\_" will
+                                be executed.
+          -d FILE, --datafile FILE
+                                Name of benchmark data file. Default is
+                                benchmark_data.csv.
+          --noreport            Don't create a test results file.
+          -m GLOB, --match GLOB, --testmatch GLOB
+                                Pattern to use for test discovery. Multiple patterns
+                                are allowed.
+          --timeout TIMEOUT     Timeout in seconds. Test will be terminated if it
+                                takes longer than timeout. Only works for tests
+                                running in a subprocess (MPI and isolated).
+      """,
       license='Apache 2.0',
       install_requires=[
         'six',
+        'coverage'
       ],
       packages=['testflo'],
       entry_points="""
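The new setup.py above reads the version out of testflo/__init__.py with a regex instead of importing the package. The same pattern can be exercised in isolation (the file contents are inlined here for illustration):

```python
import re

# the same regex setup.py uses to extract __version__ without importing
# the package; init_contents stands in for testflo/__init__.py
init_contents = "__version__ = '1.3.4'\n"

version = re.findall(
    r"""__version__ = ["']+([0-9\.]*)["']+""",
    init_contents,
)[0]
print(version)  # -> 1.3.4
```

Avoiding the import matters during packaging: setup.py can be run before the package's dependencies (six, coverage) are installed.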
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/testflo/__init__.py new/testflo-1.3.4/testflo/__init__.py
--- old/testflo-1.2/testflo/__init__.py 2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/testflo/__init__.py       2018-12-06 16:28:46.000000000 +0100
@@ -0,0 +1 @@
+__version__ = '1.3.4'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/testflo/discover.py new/testflo-1.3.4/testflo/discover.py
--- old/testflo-1.2/testflo/discover.py 2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/testflo/discover.py       2018-11-16 19:02:04.000000000 +0100
@@ -4,7 +4,6 @@
 from unittest import TestCase
 import six
 
-from fnmatch import fnmatchcase
 from os.path import basename, dirname, isdir
 
 from testflo.util import find_files, get_module, ismethod
@@ -23,10 +22,10 @@
 class TestDiscoverer(object):
 
     def __init__(self, module_pattern=six.text_type('test*.py'),
-                       func_pattern=six.text_type('test*'),
+                       func_match=lambda f: fnmatchcase(f, 'test*'),
                        dir_exclude=None):
         self.module_pattern = module_pattern
-        self.func_pattern = func_pattern
+        self.func_match = func_match
         self.dir_exclude = dir_exclude
 
         # to support module and class fixtures, we need to be able to
@@ -140,7 +139,7 @@
                         for result in self._testcase_iter(filename, obj):
                             yield result
 
-                    elif isfunction(obj) and fnmatchcase(name, self.func_pattern):
+                    elif isfunction(obj) and self.func_match(name):
                         yield Test(':'.join((filename, obj.__name__)))
 
     def _testcase_iter(self, fname, testcase):
@@ -148,9 +147,8 @@
         TestCase class.
         """
         tcname = ':'.join((fname, testcase.__name__))
-        pat = self.func_pattern
         for name, method in getmembers(testcase, ismethod):
-            if fnmatchcase(name, pat):
+            if self.func_match(name):
                 yield Test('.'.join((tcname, method.__name__)))
 
     def _testspec_iter(self, testspec):
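The discover.py change above swaps the `func_pattern` glob string for a `func_match` predicate: any callable mapping a function name to a bool will do, which is what lets main.py honor multiple `--match` globs. A small sketch of such a predicate (`make_matcher` is an illustrative helper, not testflo code):

```python
from fnmatch import fnmatchcase

def make_matcher(patterns):
    # build a func_match-style predicate from one or more glob patterns,
    # mirroring the func_matcher closure added in main.py
    def func_match(funcname):
        return any(fnmatchcase(funcname, pat) for pat in patterns)
    return func_match

match = make_matcher(['test*', 'benchmark*'])
print(match('test_foo'), match('benchmark_bar'), match('helper'))
```

Passing a predicate rather than a pattern keeps TestDiscoverer ignorant of how many globs were given, or whether globbing is used at all.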
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/testflo/main.py new/testflo-1.3.4/testflo/main.py
--- old/testflo-1.2/testflo/main.py     2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/testflo/main.py   2018-11-16 19:02:04.000000000 +0100
@@ -28,21 +28,17 @@
 import sys
 import six
 import time
-import traceback
-import subprocess
-import multiprocessing
 
-from fnmatch import fnmatch
+from fnmatch import fnmatch, fnmatchcase
 
-from testflo.runner import ConcurrentTestRunner, TestRunner
-from testflo.test import Test
+from testflo.runner import ConcurrentTestRunner
 from testflo.printer import ResultPrinter
 from testflo.benchmark import BenchmarkWriter
 from testflo.summary import ResultSummary
 from testflo.discover import TestDiscoverer
 from testflo.filters import TimeFilter, FailFilter
 
-from testflo.util import read_config_file, read_test_file, _get_parser
+from testflo.util import read_config_file, read_test_file
 from testflo.cover import setup_coverage, finalize_coverage
 from testflo.options import get_options
 from testflo.qman import get_server_queue
@@ -78,7 +74,7 @@
 
     # iterate over the last iter in the pipline and we're done
     for result in iters[-1]:
-        if result.status == 'FAIL':
+        if result.status == 'FAIL' and not result.expected_fail:
             return_code = 1
 
     return return_code
@@ -123,16 +119,30 @@
 
     setup_coverage(options)
 
+    if options.noreport:
+        report_file = open(os.devnull, 'a')
+    else:
+        report_file = open(options.outfile, 'w')
+
+    if not options.test_glob:
+        options.test_glob = ['test*']
+
+    def func_matcher(funcname):
+        for pattern in options.test_glob:
+            if fnmatchcase(funcname, pattern):
+                return True
+        return False
+
     if options.benchmark:
         options.num_procs = 1
         options.isolated = True
         discoverer = TestDiscoverer(module_pattern=six.text_type('benchmark*.py'),
-                                    func_pattern=six.text_type('benchmark*'),
+                                    func_match=lambda f: fnmatchcase(f, 'benchmark*'),
                                     dir_exclude=dir_exclude)
         benchmark_file = open(options.benchmarkfile, 'a')
     else:
         discoverer = TestDiscoverer(dir_exclude=dir_exclude,
-                                    func_pattern=six.text_type(options.test_glob))
+                                    func_match=func_matcher)
         benchmark_file = open(os.devnull, 'a')
 
     retval = 0
@@ -143,7 +153,7 @@
     else:
         manager, queue = (None, None)
 
-    with open(options.outfile, 'w') as report, benchmark_file as bdata:
+    with report_file as report, benchmark_file as bdata:
         pipeline = [
             discoverer.get_iter,
         ]
@@ -167,13 +177,13 @@
                 verbose = int(options.verbose)
 
             pipeline.extend([
-                ResultPrinter(verbose=verbose).get_iter,
+                ResultPrinter(options, verbose=verbose).get_iter,
                 ResultSummary(options).get_iter,
             ])
             if not options.noreport:
                 # print verbose results and summary to a report file
                 pipeline.extend([
-                    ResultPrinter(report, verbose=1).get_iter,
+                    ResultPrinter(options, report, verbose=1).get_iter,
                     ResultSummary(options, stream=report).get_iter,
                 ])
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/testflo/printer.py new/testflo-1.3.4/testflo/printer.py
--- old/testflo-1.2/testflo/printer.py  2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/testflo/printer.py        2018-11-16 19:02:04.000000000 +0100
@@ -22,8 +22,9 @@
     still displayed in verbose form.
     """
 
-    def __init__(self, stream=sys.stdout, verbose=0):
+    def __init__(self, options, stream=sys.stdout, verbose=0):
         self.stream = stream
+        self.options = options
         self.verbose = verbose
 
     def get_iter(self, input_iter):
@@ -65,5 +66,7 @@
                                                     stats, result.memory_usage))
         else:
             stream.write(_result_map[(result.status, result.expected_fail)])
+            if self.options.pre_announce:
+                stream.write('\n')
 
         stream.flush()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/testflo/runner.py new/testflo-1.3.4/testflo/runner.py
--- old/testflo-1.2/testflo/runner.py   2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/testflo/runner.py 2018-11-16 19:02:04.000000000 +0100
@@ -1,6 +1,8 @@
 """
 Methods and class for running tests.
 """
+from __future__ import print_function
+
 import sys
 import os
 
@@ -53,7 +55,8 @@
             stop = False
             for test in tests:
                 if self.pre_announce:
-                    print("    about to run %s" % test.short_name())
+                    print("    about to run %s " % test.short_name(), end='')
+                    sys.stdout.flush()
                 result = test.run(self._queue)
                 yield result
                 if self.stop:
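The pre_announce change here pairs with the printer.py hunk above: the announcement is printed without a trailing newline and flushed, so the one-character result ('.', 'S', or 'F') lands on the same line. A minimal self-contained sketch of that behavior (the helper and status names are illustrative, not testflo's API):

```python
import sys

_RESULT_MAP = {'OK': '.', 'SKIP': 'S', 'FAIL': 'F'}

def announce_and_run(name, run_test):
    # Print the announcement with no newline and flush immediately,
    # so the result character lands on the same line.
    print("    about to run %s " % name, end='')
    sys.stdout.flush()
    status = run_test()
    print(_RESULT_MAP[status])  # the newline ends the announcement line
    return status
```

With this, a passing test renders as `    about to run test_foo .` on one line instead of splitting the result onto the following line.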
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/testflo/summary.py new/testflo-1.3.4/testflo/summary.py
--- old/testflo-1.2/testflo/summary.py  2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/testflo/summary.py        2018-11-16 19:02:04.000000000 +0100
@@ -12,6 +12,12 @@
         self.options = options
         self._start_time = time.time()
 
+    def get_test_name(self, test):
+        if self.options.full_path:
+            return test.spec
+        else:
+            return test.short_name()
+
     def get_iter(self, input_iter):
         oks = 0
         total = 0
@@ -26,7 +32,7 @@
 
             if test.status == 'OK':
                 if test.expected_fail:
-                    fails.append(test.short_name())
+                    fails.append(self.get_test_name(test))
                 else:
                     oks += 1
                 test_sum_time += (test.end_time-test.start_time)
@@ -34,15 +40,15 @@
                 if test.expected_fail:
                     oks += 1
                 else:
-                    fails.append(test.short_name())
+                    fails.append(self.get_test_name(test))
                 test_sum_time += (test.end_time-test.start_time)
             elif test.status == 'SKIP':
-                skips.append(test.short_name())
+                skips.append(self.get_test_name(test))
 
             yield test
 
         # now summarize the run
-        if skips:
+        if skips and self.options.verbose:  # only list skips in verbose mode
             write("\n\nThe following tests were skipped:\n")
             for s in sorted(skips):
                 write(s)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/testflo/test.py new/testflo-1.3.4/testflo/test.py
--- old/testflo-1.2/testflo/test.py     2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/testflo/test.py   2018-12-06 16:30:55.000000000 +0100
@@ -1,10 +1,12 @@
+from __future__ import print_function
+
 import os
 import sys
 import time
 import traceback
 from inspect import isclass
-import pickle
 from subprocess import Popen, PIPE
+from tempfile import mkstemp
 
 from types import FunctionType, ModuleType
 from six.moves import cStringIO
@@ -98,6 +100,7 @@
         self.nocapture = options.nocapture
         self.isolated = options.isolated
         self.mpi = not options.nompi
+        self.timeout = options.timeout
         self.expected_fail = False
         self.test_dir = os.path.dirname(testspec.split(':',1)[0])
         self._mod_fixture_first = False
@@ -107,7 +110,9 @@
 
         if not err_msg:
             with TestContext(self):
-                self.mod, self.tcase, self.funcname, self.nprocs = self._get_test_info()
+                self.mod, self.tcase, self.funcname, self.nprocs, isolated = self._get_test_info()
+                if isolated:
+                    self.isolated = isolated
         else:
             self.mod = self.tcase = self.funcname = None
 
@@ -128,11 +133,12 @@
         return iter((self,))
 
     def _get_test_info(self):
-        """Get the test's module, testcase (if any), function name and
-        N_PROCS (for mpi tests).
+        """Get the test's module, testcase (if any), function name,
+        N_PROCS (for mpi tests) and ISOLATED.
         """
         parent = funcname = mod = testcase = None
         nprocs = 0
+        isolated = False
 
         try:
             mod, testcase, funcname = _parse_test_path(self.spec)
@@ -147,10 +153,78 @@
                 if testcase is not None:
                     parent = testcase
                     nprocs = getattr(testcase, 'N_PROCS', 0)
+                    isolated = getattr(testcase, 'ISOLATED', False)
                 else:
                     parent = mod
 
-        return mod, testcase, funcname, nprocs
+        return mod, testcase, funcname, nprocs, isolated
+
+    def _run_sub(self, cmd, queue):
+        """
+        Run a command in a subprocess.
+        """
+        try:
+            add_queue_to_env(queue)
+
+            if self.nocapture:
+                out = sys.stdout
+            else:
+                out = open(os.devnull, 'w')
+
+            errfd, tmperr = mkstemp()
+            err = os.fdopen(errfd, 'w')
+
+            p = Popen(cmd, stdout=out, stderr=err, env=os.environ,
+                      universal_newlines=True)  # text mode
+            count = 0
+            timedout = False
+
+            if self.timeout < 0.0:  # infinite timeout
+                p.wait()
+            else:
+                poll_interval = 0.2
+                while p.poll() is None:
+                    if count * poll_interval > self.timeout:
+                        p.terminate()
+                        timedout = True
+                        break
+                    time.sleep(poll_interval)
+                    count += 1
+
+            err.close()
+
+            with open(tmperr, 'r') as f:
+                errmsg = f.read()
+            os.remove(tmperr)
+
+            os.environ['TESTFLO_QUEUE'] = ''
+
+            if timedout:
+                result = self
+                self.status = 'FAIL'
+                self.err_msg = 'TIMEOUT after %s sec. ' % self.timeout
+                if errmsg:
+                    self.err_msg += errmsg
+            else:
+                if p.returncode != 0:
+                    print(errmsg)
+                result = queue.get()
+        except:
+            # we generally shouldn't get here, but just in case,
+            # handle it so that the main process doesn't hang at the
+            # end when it tries to join all of the concurrent processes.
+            self.status = 'FAIL'
+            self.err_msg = traceback.format_exc()
+            result = self
+
+            err.close()
+        finally:
+            if not self.nocapture:
+                out.close()
+            sys.stdout.flush()
+            sys.stderr.flush()
+
+        return result
 
     def _run_isolated(self, queue):
         """This runs the test in a subprocess,
@@ -161,17 +235,16 @@
                os.path.join(os.path.dirname(__file__), 'isolatedrun.py'),
                self.spec]
 
-        add_queue_to_env(queue)
-
-        p = Popen(cmd, stdout=PIPE, stderr=PIPE, env=os.environ)
-        out, err = p.communicate()
-        if self.nocapture:
-            sys.stdout.write(out)
-            sys.stderr.write(err)
-
-        os.environ['TESTFLO_QUEUE'] = ''
+        try:
+            result = self._run_sub(cmd, queue)
+        except:
+            # we generally shouldn't get here, but just in case,
+            # handle it so that the main process doesn't hang at the
+            # end when it tries to join all of the concurrent processes.
+            self.status = 'FAIL'
+            self.err_msg = traceback.format_exc()
+            result = self
 
-        result = queue.get()
         result.isolated = True
 
         return result
@@ -185,23 +258,12 @@
             if mpirun_exe is None:
                 raise Exception("mpirun or mpiexec was not found in the system path.")
 
-
             cmd = [mpirun_exe, '-n', str(self.nprocs),
                    sys.executable,
                    os.path.join(os.path.dirname(__file__), 'mpirun.py'),
                    self.spec] + _get_testflo_subproc_args()
 
-            add_queue_to_env(queue)
-
-            p = Popen(cmd, stdout=PIPE, stderr=PIPE, env=os.environ)
-            out, err = p.communicate()
-            if self.nocapture:
-                sys.stdout.write(out)
-                sys.stderr.write(err)
-
-            os.environ['TESTFLO_QUEUE'] = ''
-
-            result = queue.get()
+            result = self._run_sub(cmd, queue)
 
         except:
             # we generally shouldn't get here, but just in case,
@@ -211,10 +273,6 @@
             self.err_msg = traceback.format_exc()
             result = self
 
-        finally:
-            sys.stdout.flush()
-            sys.stderr.flush()
-
         return result
 
     def run(self, queue=None):
@@ -231,7 +289,7 @@
 
         with TestContext(self):
             if self.tcase is None:
-                mod, testcase, funcname, nprocs = self._get_test_info()
+                mod, testcase, funcname, nprocs, _ = self._get_test_info()
             else:
                 mod, testcase, funcname, nprocs = (self.mod, self.tcase, self.funcname, self.nprocs)
 
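The new `_run_sub` in the test.py hunks above polls the subprocess instead of blocking in `communicate`, terminating it once the timeout elapses. A hedged, self-contained sketch of that polling pattern (helper name and defaults are illustrative; testflo's version also redirects output and reads results from a queue):

```python
import time
from subprocess import Popen

def wait_with_timeout(p, timeout, poll_interval=0.2):
    """Poll a Popen object; terminate it if it runs longer than timeout.

    A negative timeout means wait forever. Returns True if the process
    was terminated because it timed out.
    """
    if timeout < 0.0:  # infinite timeout
        p.wait()
        return False
    waited = 0.0
    while p.poll() is None:
        if waited > timeout:
            p.terminate()
            p.wait()  # reap the terminated process
            return True
        time.sleep(poll_interval)
        waited += poll_interval
    return False
```

A negative timeout preserves the old wait-forever behavior, which is why the new `--timeout` option defaults to `-1.0`.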
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/testflo-1.2/testflo/util.py new/testflo-1.3.4/testflo/util.py
--- old/testflo-1.2/testflo/util.py     2017-01-30 21:50:12.000000000 +0100
+++ new/testflo-1.3.4/testflo/util.py   2018-11-16 19:02:04.000000000 +0100
@@ -16,9 +16,6 @@
 except ImportError:
     pass
 
-from multiprocessing.connection import arbitrary_address
-import socket
-
 from fnmatch import fnmatch
 from os.path import join, dirname, basename, isfile,  abspath, split, splitext
 
@@ -71,6 +68,8 @@
                              "can help track down a hanging test. This automatically sets -n 1.")
     parser.add_argument('-f', '--fail', action='store_true', dest='save_fails',
                         help="Save failed tests to failtests.in file.")
+    parser.add_argument('--full_path', action='store_true', dest='full_path',
+                        help="Display full test specs instead of shortened names.")
     parser.add_argument('-i', '--isolated', action='store_true', dest='isolated',
                         help="Run each test in a separate subprocess.")
     parser.add_argument('--nompi', action='store_true', dest='nompi',
@@ -109,9 +108,15 @@
     parser.add_argument('tests', metavar='test', nargs='*',
                         help='A test method, test case, module, or directory to run.')
 
-    parser.add_argument('-m', '--match', '--testmatch', action='store', dest='test_glob',
-                        metavar='GLOB', help='Pattern to use for test discovery.',
-                        default='test*')
+    parser.add_argument('-m', '--match', '--testmatch', action='append', dest='test_glob',
+                        metavar='GLOB',
+                        help='Pattern to use for test discovery. Multiple patterns are allowed.',
+                        default=[])
+
+    parser.add_argument('--timeout', action='store', dest='timeout',
+                        default=-1.0, type=float,
+                        help='Timeout in seconds. Test will be terminated if it takes longer than timeout. Only'
+                             ' works for tests running in a subprocess (MPI and isolated).')
 
     return parser
 
@@ -336,6 +341,10 @@
     """Reads a file containing one testspec per line."""
     with open(os.path.abspath(testfile), 'r') as f:
         for line in f:
+            idx = line.find('#')
+            if idx >= 0:
+                line = line[:idx]
+
             line = line.strip()
             if line:
                 yield line
@@ -352,6 +361,9 @@
     if config.has_option('testflo', 'num_procs'):
         options.num_procs = int(config.get('testflo', 'num_procs'))
 
+    if config.has_option('testflo', 'noreport'):
+        options.noreport = bool(config.get('testflo', 'noreport'))
+
 
 def get_memory_usage():
     """return memory usage for the current process"""

++++++ use_setuptools.patch ++++++
--- /var/tmp/diff_new_pack.Cj5vHD/_old  2019-01-11 14:04:34.495855242 +0100
+++ /var/tmp/diff_new_pack.Cj5vHD/_new  2019-01-11 14:04:34.495855242 +0100
@@ -1,17 +1,20 @@
-From: toddrme2...@gmail.com
-Date: 2017-05-24
-Subject: use setuptools instead of distutils
-
-Some of the optional commands need setuptools.
-In particular this is needed for the entrypoints.
+From 31d00f83262e66298f759b8e2b45718cd534622c Mon Sep 17 00:00:00 2001
+From: Todd <toddrme2...@gmail.com>
+Date: Fri, 4 Jan 2019 13:48:50 -0500
+Subject: [PATCH] Use setuptools
 
+`entry_points` requires setuptools.  It doesn't work with distutils.
 ---
+ setup.py | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/setup.py b/setup.py
+index 4a4f6fd..55cf9ba 100644
 --- a/setup.py
 +++ b/setup.py
-@@ -1,5 +1,5 @@
- 
+@@ -1,4 +1,4 @@
 -from distutils.core import setup
 +from setuptools import setup
  
- setup(name='testflo',
-       version='1.2',
+ import re
+ 

