On 1/13/21 2:38 PM, Rainer Orth wrote:
Hi Martin,

On 1/6/21 12:36 AM, Jeff Law wrote:
unresolved "could not find python interpreter $testcase" in
run-gcov-pytest if you find the right magic in the output of your spawn.

Achieved that with the updated patch.

Ready for master?

unfortunately, your patch has a large number of problems:

Hello.

Thank you for the investigation.


* On targets where run-gcov-pytest decides that pytest isn't available
   (incorrectly in some cases), mail-report.log is cluttered with

UNRESOLVED: could not find Python interpreter and (or) pytest module for pr98273.C

   I fear you've been misled by David and Jeff here: UNRESOLVED isn't
   appropriate for cases like this.  Please read the DejaGnu manual for
   the semantics of the various test outcomes.  If anything (we often
   just silently skip testcases that cannot be run on some target), use
   UNSUPPORTED instead.
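
   For instance (just a sketch; the exact message format is a separate
   issue, see the next point), the DejaGnu call could be something like

     unsupported "could not find Python interpreter and (or) pytest module"

   instead of the unresolved call.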

Shame on me, I misread what was suggested.


* Besides, the test outcomes are not generic message facilities but are
   supposed to follow a common format:

   <result>: <testname> [<subtest>]

   with <testname> the pathname to the test relative to (in this case)
   gcc/testsuite.  In this case, this might be something like

   UNSUPPORTED: g++.dg/gcov/pr98273.C run-gcov-pytest

   Currently, you don't have the pathname in run-gcov-pytest, though.

All right, now one will see:

UNSUPPORTED: g++.dg/gcov/pr98273.C run-gcov-pytest could not find Python interpreter and (or) pytest module


* If we now have an (even optional) dependency on python/pytest, this
   (with the exact versions and use) needs to be documented in
   install.texi.

Done that.


* Speaking of documenting, the new run-gcov-pytest needs to be
   documented in sourcebuild.texi.

Likewise here.


* On to the implementation: your test for the presence of pytest is
   wrong:

     set result [remote_exec host "pytest -m pytest --version"]

   has nothing to do with what you actually use later: on all of Fedora
   29, Ubuntu 20.04, and Solaris 11.4 (with a caveat) pytest is Python
   2.7 based, but you don't check that.  It is quite possible that pytest
   for 2.7 is installed, but pytest for Python 3.x isn't.

   Besides, while Solaris 11.4 does bundle pytest, it doesn't deliver the
   pytest command, only py.test, due to a conflict with a different pytest
   from logilab-common, cf. https://github.com/pytest-dev/pytest/issues/1833.

   This is immaterial, however, since what you actually run is

     spawn -noecho python3 -m pytest --color=no -rA -s --tb=no $srcdir/$subdir/$pytest_script

   So you should just run python3 -m pytest --version instead to check
   for the presence of the version you're going to use.
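
   Something along these lines (an untested sketch, with an illustrative
   message) should do:

     set result [remote_exec host "python3 -m pytest --version"]
     if { [lindex $result 0] != 0 } {
       # python3 or its pytest module is not usable on this host.
       unsupported "could not find python3/pytest for $testcase"
       return
     }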

   Btw., there's a mess with pytest on Fedora 29: running the above gives

I must confirm this is a mess. I definitely don't want to support Python 2 and
I think the best way would be to use 'env python3'; I hope it's portable enough.
@David: What do you think?


[...]
pluggy.PluginValidationError: Plugin 'benchmark' could not be loaded: (pytest 
3.6.4 (/usr/lib/python3.7/site-packages), Requirement.parse('pytest>=3.8'))!

   Seems the packagers have broken things there.

   On top of all this, I wonder why you insist on a particular Python
   version here: I tried your single testcase and it PASSes just as well
   with Python 2.7!?  One reason I'm asking is that Solaris 11.3 bundles
   both Python 2.7 and 3.4, but (unlike Linux and Solaris 11.4) doesn't
   have /usr/bin/python3, just python (which is 2.7), python2.7, and
   python3.4.  Not that it matters too much, but you should be aware of
   the issue.

   When running the test on Solaris 11.4 (with the bundled pytest 4.4.0),
   I get

============================= test session starts ==============================
platform sunos5 -- Python 3.7.9, pytest-4.4.0, py-1.8.0, pluggy-0.9.0
rootdir: /vol/gcc/src/hg/master/local/gcc/testsuite/g++.dg/gcov
collected 2 items

../../../../../../../../../vol/gcc/src/hg/master/local/gcc/testsuite/g++.dg/gcov/test-pr98273.py
 ..

=========================== 2 passed in 0.04 seconds ===========================

while 4.6.9 on Linux gives

============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-4.6.9, py-1.8.1, pluggy-0.13.0
rootdir: /vol/gcc/src/hg/master/local/gcc/testsuite/g++.dg/gcov
collected 2 items

../../../../../../../../../../vol/gcc/src/hg/master/local/gcc/testsuite/g++.dg/gcov/test-pr98273.py
 ..

=========================== short test summary info ============================
PASSED 
../../../../../../../../../../vol/gcc/src/hg/master/local/gcc/testsuite/g++.dg/gcov/test-pr98273.py::test_basics
PASSED 
../../../../../../../../../../vol/gcc/src/hg/master/local/gcc/testsuite/g++.dg/gcov/test-pr98273.py::test_lines
=========================== 2 passed in 0.17 seconds ===========================

   Obviously pytest -rA was introduced only after 4.4.0 and the 'A' is
   silently ignored.  Fortunately, I can just use -rap instead which
   works with both versions.
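
   I.e. something like (illustrative only):

     spawn -noecho python3 -m pytest --color=no -rap -s --tb=no $srcdir/$subdir/$pytest_script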

This will be fixed by running the full command with --version as the check:
env python3 -m pytest --color=no -rA -s --tb=no --version


   After this has been processed by gcov.exp, g++.sum contains

PASS:  
../../../../../../../../../../vol/gcc/src/hg/master/local/gcc/testsuite/g++.dg/gcov/test-pr98273.py::test_basic
PASS:  
../../../../../../../../../../vol/gcc/src/hg/master/local/gcc/testsuite/g++.dg/gcov/test-pr98273.py::test_line

   which is again completely wrong in light of what I wrote above on the
   format of test names: it lacks the testname part completely and
   contains absolute pathnames which makes it impossible to compare
   testresults from different systems.  Instead, there should be some
   sort of tag, perhaps patterned after what the various scan-* functions
   do.

Fixed that too.
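
With the new gcov-pytest-format-line helper the g++.sum entries should now look
roughly like this (illustrative; the exact prefix depends on $subdir and the
testcase name passed to run-gcov-pytest):

PASS: g++.dg/gcov/pr98273.C test-pr98273.py::test_basics
PASS: g++.dg/gcov/pr98273.C test-pr98273.py::test_lines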

Martin


Please fix.

        Rainer


From 6e9b95688e4a0ba8cfcc50d442d457f34ee41db0 Mon Sep 17 00:00:00 2001
From: Martin Liska <mli...@suse.cz>
Date: Thu, 14 Jan 2021 09:09:32 +0100
Subject: [PATCH] Pytest in tests: improve

gcc/ChangeLog:

	* doc/install.texi: Document that some tests need pytest module.
	* doc/sourcebuild.texi: Likewise.

gcc/testsuite/ChangeLog:

	* lib/gcov.exp: Use 'env python3' for execution of pytests.
	Check that pytest accepts all needed options first.
	Improve formatting of PASS/FAIL lines.
---
 gcc/doc/install.texi       |  2 +-
 gcc/doc/sourcebuild.texi   |  4 ++++
 gcc/testsuite/lib/gcov.exp | 31 ++++++++++++++++++++++++-------
 3 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/gcc/doc/install.texi b/gcc/doc/install.texi
index 17b5382010e..958da8bc350 100644
--- a/gcc/doc/install.texi
+++ b/gcc/doc/install.texi
@@ -2990,7 +2990,7 @@ Second, you must have the testing tools installed.  This includes
 the DejaGnu site has links to these. For running the BRIG frontend
 tests, a tool to assemble the binary BRIGs from HSAIL text,
 @uref{https://github.com/HSAFoundation/HSAIL-Tools/,,HSAILasm} must
-be installed.
+be installed. Some optional tests also require Python 3 and the pytest module.
 
 If the directories where @command{runtest} and @command{expect} were
 installed are not in the @env{PATH}, you may need to set the following
diff --git a/gcc/doc/sourcebuild.texi b/gcc/doc/sourcebuild.texi
index 3d0873dd074..b9cbe21a4bb 100644
--- a/gcc/doc/sourcebuild.texi
+++ b/gcc/doc/sourcebuild.texi
@@ -3092,6 +3092,10 @@ Check line counts in @command{gcov} tests.
 @item run-gcov [branches] [calls] @{ @var{opts} @var{sourcefile} @}
 Check branch and/or call counts, in addition to line counts, in
 @command{gcov} tests.
+
+@item run-gcov-pytest @{ @var{sourcefile} @var{pytest_file} @}
+Check the intermediate format output of @command{gcov} with a pytest
+script.
 @end table
 
 @subsubsection Clean up generated test files
diff --git a/gcc/testsuite/lib/gcov.exp b/gcc/testsuite/lib/gcov.exp
index 4bcab1d3f1d..ed1fae0c13c 100644
--- a/gcc/testsuite/lib/gcov.exp
+++ b/gcc/testsuite/lib/gcov.exp
@@ -247,6 +247,19 @@ proc verify-calls { testname testcase file } {
     return $failed
 }
 
+proc gcov-pytest-format-line { args } {
+    global subdir
+
+    set testcase [lindex $args 0]
+    set pytest_script [lindex $args 1]
+    set output_line [lindex $args 2]
+
+    set index [string first "::" $output_line]
+    set test_output [string range $output_line [expr $index + 2] [string length $output_line]]
+
+    return "$subdir/$testcase ${pytest_script}::${test_output}"
+}
+
 # Call by dg-final to run gcov --json-format which produces a JSON file
 # that is later analysed by a pytest Python script.
 # We pass filename of a test via GCOV_PATH environment variable.
@@ -261,30 +274,34 @@ proc run-gcov-pytest { args } {
     set testcase [remote_download host $testcase]
     set result [remote_exec host $GCOV "$testcase -i"]
 
-    set result [remote_exec host "pytest -m pytest --version"]
+    set pytest_cmd "env python3 -m pytest --color=no -rA -s --tb=no"
+    set result [remote_exec host "$pytest_cmd --version"]
     set status [lindex $result 0]
     if { $status != 0 } then {
-      unresolved "could not find Python interpreter and (or) pytest module for $testcase"
+      unsupported "$subdir/$testcase run-gcov-pytest could not find Python interpreter and (or) pytest module"
       return
     }
 
     set pytest_script [lindex $args 1]
     setenv GCOV_PATH $testcase
-    verbose "pytest_script: $pytest_script" 2
-    spawn -noecho python3 -m pytest --color=no -rA -s --tb=no $srcdir/$subdir/$pytest_script
+    verbose "pytest_script: $srcdir $subdir $pytest_script" 2
+    spawn -noecho env python3 -m pytest --color=no -rA -s --tb=no $srcdir/$subdir/$pytest_script
 
     set prefix "\[^\r\n\]*"
     expect {
       -re "FAILED($prefix)\[^\r\n\]+\r\n" {
-       fail "$expect_out(1,string)"
+       set output [gcov-pytest-format-line $testcase $pytest_script $expect_out(1,string)]
+       fail $output
        exp_continue
       }
       -re "ERROR($prefix)\[^\r\n\]+\r\n" {
-       fail "$expect_out(1,string)"
+       set output [gcov-pytest-format-line $testcase $pytest_script $expect_out(1,string)]
+       fail $output
        exp_continue
       }
       -re "PASSED($prefix)\[^\r\n\]+\r\n" {
-       pass "$expect_out(1,string)"
+       set output [gcov-pytest-format-line $testcase $pytest_script $expect_out(1,string)]
+       pass $output
        exp_continue
       }
     }
-- 
2.29.2
