On Mon, May 06, 2024 at 09:25:37AM GMT, Thomas Huth wrote:
> On 04/05/2024 14.28, Nicholas Piggin wrote:
> > There are times we would like to test a function that is known to fail
> > in some conditions due to a bug in implementation (QEMU, KVM, or even
> > hardware). It would be nice to count these as known failures and not
> > report a summary failure.
> > 
> > xfail is not the same thing, xfail means failure is required and a pass
> > causes the test to fail. So add kfail for known failures.
> 
> Actually, I wonder whether that's not rather a bug in report_xfail()
> instead. Currently, when you call report_xfail(true, ...), the result is
> *always* counted as a failure, either as an expected failure (if the test
> really failed), or as a normal failure (if the test succeeded). What's the
> point of counting a successful test as a failure??
> 
> Andrew, you originally introduced report_xfail in commit a5af7b8a67e;
> could you please comment on this?
> 

An expected failure passes when the test fails and fails when the test
passes, i.e.

  XFAIL == PASS (but separately accounted with 'xfailures')
  XPASS == FAIL

If we expect something to fail and it passes, then either the thing has
been fixed, in which case we should change the test to expect success, or
the test was written incorrectly for our expectations. Either way, when an
expected failure doesn't fail, our expectations are wrong and we need to
be alerted to that, hence a FAIL is reported.

Thanks,
drew

> IMHO we should rather do something like this instead:
> 
> diff --git a/lib/report.c b/lib/report.c
> --- a/lib/report.c
> +++ b/lib/report.c
> @@ -98,7 +98,7 @@ static void va_report(const char *msg_fmt,
>                 skipped++;
>         else if (xfail && !pass)
>                 xfailures++;
> -       else if (xfail || !pass)
> +       else if (!xfail && !pass)
>                 failures++;
> 
>         spin_unlock(&lock);
> 
>  Thomas
> 
