On 28.02.19 at 14:44, Vladimir Sitnikov wrote:
Hi,

I happen to maintain software, and I see a common pitfall when it comes to
unit tests.
Please don't take it personally, but please do consider it when creating new
tests and/or when reviewing patches.

I'm afraid the text below might be too long; however, it applies to JMeter
committers, to JMeter contributors, and to developers in general.

TL;DR:

General rule: a failing test should read like a good bug report. Thus a bare
Assert.fail() is bad.
Agreed, at least without a meaningful description message.

Consider using "single assertion" per test method. Having separate test
methods helps manual execution of the tests, and it makes test report
cleaner

Consider using assertEquals(String message, expected, actual) instead of
assertTrue(expected == actual). The former allows you to provide a
human-readable message, and it integrates well with IDEs (i.e. it allows
opening a diff of expected and actual).

If you use just assertTrue(expected == actual), all you get is a stack
trace, and if such a test fails, a developer has to reverse-engineer the
intention behind that code.
+1
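
To illustrate the difference, a minimal contrast (the names and values here
are hypothetical):

// On failure this prints just "java.lang.AssertionError" plus a stack trace:
assertTrue(expectedCount == actualCount);

// On failure this prints something like "sample count expected:<5> but was:<3>",
// and IDEs can open a diff of the two values:
assertEquals("sample count", expectedCount, actualCount);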

== Problem statement ==

Sample test code (it was added as part of Bug 62785):

SampleResult sample = sampler.sample();
assertFalse(sample.getResponseDataAsString().contains("java.io.FileNotFoundException:"));

Even though the test makes sure the code works, there's a maintenance
problem with this test.
When the test fails, the output is very cryptic:

java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at org.apache.jmeter.protocol.http.sampler.TestHTTPSamplers.testRawBodyFromFile(TestHTTPSamplers.java:385)

Of course we can see that an assertFalse was violated; however, it is
completely unclear why the test failed and what it was trying to do.

== Desired behaviour ==

A test failure should read like a good bug report.
For instance, if a simple copy&paste of the "AssertionError" plus the stack
trace yields a decent bug report, then the assert is good.

== Possible approaches ==

Note: which approach fits best is a case-by-case decision.

A) Use the message parameter of assert* calls.
For instance:

assertEquals("Response message should not contain FileNotFoundException",
        false,
        sample.getResponseDataAsString().contains("FileNotFoundException:"));

Note that it would still produce rather weird output like "expected false,
got true".
In general, using assertEquals to compare booleans is almost always wrong.
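
Within "A", assertFalse(String message, boolean condition) reads less
awkwardly; a sketch (note that the response data still has to be appended by
hand for the failure to be self-explanatory):

String body = sample.getResponseDataAsString();
assertFalse("Response must not contain FileNotFoundException. Response was: " + body,
        body.contains("java.io.FileNotFoundException:"));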

B) Use if + Assert.fail:

if (sample.getResponseDataAsString().contains("FileNotFoundException:")) {
    Assert.fail("Response must not contain FNFE");
}

C) Use Hamcrest matchers. For instance:

assertThat(sample.getResponseDataAsString(),
        not(containsString("java.io.FileNotFoundException:")));

Of course "C" might be less verbose in this case, however there's
**extremely** important improvement which "C" brings over A and B (it can
be fixed though, but I made a deliberate mistake there).
...
...
Ok, you are great if you discovered it: assertThat prints the actual
response data in case the assert fails. In other words, we would know what
the response was.
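
The failure would then read roughly like this (the exact format depends on
the Hamcrest version, and the actual response data would appear in full):

java.lang.AssertionError:
Expected: not a string containing "java.io.FileNotFoundException:"
     but: was "...the full response data..."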

I don't get what you are after here. Is this meant to be sarcasm?

Your original version of "B" included the response string to make it more verbose and more tester-friendly, so in my eyes they are rather equivalent in output.
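
Presumably something like the following (a reconstruction, not the literal original):

if (sample.getResponseDataAsString().contains("FileNotFoundException:")) {
    Assert.fail("Response must not contain FNFE. Response was: "
            + sample.getResponseDataAsString());
}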

What I really like about "C" is that it is all code and produces a "standardized" output, which I expect a tester to be familiar with. I often find myself struggling to find the right words to state the expectation of an assertion: should I print the thing I want, the thing that failed, or both?


D) Gradle + Spock might improve readability as well (see
https://twitter.com/VladimirSitnikv/status/1100666414548045825 )

== Ok, I use message/assertThat, is it enough? ==
No. assertThat does not always help.
For instance, take a look at TimerServiceTest:
https://github.com/apache/jmeter/blob/3d03af3d78746d67e7a81d2481e5e741b3907664/test/src/org/apache/jmeter/timers/TimerServiceTest.java#L34

public void testBigInitialDelay() {
    long now = System.currentTimeMillis();
    long adjustedDelay = sut.adjustDelay(Long.MAX_VALUE, now + 1000L);
    // As #adjustDelay uses System#currentTimeMillis we can't be sure that
    // the value is exactly 1000L
    Assert.assertThat(Math.abs(adjustedDelay - 1000L) < 150L,
            CoreMatchers.is(true));
}

Does the test look good to you?
...
...
Felix?

No need to call me out :) I didn't like the test when I wrote it, but at least I wrote it. It is a really good example of bad assertThat usage.

I think it would be better to have a Matcher that matches with a bit of tolerance. Then we could write

Assert.assertThat(adjustedDelay, ToleranceMatcher.isAbout(1000L, 150L))

which could result in an error message like

Expected: is about <1000L> with tolerance of <150L>
  but was: <800L>
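
Hamcrest does not ship such a matcher for longs (it only has closeTo for
doubles and BigDecimals), but a minimal sketch could look like this; the
class name and the isAbout factory are made up to match the usage above:

import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;

public class ToleranceMatcher extends TypeSafeMatcher<Long> {
    private final long expected;
    private final long tolerance;

    private ToleranceMatcher(long expected, long tolerance) {
        this.expected = expected;
        this.tolerance = tolerance;
    }

    // Hypothetical factory mirroring the proposed usage
    public static Matcher<Long> isAbout(long expected, long tolerance) {
        return new ToleranceMatcher(expected, tolerance);
    }

    @Override
    protected boolean matchesSafely(Long actual) {
        return Math.abs(actual - expected) <= tolerance;
    }

    @Override
    public void describeTo(Description description) {
        // Becomes the "Expected: ..." part of the failure message
        description.appendText("is about ").appendValue(expected)
                .appendText(" with tolerance of ").appendValue(tolerance);
    }
}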

...
Ok, if you can spot the error, you are great.
The test allows shorter delays than expected (but I thought that would be OK and unlikely), and it probably has problems with really large negative delays. But I would be interested in the error you have found.

The problem here is that even though assertThat is used, the test failure
won't explain the nature of the test. It won't explain WHY "true" was
expected, and so on.

A proper assert should look like the following:

if (...) { // Or Spock or whatever
    Assert.fail("TimerService.sut.adjustDelay(Long.MAX_VALUE, now + 1000L)"
            + " should return something close to 1000 since now+1000 would anyway be"
            + " less than Long.MAX_VALUE. Actual result is " + adjustedDelay);
}

What I see as a problem here is that you repeat values that were used in the test above. That is easily overlooked when the test is refactored, and, as stated above, the programmer has to write good text.

On the other hand, this message is nice.


== Next steps ==

0) The most common code smell is "computations in assert* messages".

1) "Constant" messages do not really help.
For instance, assertEquals("Incorrect entry returned",expected[i], out[i]);
is a bad message.
It does not explain "why" something was expected. What was the input? What
was the action? Nothing of that is present in "incorrect entry returned"
message.
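
A message that carries that context could look like this (input and i are
hypothetical names for whatever drives the test loop):

assertEquals("Entry " + i + " returned for input '" + input[i] + "'",
        expected[i], out[i]);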

2) If you create a test assertion or an assert in the production code, think
about how the failure will look.
You might even want to temporarily put in wrong values and check whether the
message is understandable and actionable.

3) If you review a patch, please pay attention to the way asserts are
written.
Note: a reviewer often sees only the positive side of the test. In other
words, CI says that the tests pass. However, the reviewer does NOT see how a
failure would look in CI.

+1

4) Review patches and give feedback :)


Great applause to everybody who reached this line. Comments welcome.

Thanks for your thorough analysis and constructive feedback.

As always, thoughtful and well put.

Felix


Vladimir
