I have been following this thread, and it strongly reminds me of Joe Darcy's parables of elephants and blind men. [1,2]

In this context, the discussion has been about testing, and the underlying presumption that one size fits all.

I venture to suggest that one size does /not/ fit all, and that we have to be able to support a wide range of testing paradigms.

1) Some folk will want to do black box testing, with their tests contained in their own first class module, to exercise the code the way that real clients would (see the sketch below)

2) Some folk will want to do black box testing, with their test code on the classpath, in the unnamed module, because they don't want to update their test code

3) Some folk will want to do white box testing, using code conventions in widespread use today, where the test code is alongside the code being tested, in the same package. This generates two sub-cases: a) the test code is compiled into the module itself, in a special test-specific build, or b) the test code is dynamically patched into the module.

4) Some folk will want to do white box testing, but will be less concerned about the package used to contain the test code. That has the same two sub-cases: a) the test code is compiled into the module itself, in a special test-specific build, or b) the test code is dynamically patched into the module.

Now take all those scenarios, and cross-product them with further questions, such as: is there a test framework involved, such as JUnit or TestNG, and, if so, where should that framework be placed? Can it be an automatic module, or is it sufficient to put the framework on the classpath, in the unnamed module?
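
For instance, case 1 above, with the framework as an automatic module, might be declared along the following lines. This is only a sketch: the module names are made up, and 'junit' is assumed to be the automatic module name derived from a JUnit jar placed on the module path.

    // module-info.java for a hypothetical black box test module
    module com.example.app.test {
        requires com.example.app;   // the module under test, used only via its exported API
        requires junit;             // the test framework, here as an automatic module
    }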

When you understand all that, then you begin to see the shape of the elephant in the room that is the testing problem.

But, to stretch the analogy to breaking point, it's not the only elephant in the room. There's another, different, elephant called "strong encapsulation", and there is a strong conflict between the desire for easy white box testing and the desire for strong encapsulation. While the past 20 years of Java have led to easy, simple white box testing, leveraging split packages by adding jars onto the class path, we've also seen the problems that such techniques can lead to ... hence the desire for Project Jigsaw, and the requirement [3] for strong encapsulation. We have to accept that the better we succeed at strong encapsulation, the harder it will be to use the old ways of white box testing. Conversely, the more we hold to the old ways of working, the less we will succeed at satisfying the goal of strong encapsulation.

So, ultimately, the trick is to figure out how to walk the tightrope between the two elephants in the room. Previously, as a community, we've built tools to help manage the task. Now, it is time for the tools and our practices to evolve, to meet the new demands of testing in a modular world.

Do we have all the answers today? Almost certainly not. But I will note one of the unheralded success stories of Jigsaw and OpenJDK. Testing has always been important, and we have managed to keep running almost all of the JDK unit and regression tests, with little to no change in functionality for the vast majority of them, and we have started supporting new testing scenarios as well. Looking back at the list I gave at the beginning: support for case 1 is becoming available, but is not yet widely used; many JDK tests use case 2; the JDK has a number of tests that use cases 3b and 4b; and we have tests that use both JUnit and TestNG.

So, command line options like -XaddExports, -Xpatch, -Xmodule, etc., may not be pretty, but they can be composed to cover a wide range of testing scenarios, without giving up too much on the goal of strong encapsulation, and that at least is some degree of success.
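
To give a flavour of how they can be composed for the white box cases (3b/4b), something along the following lines is possible. Again, this is only a sketch: these are the interim option spellings (together with siblings like -XaddReads), whose exact syntax has varied between EA builds, and every module, package, path, and class name below is made up for illustration.

    # 1. Compile the white box tests as members of the module under test,
    #    so that they can use its package-private API. The JUnit jar sits
    #    on the module path as an automatic module named 'junit', and an
    #    extra readability edge is added because the module under test
    #    does not itself 'require' the test framework.
    javac -Xmodule:com.example.app \
          -mp mods:lib \
          -XaddReads:com.example.app=junit \
          -d patches/com.example.app \
          $(find test -name '*.java')

    # 2. Run with the compiled tests patched into the module. Here the
    #    framework is on the classpath instead, in the unnamed module, so
    #    the package containing the tests is exported to ALL-UNNAMED to let
    #    the framework reflect over the (public) test classes.
    java -Xpatch:com.example.app=patches/com.example.app \
         -XaddExports:com.example.app/com.example.app.internal=ALL-UNNAMED \
         -mp mods -addmods com.example.app \
         -cp lib/junit.jar:lib/hamcrest.jar \
         org.junit.runner.JUnitCore com.example.app.internal.WidgetTest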

-- Jon

[1] https://blogs.oracle.com/darcy/entry/project_coin_elephants_knapsacks_language
[2] http://en.wikipedia.org/wiki/Blind_Men_and_an_Elephant
[3] http://openjdk.java.net/projects/jigsaw/spec/reqs/