Just two cents (or a little more) from the peanut gallery...

On Tue, Mar 21, 2006 at 10:45:42PM -0500, Geir Magnusson Jr wrote:
> Tim Ellison wrote:
> >Just to clarify terminology -- unit tests are a 'style' of test that
> >focus on particular units of functionality.  Unit tests can be both
> >implementation tests and API tests.  Implementation tests are specific
> >to our implementation (the mechanism, hidden to the end user, by which
> >we chose to implement the APIs); and API tests are common to all
> >conformant implementations (they test the APIs used by the end user).
> 
> So can we refer to "implementation tests" as "unit tests", because I 
> would bet that's a well-understood usage, and refer to things that are
> strictly testing the API as "API tests".

Thinking more about all this verbiage, and looking at a bunch of "unit
tests" in many Apache packages, I think the definitions are inherently
too vague to get consensus on. It comes down to "what is a unit", and
that is an age-old discussion (see: metric system vs inches) we should
not try to have.

It gets us into arguments like "that is not a proper unit test". 'Why
not?' "The unit is too big." 'Well, our units are just bigger than yours,
you silly Brits!' "Why you little...!"

So I will suggest that we don't try to define "unit test", and that we
stop using the phrase when we want to make distinctions between kinds of
tests.

E.g. I would suggest that we bite the bullet and go with something like
this:

  "unit test" --> any test runnable by a "unit testing framework" such as
          JUnit or Cactus.

  "implementation test" --> a test run to verify that a specific piece
          of code, preferably as small a piece as is seperately
          testable, behaves as expected.

  "specification test" --> a test run to verify that an implementation is
          conformant with some specification, prefereably as small a piece
          of the specification for which a test can be defined.

  "API test" --> a specification test where the specification takes the
          form of an API definition (perhaps a java interface with
          supporting javadocs, perhaps just javadocs, perhaps IDL...)

  "tck test" --> any test defined as part of something that is called a
          "TCK" or technology compatibility kit. TCK tests are
          supposed to be specification tests.
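
To make the distinction concrete, a minimal sketch (the class name is
mine, and java.util.HashMap merely stands in for whatever API is under
test):

import junit.framework.TestCase;

// Checks only behaviour the javadoc promises, so this is a
// specification test (and an API test); an implementation test
// would instead reach into our o.a.h.* internals.
public class HashMapApiTest extends TestCase
{
  public void testPutThenGet()
  {
    java.util.HashMap map = new java.util.HashMap();
    map.put( "key", "value" );
    assertEquals( "value", map.get( "key" ) );
  }
}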

> >Geir Magnusson Jr wrote:
> >>Good unit tests are going to be testing things that are package
> >>protected.  You can't do that if you aren't in the same package
> >>(obviously).
> >
> >We have implementation tests that require package private, and maybe
> >even private access to our implementation classes both in the java.* and
> >o.a.h.* packages.

This seems correct.

> >The 'problem' is that we cannot define classes in java.* packages that
> >are loaded by the application classloader.  That is counter to
> >specification and prohibited by the VM.
> >
> >We also have API tests that should not have access to package private
> >and even private types in the implementation.

This seems correct too.

> >The 'problem' is that running API tests in java.* packages does provide
> >such access, and worse runs those tests on the bootclassloader which
> >gives them special security access not afforded to our users. 

This makes sense.

> > I've said this lots of times before. 

Usually that means one is not coming across well, not that people aren't
trying to listen or anything like that :-)

> > We already see lots of errors caused by
> >oversight of the classloader differences.
> 
> Right.  And I think the solution is to think about this in some other 
> way than just running things in a VM, like a test harness that does the 
> right thing in terms of the classes being tested (what would be in the 
> boot classloader) and the classes doing the testing.

I don't know about that. I'm sure that if the problem is well-defined
enough, solutions will become apparent, and I still don't quite get why
it is the subject of continuous debate (e.g. can't someone just go out
and try to do what you propose and show it works?).
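
For what it's worth, here is a rough sketch of the kind of experiment I
mean (the "classes/" directory and test class name are made up, and real
java.* classes would still need -Xbootclasspath tricks that this glosses
over):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

// Load the classes under test through a dedicated classloader rather
// than whatever loader the harness itself happens to run in.
public class LoaderExperiment
{
  public static void main( String[] args ) throws Exception
  {
    URL[] path = { new File( "classes/" ).toURL() };
    ClassLoader loader = new URLClassLoader( path );
    Class c = loader.loadClass( "some.pkg.SomeTest" );
    System.out.println( "loaded by: " + c.getClassLoader() );
  }
}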

> >>With the "custom" of putting in things in o.a.h.t are we
> >>implicitly discouraging good testing practice?
> >
> >This is laughable.
> 
> You are going to have to explain why it's "laughable".  If you are 
> testing a.b.c.Foo and you have to do it from a.b.c.test.FooTest, how can 
> you ever do implementation testing of Foo?  It's not an unreasonable 
> question.  Certainly not "laughable".

In general, dismissing what someone else thinks as laughable is not very
conducive to working together. I thought the question was phrased in a
very thought-provoking manner :-).

In any case, the obvious answer to the question is that you can do it by
writing your implementation so that it is implementation-testable in that
manner. This means not (or almost not) using package-private access
modifiers anywhere. If "protected" can make sense, you get to do things
such as

import junit.framework.TestCase;

public class MyTestCase extends TestCase
{
  // Assumes My has an accessible no-arg constructor and a protected
  // method foo().  Subclassing widens foo()'s access for the test;
  // calling it through a plain My reference from another package
  // would not compile.
  public static class MyExtended extends My
  {
     public Object exposedFoo()
     {
       return foo();
     }
  }

  public void testFoo()
  {
    MyExtended m = new MyExtended();
    Object result = m.exposedFoo();
    // ... assert whatever foo()'s contract promises about result
  }
}

If "protected" does not make sense, you can put the "real" implementation
in some other package, and then the package-private stuff is nothing more
than a facade for that real implementation (you still can't
implementation-test the facade. What you can do is to use code generation
to create the facade, and then implementation test the code generation.
Or just not bother). Eg

--
package java.foo;

import o.a.h.j.foo.FooImpl;

class Foo { /* package private */
  private final FooImpl f = new FooImpl();

  void foo()
  {
    f.foo();
  }
}
--
package o.a.h.j.foo;

public class FooImpl
{
  public void foo()  // readily testable, cuz public
  {
    /* ... */
  }
}
--

The last option I'm aware of is to resort to reflection, since the
runtime type system can bypass any and all access restrictions if you
have the appropriate security manager (or none at all), but that leads
to rather painful and error-prone test code.
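
For example, a minimal sketch against the hypothetical Foo facade above
(it sidesteps the java.* classloading issues already discussed, and
assumes either no security manager or one that grants the
suppressAccessChecks permission):

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import junit.framework.TestCase;

public class FooReflectionTest extends TestCase
{
  public void testFoo() throws Exception
  {
    Class c = Class.forName( "java.foo.Foo" );

    // Both the constructor and the method are package-private, so
    // setAccessible() is needed to bypass the access checks.
    Constructor ctor = c.getDeclaredConstructor( new Class[ 0 ] );
    ctor.setAccessible( true );
    Object foo = ctor.newInstance( new Object[ 0 ] );

    Method m = c.getDeclaredMethod( "foo", new Class[ 0 ] );
    m.setAccessible( true );
    m.invoke( foo, new Object[ 0 ] );
  }
}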

There is also the possibility that all the package-private material in
reality is fully exercised if you test the public parts of the package
thoroughly enough. A coverage utility like Clover can show that. XP
(extreme programming) purists (like me) might argue that package-private
stuff that is not exercisable through the public API needs to be factored
out. But let's try not to argue too much :-)

> >>Given that this
> >>o.a.h.t.* pattern comes from Eclipse-land, how do they do it? 

I doubt it comes from Eclipse-land. If ViewCVS weren't locked for CVS,
I could probably find you code from 1997 at the ASF that has a .test.
package in the middle.

> >> I
> >>couldn't imagine that the Eclipse tests don't test package protected
> >>things.
> >
> >The only thing shared with Eclipse-land here is the *.tests.* package
> >name element, hardly significant or unique I expect.
> 
> Well, it is around here. While I haven't done a survey, I'm used to 
> projects keeping things in parallel trees to make it easy to test. 

If with "here" you mean "the ASF" I'm happy to challenge the assertion :-)

> Granted, projects don't have the problem we have.
> 
> The thing I'm asking for is this - how in Eclipse-land do they test 
> package protected stuff?  How do they do implementation tests?

I suspect it's one or more of the above. For my own code, I tend to
design it so that implementation tests are not necessary - e.g. I build
a large number of specification tests (API tests) and verify that the
code coverage from running the API tests is 100%. Of course we don't
have that luxury (the API is already defined, and most of it probably
wasn't designed with this whole "purist" testing thing in mind).

> >Eclipse testing does not have the java.* namespace issues with
> >classloaders that we have got.
> 
> Right, but that's a classloader and security manager issue for the 
> testing framework, isn't it?
> 
> Hypothetically....suppose we decided (for whatever reason) that we 
> weren't going to test in situ to get better control of the environment. 
>  What would you do?

What does "in situ" mean?

> >>I've been short of Round Tuits lately, but I still would like to
> >>investigate a test harness that helps us by mitigating the security
> >>issues...
> >
> >Today we run all our tests in one suite on the classpath.  They are API
> >tests.
> 
> I hope they are more than API tests.

See above for why one could hope they don't need to be more than API
tests (I doubt it, but in terms of what would be *nice*...)

> >I expect that we will at least have another test suite of implementation
> >tests.
> >
> >However, over the last few weeks we have been discussing the other
> >'dimensions' of testing that we want to embody, and we haven't settled
> >on a suitable way of representing those different dimensions.  Filenames
> >for testcases may do it if we can squeeze in enough information into a
> >filename (I don't like that approach, BTW)
> 
> I don't either.
> 
> , or explicitly defining
> >different suites of tests.
> 
> Which makes sense.

Yup. It could even make sense to build some rather large extensions to
JUnit to make all this stuff more manageable (e.g. we *can* do stuff like

class MyApiTest extends AbstractHarmonyTestCase
{
  static { markTestStyle( API ); }

  /* ... */
}

class MyImplTest extends AbstractHarmonyTestCase
{
  static { markTestStyle( IMPL ); }

  /* ... */
}

, or similar things using 1.5 annotations).
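
A sketch of the annotation flavour (the @TestStyle annotation is made up
for illustration, not an existing API; the harness would read it
reflectively when assembling the API vs implementation suites):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical marker annotation the harness could look for.
@Retention( RetentionPolicy.RUNTIME )
@interface TestStyle
{
  String value();
}

@TestStyle( "API" )
class MyAnnotatedApiTest extends AbstractHarmonyTestCase
{
  /* ... */
}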


cheers!


Leo
