Mishura, Stepan M wrote:
> Tim Ellison wrote:
<snip>
>>You don't want to run all your tests on the bootclasspath ...
<snip>
> 
> Not 'all' tests - just unit tests for classlib :-)
> 
> I don't see big problems with the sandbox and running unit tests on the
> bootclasspath. The sandbox consists of three elements: the verifier, the
> class loader and the security manager. IMHO, most classlib unit tests
> don't depend on the sandbox and are designed to verify only the
> functionality of the class under test. For example, if I want to verify
> that 1+1=2, why do I have to remember about the sandbox?

Of course, if you are only testing 1+1=2 then (hopefully!) you will get
the same result whether that test is run on the classpath or bootclasspath.

When you are running unit tests of Java APIs that are implemented as
calls into other areas of the class library, then they may well have
different behaviour.

If your unit tests are intended to test API, then they should be calling
the API in the same manner that an application will call the API.  Just
look at the number of places where we check to see if the classloader ==
null to see where this matters.
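
For illustration, here is a minimal sketch of the kind of check I mean
(the permission name is made up; the point is that classes loaded by the
bootstrap loader report a null classloader, so the same call behaves
differently depending on where the caller was loaded from):

    static void checkCallerAccess(Class caller) {
        ClassLoader loader = caller.getClassLoader();
        if (loader == null) {
            // caller was loaded from the bootclasspath, so it is trusted
            return;
        }
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // application callers go through the security manager
            sm.checkPermission(new RuntimePermission("exampleAccess"));
        }
    }

A test running on the bootclasspath never reaches the checkPermission
call, so it is not exercising what an application would actually see.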

> Some of them may depend on the security manager component if
> you want to run a test in a security-restricted environment. But that
> is not a big issue: the security manager can be configured
> dynamically, so you just install your custom security manager before
> running the test.

Yes, we should do that on the application classloader to test security
manager functionality.
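
As a rough sketch (JUnit 3.x style assumed, and the check being denied is
purely illustrative), a test can swap in its own security manager around
the code under test:

    import junit.framework.TestCase;

    public class RestrictedEnvTest extends TestCase {

        private SecurityManager previous;

        protected void setUp() {
            previous = System.getSecurityManager();
            System.setSecurityManager(new SecurityManager() {
                public void checkExit(int status) {
                    // forbid one specific action for the purposes of the test
                    throw new SecurityException("exit not allowed in tests");
                }
                public void checkPermission(java.security.Permission perm) {
                    // allow everything else, including restoring the manager
                }
            });
        }

        protected void tearDown() {
            System.setSecurityManager(previous);
        }

        public void testExitIsRejected() {
            try {
                System.exit(1);
                fail("expected a SecurityException");
            } catch (SecurityException e) {
                // expected - the custom manager vetoed the call
            }
        }
    }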

Regards,
Tim

> What do you think?
> 
> Thanks,
> Stepan Mishura
> Intel Middleware Products Division
> 
> 
>>-----Original Message-----
>>From: Tim Ellison [mailto:[EMAIL PROTECTED]
>>Sent: Thursday, January 12, 2006 8:04 PM
>>To: harmony-dev@incubator.apache.org
>>Subject: Re: Test suite layout
>>
>>Geir Magnusson Jr wrote:
>>
>>>Tim Ellison wrote:
>>
>><snip>
>>
>>>>We would have written it as java.io.tests, but the java.<whatever>
>>>>namespace is reserved, so the formula is simply
>>>>
>>>><package>.<type>  ->   org.apache.harmony.tests.<package>.<type>Test
>>>
>>>
>>>Ug - then you have the problem of not being in the same namespace as
>>>what you are testing.
>>>
>>>That's why people use parallel trees - so your test code is physically
>>>separate but you have freedom of package access.
>>
>>For 'normal' application code then you can do this, but since we are
>>writing the java packages themselves then you come unstuck because the
>>java packages have to run on the bootclasspath, and the tests on the
>>application classpath.
>>
>>You don't want to run all your tests on the bootclasspath (because then
>>it would not be subject to the same security sandboxing as applications
>>using the APIs you are testing); and you cannot put java.<whatever> on
>>the application classpath because the VM will catch you out (you'd get
>>a linkage error IIRC).
>>
>>
>>>>This makes it clear what is being tested, and where to add new tests etc.
>>>>
>>>
>>>So would
>>>
>>>  test/org/apache/harmony/io/TestFoo.java
>>>
>>>(to test something in org.apache.harmony.io, and arguably to test the
>>>Foo.java class in there - or TestFoo.java; it's early - no coffee yet)
>>
>>Not sure what you are saying here... For java.<whatever> packages we
>>need a prefix on the test packages to keep the VM happy, but for
>>org.apache.harmony packages we can have either pre- or post-.
>>
>>I'd actually prefer a postfix of .tests for non-API packages, though I
>>can understand if people object to the inconsistency; so
>>
>>org.apache.harmony.tests.java.io.FileTest.java      <- test API
>>org.apache.harmony.io.tests.FileImplTest.java  <- test public methods
>>                                                  in our IO impl'
>>
>>
>>>Similarly
>>>
>>>  test/java/util/TestMap.java
>>>
>>>
>>>>Then within the test class itself the methods are named after the
>>>>method under test, with a familiar JNI-style encoding, so we have things
>>>>like:
>>>>
>>>>org.apache.harmony.tests.java.io.FileTest contains
>>>>    public void test_ConstructorLjava_io_FileLjava_lang_String() {
>>>>    ...
>>>>    }
>>>>
>>>>and
>>>>
>>>>org.apache.harmony.tests.java.lang.FloatTest contains
>>>>    public void test_compareToLjava_lang_Float() {
>>>>    ...
>>>>    }
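
Spelled out in full, that second example would be a class roughly like
the sketch below (JUnit 3.x TestCase assumed; the assertions are purely
illustrative):

    package org.apache.harmony.tests.java.lang;

    import junit.framework.TestCase;

    public class FloatTest extends TestCase {

        // exercises Float.compareTo(Float); the method name encodes the
        // signature of the method under test, JNI-style
        public void test_compareToLjava_lang_Float() {
            assertTrue(new Float(1.0f).compareTo(new Float(2.0f)) < 0);
            assertTrue(new Float(2.0f).compareTo(new Float(1.0f)) > 0);
            assertEquals(0, new Float(1.0f).compareTo(new Float(1.0f)));
        }
    }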
>>>
>>>
>>>...or whatever the convention is for JUnit.  I think that's one of the
>>>nice things about TestNG: it's annotated, so you have the freedom there.
>>>
>>>
>>>>
>>>>If the test is added due to a regression, then it is put into the
>>>>right place in the test suite, and flagged with a comment (i.e. a
>>>>reference to the Harmony JIRA number).
>>>
>>>
>>>Yes - and I'd even advocate a parallel directory there too so that
>>>it's clear that the regressions are different, but whatever.  The only
>>>snag there is name collision with the classes.
>>
>>I thought we'd agreed that 'regression' was not a useful classification
>>within the test suite layout ...
>>
>>
>>>I think that a simple comment is enough.  If we want to get cute,
>>>maybe a javadoc tag so we can manage mechanically in the future.
>>
>>ok -- do you have a usecase in mind?
>>
>>Regards,
>>Tim
>>
>>
>>
>>>>George Harley1 wrote:
>>>>
>>>>
>>>>>Hi,
>>>>>
>>>>>
>>>>>
>>>>>>I think that regression tests should be marked in some way.
>>>>>
>>>>>
>>>>>
>>>>>Agreed.  But can we please *resist* the temptation to do this by
>>>>>incorporating JIRA issue numbers into test case names (e.g. calling
>>>>>unit test methods test_26() or test_JIRA_26()). I've seen this kind
>>>>>of approach adopted in a couple of projects and, in my experience,
>>>>>it often leads to the scattering of duplicated test code around the
>>>>>test harness.
>>>>>
>>>>>Better, methinks, to either create a new test method with a
>>>>>meaningful name or else augment an existing method - whatever makes
>>>>>more sense for the particular issue. Then marking certain code as
>>>>>being for regression test purposes could be done in comments that
>>>>>include the URL of the JIRA issue. Perhaps an agreed tag like
>>>>>"JIRA" or "BUG" etc. could be used as an eye-catcher as well?
>>>>>e.g.
>>>>>// BUG http://issues.apache.org/jira/browse/HARMONY-26
>>>>>
>>>>>
>>>>>My 2 Euro Cents.
>>>>>Best regards, George
>>>>>________________________________________
>>>>>George C. Harley
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>"Mishura, Stepan M" <[EMAIL PROTECTED]> 12/01/2006 04:56
>>>>>Please respond to
>>>>>harmony-dev@incubator.apache.org
>>>>>
>>>>>
>>>>>To
>>>>><harmony-dev@incubator.apache.org>
>>>>>cc
>>>>>
>>>>>Subject
>>>>>RE: regression test suite
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>Hello,
>>>>>Tim Ellison wrote:
>>>>>[snip]
>>>>>
>>>>>
>>>>>
>>>>>>What is the useful distinction for regression tests being kept
>>>>>>separate?
>>>>>
>>>>>
>>>>>
>>>>>>I can see that you may distinguish unit and 'system-level' tests just
>>>>>>because of the difference in frameworks etc. required, but why do I care
>>>>>>if the test was written due to a JIRA issue or test-based development or
>>>>>>someone who gets kicks out of writing tests to break the code?
>>>>>>
>>>>>
>>>>>
>>>>>I agree that separating regression tests doesn't make sense.
>>>>>However I think that regression tests should be marked in some way.
>>>>>This will signal to a developer that a test was created to track an
>>>>>already known issue. IMHO, a regression test should point to a bug
>>>>>report, and a bug report (once the bug is resolved) should contain a
>>>>>reference to the corresponding regression test in the repository.
>>>>>
>>>>>Thanks,
>>>>>Stepan Mishura
>>>>>Intel Middleware Products Division
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>--
>>
>>Tim Ellison ([EMAIL PROTECTED])
>>IBM Java technology centre, UK.
> 
> 

-- 

Tim Ellison ([EMAIL PROTECTED])
IBM Java technology centre, UK.
