Re: [Testing Convention] Keep tests small and fast

2006-03-30 Thread Alex Orlov
 [SNIP]

 IMHO, this relates to stress tests and load tests only. This means that we
 shouldn't put such kinds of tests in a 'regular test suite'. The 'regular
 test suite' is used to verify regressions only. Returning to a test's
 size, I think it is up to the developer - we can only recommend not to test all
 functionality in one test case and to split independent parts into a number of
 test cases. But IMHO we cannot fully avoid creating 'redundant code', such
 as "Assert 1:", "Assert 2:", etc. For example, if there is a constructor
 with several parameters and get-methods that return the provided parameters, then I
 wouldn't create 3 tests instead of the following one:

 public void test_Ctor() {
     Ctor c = new Ctor(param1, param2, param3);

     assertEquals("Assert 1", param1, c.getParam1());
     assertEquals("Assert 2", param2, c.getParam2());
     assertEquals("Assert 3", param3, c.getParam3());
 }

Hi folks,

I actually agree with Stepan. It all depends on the developers, since
people do what is convenient and generally makes sense. However,
from the tester's point of view (mine, as you can guess :) ), when you
check several assertions in one test (like in
org.apache.harmony.tests.java.util.jar.test_putLjava_lang_ObjectLjava_lang_Object),
if one assertion is broken you won't see what's going on with the others
until you get that one fixed.
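
For illustration, here is a minimal sketch in JUnit 3 style - Ctor and the param
values are just the hypothetical names from Stepan's example, not real Harmony
code - of how splitting the assertions into separate test methods lets each
getter fail and be reported independently:

// Sketch only: Ctor is the hypothetical class from the example above,
// and the param fields are made-up fixture values.
public class CtorTest extends junit.framework.TestCase {

    private final String param1 = "p1";
    private final String param2 = "p2";
    private final String param3 = "p3";

    private Ctor c;

    // Shared fixture: each test method gets a freshly constructed instance.
    protected void setUp() {
        c = new Ctor(param1, param2, param3);
    }

    // One getter per test method, so one failure does not hide the others.
    public void test_getParam1() {
        assertEquals(param1, c.getParam1());
    }

    public void test_getParam2() {
        assertEquals(param2, c.getParam2());
    }

    public void test_getParam3() {
        assertEquals(param3, c.getParam3());
    }
}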

BTW do we have any way to include stress/reliability tests in the project?

Thanks,
Alex Orlov.
Intel Middleware Products Division


 Thanks,
 Stepan.


 Thanks a lot.
 
  Richard Liang wrote:
   Dears,
  
   As I cannot find similar pages about testing conventions, I just created
   one with my rough ideas at
   http://wiki.apache.org/harmony/Testing_Convention, so that we can
   document our decision timely & clearly.
  
   Geir Magnusson Jr wrote:
  
  
   Leo Simons wrote:
   Gentlemen!
  
   On Mon, Mar 27, 2006 at 11:07:51AM +0200, mr A wrote:
   On Monday 27 March 2006 10:14, mr B wrote:
   On 3/27/06, mr C wrote:
   [SNIP]
   [SNIP]
   [SNIP]
   On 1/1/2006, mr D wrote:
   [SNIP]
   Hmmm... Lemme support [SNIP]
   Now let me support [SNIP].
  
   The ASF front page says
  
 (...) The Apache projects are characterized by a collaborative,
   consensus
 based development process,  (...)
  
   That's not just some boilerplate. Consensus is a useful thing.
  
   "How should we organize our tests?" has now been the subject of
   debate for
   *months* around here, and every now and then much of the same
   discussion is
   rehashed.
  
   And we're making progress.  IMO, it really helped my thinking to
   distinguish formally between the implementation tests and the spec
   tests, because that *completely* helped me come to terms with the
   whole o.a.h.test.* issue.
  
   I now clearly see where o.a.h.test.*.HashMapTest fits, and where
   java.util.HashMapTest fits.
  
   I don't think our issues were that obvious before, at least to me.
   Now, I see clearly.
  
  
   I think it would be more productive to look for things to agree on
   (such as, "we don't know, but we can find out", or "we have different ideas on
   that, but there's room for both", or "this way of doing things is not the
   best one but the stuff is still useful so let's thank the guy for his work
   anyway")
   than to keep delving deeper and deeper into these kinds of
   disagreements.
  
   Of course, the ASF front page doesn't say that Apache projects are
   characterized by a *productive* development process. It's just my
   feeling that for a system as big as Harmony we need to be *very*
   productive.
  
   You don't think we're making progress through these discussions?
  
  
   Think about it. Is your time better spent convincing lots of other
   people to do
   their testing differently, or is it better spent writing better tests?
  
   The issue isn't about convincing someone to do it differently, but
   understanding the full scope of problems, that we need to embrace
   both approaches, because they are apples and oranges, and we need
   both apples and oranges.  They aren't exclusionary.
  
   geir
  
  
  
 
 
  --
  Richard Liang
  China Software Development Lab, IBM
 
 
 


 --
 Thanks,
 Stepan Mishura
 Intel Middleware Products Division




Re: [Testing Convention] Keep tests small and fast

2006-03-30 Thread Richard Liang

Stepan Mishura wrote:

On 3/30/06, Richard Liang  wrote:
  

Dears,

I notice that we put all the test code into one big test method (for
example,

org.apache.harmony.tests.java.util.jar.test_putLjava_lang_ObjectLjava_lang_Object
).
This way we will lose some benefits of JUnit and even of unit testing:
1. Test code cannot share configuration code through setUp and tearDown (see the sketch below)
2. We have to add redundant code, such as "Assert 1:", "Assert 2:", etc.,
to make the test results more comprehensive
3. It makes the test code more complex

Shall we just use small test cases?
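
To illustrate point 1 above, here is a sketch only - the Attributes-based
fixture is a made-up example, not the actual Harmony test - showing how shared
configuration moves into setUp and tearDown once the big method is split into
small test methods:

// Sketch only: hypothetical small test class with a shared fixture.
public class AttributesPutTest extends junit.framework.TestCase {

    private java.util.jar.Attributes attrs;

    // Configuration shared by every small test method.
    protected void setUp() {
        attrs = new java.util.jar.Attributes();
    }

    // Release the fixture so the test methods stay independent.
    protected void tearDown() {
        attrs = null;
    }

    public void test_put_newKey() {
        attrs.put(java.util.jar.Attributes.Name.MANIFEST_VERSION, "1.0");
        assertEquals("1.0",
                attrs.getValue(java.util.jar.Attributes.Name.MANIFEST_VERSION));
    }

    public void test_put_overwritesExistingKey() {
        attrs.put(java.util.jar.Attributes.Name.MANIFEST_VERSION, "1.0");
        attrs.put(java.util.jar.Attributes.Name.MANIFEST_VERSION, "2.0");
        assertEquals("2.0",
                attrs.getValue(java.util.jar.Attributes.Name.MANIFEST_VERSION));
    }
}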

You may want to read the description at:
http://www.javaworld.com/javaworld/jw-12-2000/jw-1221-junit_p.html

*Keep tests small and fast*
Executing every test for the entire system shouldn't take hours. Indeed,
developers will more consistently run tests that execute quickly.
Without regularly running the full set of tests, it will be difficult to
validate the entire system when changes are made. Errors will start to
creep back in, and the benefits of unit testing will be lost. This means
stress tests and load tests for single classes or small frameworks of
classes shouldn't be run as part of the unit test suite; they should be
executed separately.
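
One hypothetical way to act on that advice with plain JUnit 3 suites - a sketch
only, not an agreed Harmony convention; the suite and test class names are
placeholders - is to keep stress/load tests in a suite of their own so the
regular suite stays fast:

// Sketch only (e.g. RegularTestSuite.java): the fast, regularly-run suite.
public class RegularTestSuite {
    public static junit.framework.Test suite() {
        junit.framework.TestSuite suite =
                new junit.framework.TestSuite("Regular (fast) tests");
        suite.addTestSuite(AttributesPutTest.class);   // small, fast unit tests only
        return suite;
    }
}

// Sketch only (e.g. StressTestSuite.java): long-running tests, executed separately.
public class StressTestSuite {
    public static junit.framework.Test suite() {
        junit.framework.TestSuite suite =
                new junit.framework.TestSuite("Stress and load tests");
        suite.addTestSuite(AttributesStressTest.class); // hypothetical stress test class
        return suite;
    }
}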




 Hi Richard,

IMHO, this relates to stress tests and load tests only. This means that we
shouldn't put such kinds of tests in a 'regular test suite'. The 'regular
test suite' is used to verify regressions only. Returning to a test's
size, I think it is up to the developer - we can only recommend not to test all
functionality in one test case and to split independent parts into a number of
test cases. But IMHO we cannot fully avoid creating 'redundant code', such
as "Assert 1:", "Assert 2:", etc. For example, if there is a constructor
with several parameters and get-methods that return the provided parameters, then I
wouldn't create 3 tests instead of the following one:

public void test_Ctor() {
    Ctor c = new Ctor(param1, param2, param3);

    assertEquals("Assert 1", param1, c.getParam1());
    assertEquals("Assert 2", param2, c.getParam2());
    assertEquals("Assert 3", param3, c.getParam3());
}

  

Hello Stepan,

Sometimes we do have to use several asserts to check the expected
situation. That's true. However, what I mean is: taking your
test_Ctor as an example, if we want to design many test cases to
verify the behavior of the constructor, the number of asserts may
increase dramatically.


Do you need just one test method or 4 methods for the following test?

public void test_Ctor() {
    Ctor c = new Ctor(param1, param2, param3);

    assertEquals("Assert 1", param1, c.getParam1());
    assertEquals("Assert 2", param2, c.getParam2());
    assertEquals("Assert 3", param3, c.getParam3());

    Ctor c2 = new Ctor(null, param2, param3);

    assertNull("Assert 4", c2.getParam1());
    assertEquals("Assert 5", param2, c2.getParam2());
    assertEquals("Assert 6", param3, c2.getParam3());

    Ctor c3 = new Ctor(param1, null, param3);

    assertEquals("Assert 7", param1, c3.getParam1());
    assertNull("Assert 8", c3.getParam2());
    assertEquals("Assert 9", param3, c3.getParam3());

    Ctor c4 = new Ctor(param1, param2, null);

    assertEquals("Assert 10", param1, c4.getParam1());
    assertEquals("Assert 11", param2, c4.getParam2());
    assertNull("Assert 12", c4.getParam3());
}
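
For comparison, a sketch of the same checks as 4 small test methods (still
using the hypothetical Ctor/param names), one per constructor scenario, so a
failing null case does not mask the others:

// Sketch only: the 4-method variant of the test above.
public void test_Ctor_allParams() {
    Ctor c = new Ctor(param1, param2, param3);
    assertEquals("param1", param1, c.getParam1());
    assertEquals("param2", param2, c.getParam2());
    assertEquals("param3", param3, c.getParam3());
}

public void test_Ctor_nullParam1() {
    Ctor c = new Ctor(null, param2, param3);
    assertNull("param1", c.getParam1());
    assertEquals("param2", param2, c.getParam2());
    assertEquals("param3", param3, c.getParam3());
}

public void test_Ctor_nullParam2() {
    Ctor c = new Ctor(param1, null, param3);
    assertEquals("param1", param1, c.getParam1());
    assertNull("param2", c.getParam2());
    assertEquals("param3", param3, c.getParam3());
}

public void test_Ctor_nullParam3() {
    Ctor c = new Ctor(param1, param2, null);
    assertEquals("param1", param1, c.getParam1());
    assertEquals("param2", param2, c.getParam2());
    assertNull("param3", c.getParam3());
}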




Thanks,
Stepan.


Thanks a lot.
  

Richard Liang wrote:


Dears,

As I cannot find similar pages about testing conventions, I just created
one with my rough ideas at
http://wiki.apache.org/harmony/Testing_Convention, so that we can
document our decision timely & clearly.

Geir Magnusson Jr wrote:
  

Leo Simons wrote:


Gentlemen!

On Mon, Mar 27, 2006 at 11:07:51AM +0200, mr A wrote:
  

On Monday 27 March 2006 10:14, mr B wrote:


On 3/27/06, mr C wrote:
[SNIP]
  

[SNIP]


[SNIP]
  

On 1/1/2006, mr D wrote:


[SNIP]
  

Hmmm... Lemme support [SNIP]
  

Now let me support [SNIP].


The ASF front page says

  (...) The Apache projects are characterized by a collaborative,
consensus
  based development process,  (...)

That's not just some boilerplate. Consensus is a useful thing.

"How should we organize our tests?" has now been the subject of
debate for
*months* around here, and every now and then much of the same
discussion is
rehashed.
  

And we're making progress.  IMO, it really helped my thinking to
distinguish formally between the implementation tests and the spec
tests, because that *completely* helped me come to terms with the
whole o.a.h.test.* issue.

I now clearly see where o.a.h.test.*.HashMapTest fits, and where
java.util.HashMapTest fits.

I don't think our issues were that obvious before, at least to me.
Now, I see clearly.



I think it would be more productive to look for things to agree on
(such as,
we don't know, but we can find out, or we have different ideas on

Re: [Testing Convention] Keep tests small and fast

2006-03-30 Thread will pugh
I'm not too familiar with the Harmony code yet, but since I've had a 
bunch of experience on large projects I thought I'd toss my $.02 in here.


1)  When dealing with a project as large and with as much surface area 
as a VM, your unit tests for the entire project will probably take 
several hours to run.  The trade off for heavy coverage is totally worth 
it, even if it takes a long time.  It does indeed mean you need to 
manage it.


2)  We tended to manage this by breaking up unit tests into Build 
Verification Tests (BVTs) and Developer Regression Tests (DRTs).
Developers would be required to run DRTs before checking in, and BVTs
would be run for every build (or with continuous integration, they would
be constantly running every few hours).


3)  In the largest projects I've been on, DRTs would be broken up further
to be on a component level.  When you changed a component that other 
components depended on, we tended to depend on the good sense of the 
developer to run the DRTs for the related components (and depended on 
the CI or daily build to catch the problems that slipped through that 
net.)  We set a rule that DRTs for a given component could never take 
longer than 10 minutes to run.
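
To make points 2) and 3) concrete, here is a rough sketch - the class names and
the grouping are purely hypothetical, not an existing Harmony layout - of how
such suites could be expressed with plain JUnit 3 TestSuites:

// Sketch only: a hypothetical per-component DRT suite (kept under 10 minutes)...
public class LuniDrtSuite {
    public static junit.framework.Test suite() {
        junit.framework.TestSuite suite = new junit.framework.TestSuite("luni DRTs");
        suite.addTestSuite(HashMapTest.class);   // hypothetical component test classes
        suite.addTestSuite(StringTest.class);
        return suite;
    }
}

// ...and a BVT suite that aggregates everything, run by the build or CI.
public class BvtSuite {
    public static junit.framework.Test suite() {
        junit.framework.TestSuite suite =
                new junit.framework.TestSuite("Build Verification Tests");
        suite.addTest(LuniDrtSuite.suite());
        suite.addTest(SecurityDrtSuite.suite()); // hypothetical second component suite
        return suite;
    }
}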


Again, I'm sorry if this is irrelevant (since I'm not familiar enough 
with the Harmony code), but this process was reasonably effective for 
us.  The real pain ends up being how often changes in core code broke 
downstream components, but failing tests are only a symptom (and early 
warning system) for this.


The problem is that for core components, it was often important for the 
developers to run a rather long suite of tests before checking 
in, simply because there were so many components using their pieces.  We 
just bit that bullet.


   --Will


Richard Liang wrote:


Dears,

I notice that we put all the test code into one big test method (for 
example, 
org.apache.harmony.tests.java.util.jar.test_putLjava_lang_ObjectLjava_lang_Object). 
This way we will lose some benefits of JUnit and even of unit testing:

1. Test code cannot share configuration code through setUp and tearDown
2. We have to add redundant code, such as "Assert 1:", "Assert 2:", etc.,
to make the test results more comprehensive

3. It makes the test code more complex

Shall we just use small test cases?

You may want to read the description at: 
http://www.javaworld.com/javaworld/jw-12-2000/jw-1221-junit_p.html


*Keep tests small and fast*
Executing every test for the entire system shouldn't take hours. 
Indeed, developers will more consistently run tests that execute 
quickly. Without regularly running the full set of tests, it will be 
difficult to validate the entire system when changes are made. Errors 
will start to creep back in, and the benefits of unit testing will be 
lost. This means stress tests and load tests for single classes or 
small frameworks of classes shouldn't be run as part of the unit test 
suite; they should be executed separately.


Thanks a lot.

Richard Liang wrote:


Dears,

As I cannot find similar pages about testing conventions, I just
created one with my rough ideas at
http://wiki.apache.org/harmony/Testing_Convention, so that we can
document our decision timely & clearly.


Geir Magnusson Jr wrote:




Leo Simons wrote:


Gentlemen!

On Mon, Mar 27, 2006 at 11:07:51AM +0200, mr A wrote:


On Monday 27 March 2006 10:14, mr B wrote:


On 3/27/06, mr C wrote:
[SNIP]


[SNIP]


[SNIP]


On 1/1/2006, mr D wrote:


[SNIP]



Hmmm... Lemme support [SNIP]


Now let me support [SNIP].



The ASF front page says

  (...) The Apache projects are characterized by a collaborative, 
consensus

  based development process,  (...)

That's not just some boilerplate. Consensus is a useful thing.

"How should we organize our tests?" has now been the subject of 
debate for
*months* around here, and every now and then much of the same 
discussion is

rehashed.



And we're making progress.  IMO, it really helped my thinking to 
distinguish formally between the implementation tests and the spec 
tests, because that *completely* helped me come to terms with the 
whole o.a.h.test.* issue.


I now clearly see where o.a.h.test.*.HashMapTest fits, and where 
java.util.HashMapTest fits.


I don't think our issues were that obvious before, at least to me.  
Now, I see clearly.




I think it would be more productive to look for things to agree on 
(such as, "we don't know, but we can find out", or "we have different ideas
on that, but there's room for both", or "this way of doing things is not the
best one but the stuff is still useful so let's thank the guy for his work
anyway")
than to keep delving deeper and deeper into these kinds of 
disagreements.


Of course, the ASF front page doesn't say that Apache projects are
characterized by a *productive* development process. It's just my
feeling that for a system as big as Harmony we need to be *very*
productive.



You don't think we're making progress through these discussions?



Think 

Re: [Testing Convention] Keep tests small and fast

2006-03-30 Thread Stepan Mishura
On 3/30/06, Richard Liang wrote:

 Stepan Mishura wrote:
  On 3/30/06, Richard Liang  wrote:
 [SNIP]
  IMHO, this relates to stress tests and load tests only. This means that we
  shouldn't put such kinds of tests in a 'regular test suite'. The 'regular
  test suite' is used to verify regressions only. Returning to a test's
  size, I think it is up to the developer - we can only recommend not to test
  all functionality in one test case and to split independent parts into a
  number of test cases. But IMHO we cannot fully avoid creating 'redundant
  code', such as "Assert 1:", "Assert 2:", etc. For example, if there is a
  constructor with several parameters and get-methods that return the provided
  parameters, then I wouldn't create 3 tests instead of the following one:
 
  public void test_Ctor() {
      Ctor c = new Ctor(param1, param2, param3);

      assertEquals("Assert 1", param1, c.getParam1());
      assertEquals("Assert 2", param2, c.getParam2());
      assertEquals("Assert 3", param3, c.getParam3());
  }
 
 
 Hello Stepan,

 Sometimes we do have to use several asserts to check the expected
 situation. That's true. However, what I mean is: taking your
 test_Ctor as an example, if we want to design many test cases to
 verify the behavior of the constructor, the number of asserts may
 increase dramatically.

 Do you need just one test method or 4 methods for the following test?

 public void test_Ctor() {
     Ctor c = new Ctor(param1, param2, param3);

     assertEquals("Assert 1", param1, c.getParam1());
     assertEquals("Assert 2", param2, c.getParam2());
     assertEquals("Assert 3", param3, c.getParam3());

     Ctor c2 = new Ctor(null, param2, param3);

     assertNull("Assert 4", c2.getParam1());
     assertEquals("Assert 5", param2, c2.getParam2());
     assertEquals("Assert 6", param3, c2.getParam3());

     Ctor c3 = new Ctor(param1, null, param3);

     assertEquals("Assert 7", param1, c3.getParam1());
     assertNull("Assert 8", c3.getParam2());
     assertEquals("Assert 9", param3, c3.getParam3());

     Ctor c4 = new Ctor(param1, param2, null);

     assertEquals("Assert 10", param1, c4.getParam1());
     assertEquals("Assert 11", param2, c4.getParam2());
     assertNull("Assert 12", c4.getParam3());
 }


 Hi Richard,

I agree with you that your example demonstrates not very efficient testing
- there are a lot of duplicate assertions; for example, "Assert 2" checks the
same thing as "Assert 5". I'd modify your example in the following way:

 public void test_Ctor() {
     Ctor c = new Ctor(param1, param2, param3);

     assertEquals("Assert 1", param1, c.getParam1());
     assertEquals("Assert 2", param2, c.getParam2());
     assertEquals("Assert 3", param3, c.getParam3());

     assertNull("Assert 4", new Ctor(null, param2, param3).getParam1());
     assertNull("Assert 5", new Ctor(param1, null, param3).getParam2());
     assertNull("Assert 6", new Ctor(param1, param2, null).getParam3());
 }

IMO, it does testing equivalent to your example, but it is shorter.
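
A further variation - again just a sketch with the hypothetical Ctor/param
names used in this thread - pulls the repeated getter checks into a private
helper, which removes the remaining duplication while keeping every assertion
labelled:

// Sketch only: same coverage as above, with a small helper.
public void test_Ctor() {
    assertCtorParams(new Ctor(param1, param2, param3), param1, param2, param3);
    assertCtorParams(new Ctor(null, param2, param3), null, param2, param3);
    assertCtorParams(new Ctor(param1, null, param3), param1, null, param3);
    assertCtorParams(new Ctor(param1, param2, null), param1, param2, null);
}

// Checks that each getter returns exactly what was passed to the constructor.
private void assertCtorParams(Ctor c, Object p1, Object p2, Object p3) {
    assertEquals("param1", p1, c.getParam1());
    assertEquals("param2", p2, c.getParam2());
    assertEquals("param3", p3, c.getParam3());
}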

Thanks,


 Thanks,
  Stepan.
 
 
  Thanks a lot.
 
  Richard Liang wrote:
 
  Dears,
 
  As I cannot find similar pages about testing conventions, I just created
  one with my rough ideas at
  http://wiki.apache.org/harmony/Testing_Convention, so that we can
  document our decision timely & clearly.
 
  Geir Magnusson Jr wrote:
 
  Leo Simons wrote:
 
  Gentlemen!
 
  On Mon, Mar 27, 2006 at 11:07:51AM +0200, mr A wrote:
 
  On Monday 27 March 2006 10:14, mr B wrote:
 
  On 3/27/06, mr C wrote:
  [SNIP]
 
  [SNIP]
 
  [SNIP]
 
  On 1/1/2006, mr D wrote:
 
  [SNIP]
 
  Hmmm... Lemme support [SNIP]
 
  Now let me support [SNIP].
 
  The ASF front page says
 
(...) The Apache projects are characterized by a collaborative,
  consensus
based development process,  (...)
 
  That's not just some boilerplate. Consensus is a useful thing.
 
  "How should we organize our tests?" has now been the subject of
  debate for
  *months* around here, and every now and then much of the same
  discussion is
  rehashed.
 
  And we're making progress.  IMO, it really helped my thinking to
  distinguish formally between the implementation tests and the spec
  tests, because that *completely* helped me come to terms with the
  whole o.a.h.test.* issue.
 
  I now clearly see where o.a.h.test.*.HashMapTest fits, and where
  java.util.HashMapTest fits.
 
  I don't think our issues were that obvious before, at least to me.
  Now, I see clearly.
 
 
  I think it would be more productive to look for things to agree on
  (such as, "we don't know, but we can find out", or "we have different ideas on
  that, but there's room for both", or "this way of doing things is not the
  best one but the stuff is still useful so let's thank the guy for his work
  anyway")
  than to keep delving deeper and deeper into these kinds of
  disagreements.
 
  Of course, the ASF front page doesn't say that Apache projects are
  characterized by a *productive* development process. It's just my
  feeling that
  for a system as big as 

Re: [Testing Convention] Keep tests small and fast

2006-03-30 Thread Richard Liang

will pugh wrote:
I'm not too familiar with the Harmony code yet, but since I've had a 
bunch of experience on large projects I thought I'd toss my $.02 in here.


1)  When dealing with a project as large and with as much surface area 
as a VM, your unit tests for the entire project will probably take 
several hours to run.  The trade off for heavy coverage is totally 
worth it, even if it takes a long time.  It does indeed mean you need 
to manage it.


2)  We tended to manage this by breaking up unit tests into Build 
Verification Tests (BVTs) and Developer Regression Tests (DRTs).
Developers would be required to run DRTs before checking in, and BVTs
would be run for every build (or with continuous integration, they would 
be constantly running every few hours).


3)  In the largest projects I've been on, DRTs would be broken up 
further to be on a component level.  When you changed a component that 
other components depended on, we tended to depend on the good sense of 
the developer to run the DRTs for the related components (and depended 
on the CI or daily build to catch the problems that slipped through 
that net.)  We set a rule that DRTs for a given component could never 
take longer than 10 minutes to run.


Again, I'm sorry if this is irrelevant (since I'm not familiar enough 
with the Harmony code), but this process was reasonably effective for 
us.  The real pain ends up being how often changes in core code broke 
downstream components, but failing tests are only a symptom (and early 
warning system) for this.


The problem is that for core components, it was often important for 
the developers to run a rather long suite of tests before 
checking in, simply because there were so many components using their 
pieces.  We just bit that bullet.


   --Will


Richard Liang wrote:


Dears,

I notice that we put all the test code into one big test method (for 
example, 
org.apache.harmony.tests.java.util.jar.test_putLjava_lang_ObjectLjava_lang_Object). 
This way we will lose some benefits of JUnit and even of unit testing:

1. Test code cannot share configuration code through setUp and tearDown
2. We have to add redundant code, such as "Assert 1:", "Assert 2:", etc.,
to make the test results more comprehensive

3. It makes the test code more complex

Shall we just use small test cases?

You may want to read the description at: 
http://www.javaworld.com/javaworld/jw-12-2000/jw-1221-junit_p.html


*Keep tests small and fast*
Executing every test for the entire system shouldn't take hours. 
Indeed, developers will more consistently run tests that execute 
quickly. Without regularly running the full set of tests, it will be 
difficult to validate the entire system when changes are made. Errors 
will start to creep back in, and the benefits of unit testing will be 
lost. This means stress tests and load tests for single classes or 
small frameworks of classes shouldn't be run as part of the unit test 
suite; they should be executed separately.


Thanks a lot.

Richard Liang wrote:


Dears,

As I cannot find similar pages about testing conventions, I just
created one with my rough ideas at
http://wiki.apache.org/harmony/Testing_Convention, so that we can
document our decision timely & clearly.


Geir Magnusson Jr wrote:




Leo Simons wrote:


Gentlemen!

On Mon, Mar 27, 2006 at 11:07:51AM +0200, mr A wrote:


On Monday 27 March 2006 10:14, mr B wrote:


On 3/27/06, mr C wrote:
[SNIP]


[SNIP]


[SNIP]


On 1/1/2006, mr D wrote:


[SNIP]



Hmmm... Lemme support [SNIP]


Now let me support [SNIP].



The ASF front page says

  (...) The Apache projects are characterized by a collaborative, 
consensus

  based development process,  (...)

That's not just some boilerplate. Consensus is a useful thing.

"How should we organize our tests?" has now been the subject of 
debate for
*months* around here, and every now and then much of the same 
discussion is

rehashed.



And we're making progress.  IMO, it really helped my thinking to 
distinguish formally between the implementation tests and the spec 
tests, because that *completely* helped me come to terms with the 
whole o.a.h.test.* issue.


I now clearly see where o.a.h.test.*.HashMapTest fits, and where 
java.util.HashMapTest fits.


I don't think our issues were that obvious before, at least to me.  
Now, I see clearly.




I think it would be more productive to look for things to agree on 
(such as, "we don't know, but we can find out", or "we have different ideas
on that, but there's room for both", or "this way of doing things is not
the best one but the stuff is still useful so let's thank the guy for his work
anyway")
than to keep delving deeper and deeper into these kinds of 
disagreements.


Of course, the ASF front page doesn't say that Apache projects are
characterized by a *productive* development process. It's just my
feeling that for a system as big as Harmony we need to be *very*
productive.



You don't think we're making progress through these 

Re: [Testing Convention] Keep tests small and fast

2006-03-29 Thread Stepan Mishura
On 3/30/06, Richard Liang  wrote:

 Dears,

 I notice that we put all the test code into one big test method (for
 example,

 org.apache.harmony.tests.java.util.jar.test_putLjava_lang_ObjectLjava_lang_Object
 ).
 This way we will lose some benefits of JUnit and even of unit testing:
 1. Test code cannot share configuration code through setUp and tearDown
 2. We have to add redundant code, such as "Assert 1:", "Assert 2:", etc.,
 to make the test results more comprehensive
 3. It makes the test code more complex

 Shall we just use small test cases?

 You may want to read the description at:
 http://www.javaworld.com/javaworld/jw-12-2000/jw-1221-junit_p.html

 *Keep tests small and fast*
 Executing every test for the entire system shouldn't take hours. Indeed,
 developers will more consistently run tests that execute quickly.
 Without regularly running the full set of tests, it will be difficult to
 validate the entire system when changes are made. Errors will start to
 creep back in, and the benefits of unit testing will be lost. This means
 stress tests and load tests for single classes or small frameworks of
 classes shouldn't be run as part of the unit test suite; they should be
 executed separately.


 Hi Richard,

IMHO, this relates to stress tests and load tests only. This means that we
shouldn't put such kinds of tests in a 'regular test suite'. The 'regular
test suite' is used to verify regressions only. Returning to a test's
size, I think it is up to the developer - we can only recommend not to test all
functionality in one test case and to split independent parts into a number of
test cases. But IMHO we cannot fully avoid creating 'redundant code', such
as "Assert 1:", "Assert 2:", etc. For example, if there is a constructor
with several parameters and get-methods that return the provided parameters, then I
wouldn't create 3 tests instead of the following one:

public void test_Ctor() {
    Ctor c = new Ctor(param1, param2, param3);

    assertEquals("Assert 1", param1, c.getParam1());
    assertEquals("Assert 2", param2, c.getParam2());
    assertEquals("Assert 3", param3, c.getParam3());
}

Thanks,
Stepan.


Thanks a lot.

 Richard Liang wrote:
  Dears,
 
  As I cannot find similar pages about testing conventions, I just created
  one with my rough ideas at
  http://wiki.apache.org/harmony/Testing_Convention, so that we can
  document our decision timely & clearly.
 
  Geir Magnusson Jr wrote:
 
 
  Leo Simons wrote:
  Gentlemen!
 
  On Mon, Mar 27, 2006 at 11:07:51AM +0200, mr A wrote:
  On Monday 27 March 2006 10:14, mr B wrote:
  On 3/27/06, mr C wrote:
  [SNIP]
  [SNIP]
  [SNIP]
  On 1/1/2006, mr D wrote:
  [SNIP]
  Hmmm... Lemme support [SNIP]
  Now let me support [SNIP].
 
  The ASF front page says
 
(...) The Apache projects are characterized by a collaborative,
  consensus
based development process,  (...)
 
  That's not just some boilerplate. Consensus is a useful thing.
 
  "How should we organize our tests?" has now been the subject of
  debate for
  *months* around here, and every now and then much of the same
  discussion is
  rehashed.
 
  And we're making progress.  IMO, it really helped my thinking to
  distinguish formally between the implementation tests and the spec
  tests, because that *completely* helped me come to terms with the
  whole o.a.h.test.* issue.
 
  I now clearly see where o.a.h.test.*.HashMapTest fits, and where
  java.util.HashMapTest fits.
 
  I don't think our issues were that obvious before, at least to me.
  Now, I see clearly.
 
 
  I think it would be more productive to look for things to agree on
  (such as, "we don't know, but we can find out", or "we have different ideas on
  that, but there's room for both", or "this way of doing things is not the
  best one but the stuff is still useful so let's thank the guy for his work
  anyway")
  than to keep delving deeper and deeper into these kinds of
  disagreements.
 
  Of course, the ASF front page doesn't say that Apache projects are
  characterized by a *productive* development process. It's just my
  feeling that for a system as big as Harmony we need to be *very* productive.
 
  You don't think we're making progress through these discussions?
 
 
  Think about it. Is your time better spent convincing lots of other
  people to do
  their testing differently, or is it better spent writing better tests?
 
  The issue isn't about convincing someone to do it differently, but
  understanding the full scope of problems, that we need to embrace
  both approaches, because they are apples and oranges, and we need
  both apples and oranges.  They aren't exclusionary.
 
  geir
 
 
 


 --
 Richard Liang
 China Software Development Lab, IBM





--
Thanks,
Stepan Mishura
Intel Middleware Products Division