RE: Unit testing

2007-04-10 Thread Patrick Linskey
> I think we need to take responsibility for this individually. Test
> coverage reports usually show woeful under-testing, so it's really up
> to the developer to know whether a test case is needed or not.

Currently, this will definitely be the case in OpenJPA, since the bulk
of the Kodo test cases were JDO-based, and we at BEA have not yet ported
them over to OpenJPA.

-Patrick

-- 
Patrick Linskey
BEA Systems, Inc. 

RE: Unit testing

2007-04-10 Thread Phill Moran
+1
I like this idea: you can't close the item without a test case.

Phill

Re: Unit testing

2007-04-10 Thread Craig L Russell
I think we need to take responsibility for this individually. Test
coverage reports usually show woeful under-testing, so it's really up
to the developer to know whether a test case is needed or not.

It might be necessary at times to commit an urgent fix without having
a full test suite to prove that the fix works, but best practice is
to provide a test case along with the fix. We could make a guideline
that until a test case is checked in, the JIRA issue should remain
open. And when reviewing others' work, the missing test case can be
used as a reason to "un-resolve" an issue.

But I agree with Kevin that integrating new features without test
cases is evil.

In my experience, reminders long after the issue is resolved don't
really work that well.

Craig

RE: Unit testing

2007-04-10 Thread Phill Moran
Via test coverage reports, would we not know which parts do not have
corresponding tests, and through commit history would we not know who added the
fix/feature? So before the next release we can gently remind the committers to
submit the missing test.
This is likely to be a problem only with the occasional committer; a regular
committer, monitoring forward progress, would see that a test is missing and
could add it - sort of a built-in reminder.

Phill

Re: Unit testing

2007-04-10 Thread Michael Dick

+1. We can do a better job of adding unit tests.

Is there a way to link automated checking with JIRA? For bugs, features, and
improvements we'll want new unit tests. For other changes that don't require
a JIRA report (build files, null checks, etc.) we won't require new tests.
I'm not sure whether this approach would work if the JIRA report was
resolved/closed without code changes (config issue, classpath problem, etc.).

Even if we can't automate the checking with JIRA and svn, I think this is a
good rule of thumb. Rather than policing every commit for unit tests we can
just check JIRA reports. It also allows some leeway if you were to commit
unit tests separately from the code changes. If you're reviewing the changes
based primarily on the commit email, it might be easier to have them
separate.
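
As a rough illustration of that cross-check (a sketch only, not an existing
tool; the OPENJPA-NNN key format and the test-path conventions below are
assumptions), a check over commit metadata might look like this:

    import java.util.Arrays;
    import java.util.List;
    import java.util.regex.Pattern;

    // Sketch of the suggested JIRA/svn cross-check: given a commit's log
    // message and its changed paths (e.g. parsed from "svn log -v" output),
    // flag JIRA-referencing commits that touch no test files.
    public class CommitTestCheck {
        private static final Pattern JIRA_KEY = Pattern.compile("OPENJPA-\\d+");

        static boolean needsTestReminder(String message, List<String> changedPaths) {
            // Changes without a JIRA key (build files, null checks, etc.)
            // are exempt, per the rule of thumb above.
            if (!JIRA_KEY.matcher(message).find()) {
                return false;
            }
            for (String path : changedPaths) {
                // Any changed path that looks like a test satisfies the check.
                if (path.contains("/src/test/") || path.matches(".*Test[^/]*\\.java")) {
                    return false;
                }
            }
            return true; // JIRA-linked change with no accompanying test
        }

        public static void main(String[] args) {
            System.out.println(needsTestReminder(
                    "OPENJPA-208: restore non-unique result check",
                    Arrays.asList("openjpa-kernel/src/main/java/QueryImpl.java"))); // true
        }
    }

This would only flag candidates for a reminder; a commit that delivers its
tests in a separate follow-up commit would still need a human glance.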

Re: Unit testing

2007-04-10 Thread Kevin Sutter

Patrick,
I agree with your concern.  Early on, I was at fault a couple of times with
integrating changes without a corresponding unit testcase.  But, with
"gentle" reminders by the OpenJPA community, I've started to remember.  So,
as a start, I would challenge everyone to start monitoring the commits and
request appropriate unit tests if none were provided.  Either by replying to
openjpa-dev or directly to the JIRA issue.  Maybe with enough "public
reminders", we'll start to get the point across.

I do know that in some cases, creating a unit test for a specific bug is
easier said than done.  Since we are pulling OpenJPA into a larger product
(WebSphere in my case), some of the test scenarios that uncover a bug are
quite complicated and involved.  Creating an individual unit testcase to
reproduce the problem is more effort, and when time is critical, we sometimes
go for the quick fix, ensure we don't regress, and integrate the change.
I'm not trying to justify the lack of unit tests for these situations; I'm
just explaining some background on why it happens once in a while.

As far as new "features" being integrated into OpenJPA, there should be no
excuse for lack of unit tests.  We need to provide repeatable testcases for
these new features.  If users are providing patches for these new features,
then it's easy to stop the commit until testcases are provided.  If new
features are committed without corresponding testcases, should we back out
the changes?  I know that's kind of extreme, but it would make a point.  Of
course, then we get into the feature vs bug fix discussion, but if everyone
works at providing unit tests, then it's a moot point.

Kevin

RE: Unit testing

2007-04-09 Thread Patrick Linskey
> It should be part of the commit acceptance process.

I agree that it should be part of the process, but I hope that with
sufficient discipline and attention, we can avoid having to enforce this
via automated rules. I definitely make changes that don't merit unit
tests, such as changes to localized strings, null checks, build file
changes, etc.

-Patrick

-- 
Patrick Linskey
BEA Systems, Inc. 

RE: Unit testing

2007-04-09 Thread Phill Moran
+1
It should be part of the commit acceptance process. Otherwise OpenJPA will lose
out to other ORM tools that will be perceived as less buggy.
What is used for coverage monitoring - Clover? We should also use Checkstyle to
give some insight into the code as well.

Phill

Unit testing

2007-04-09 Thread Patrick Linskey
Hi,

I'm a bit concerned about the lack of unit tests being put into OpenJPA
as new features are added. I understand that often, creating unit tests
are anticlimactic compared to implementing the feature itself, but at
least basic happy-path testing of new features is pretty essential if we
want to avoid these types of problems. Code inspection is good but,
Abe's good eyes aside, not as reliable as having a unit test that will
start failing when a feature is broken.

I try to write my test cases first, in a somewhat-modified TDD approach.
I do this because a) I need some sort of harness to demonstrate the
failure in order to isolate and resolve it, and b) I know that
personally, I'm much more likely to write a test while the problem is
still interesting than after it's resolved. In other words, I never
(well, rarely) have a command-line harness that I throw together to
demonstrate a problem. I try to always use a test case instead. This
strategy means that the only test-related overhead is the effort
involved to figure out how to programmatically test for failure.
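
To make that workflow concrete, here is a hypothetical sketch in JUnit 3
style; FindByNameQuery is an invented stand-in, not a real class. The test
doubles as the reproduction harness, fails until the fix goes in, and then
stays in the suite as a regression guard:

    import junit.framework.TestCase;

    // Test-first sketch: write the failing test as the bug reproduction,
    // then make it pass. FindByNameQuery is invented for illustration.
    public class TestFindByName extends TestCase {

        public void testFindByNameIsCaseInsensitive() {
            FindByNameQuery query = new FindByNameQuery();
            // Before the fix this returned null for mixed-case input,
            // which is exactly the failure being isolated.
            assertNotNull(query.find("Patrick"));
        }

        // Minimal fake implementation so the sketch compiles standalone.
        static class FindByNameQuery {
            String find(String name) {
                return name.equalsIgnoreCase("patrick") ? "found" : null;
            }
        }
    }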

Also, I understand that some things are hard to test. Testing SQL or
JDBC interactions is often perceived to be one of these things. In the
Kodo codebase, we ended up creating various means to get around this;
the SQLListenerTestCase is one such example. It turns out that by
extending SQLListenerTestCase, it becomes trivial to check how much SQL
was written and what the SQL looks like.
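
The exact API isn't reproduced here, but the idea behind it can be sketched;
the base-class and method names below are illustrative assumptions, not the
real Kodo/OpenJPA harness:

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.TestCase;

    // Sketch of the SQL-capturing idea behind SQLListenerTestCase: a
    // JDBC-level listener records every statement issued during the test
    // so assertions can be made about how much SQL ran and what it looked
    // like. Class, field, and method names here are illustrative only.
    public abstract class RecordingSQLTestCase extends TestCase {
        protected final List<String> recordedSQL = new ArrayList<String>();

        // In the real harness this would be invoked by a JDBC listener;
        // here subclasses (or a fake driver) call it directly.
        protected void onStatement(String sql) {
            recordedSQL.add(sql);
        }

        protected void assertSQLCount(int expected) {
            assertEquals("unexpected number of SQL statements",
                    expected, recordedSQL.size());
        }

        protected void assertLastSQLMatches(String regex) {
            String last = recordedSQL.get(recordedSQL.size() - 1);
            assertTrue("SQL did not match " + regex + ": " + last,
                    last.matches(regex));
        }
    }

A concrete test would run the persistence operation under scrutiny and then
assert, say, that exactly one SELECT was issued.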

Does anyone else have any thoughts about how to ensure that we develop
test cases as needed?

-Patrick

-- 
Patrick Linskey
BEA Systems, Inc. 

> -Original Message-
> From: Abe White 
> Sent: Monday, April 09, 2007 8:12 AM
> To: open-jpa-dev@incubator.apache.org
> Subject: Re: [jira] Resolved: (OPENJPA-208) NoResultException 
> and NonUniqueResultException are not thrown when expected
> 
> > Went ahead and restored the previous behavior where the QueryImpl  
> > itself checks for non-uniqueness and throws the expected exception.
> 
> That breaks the single result optimization that was added for  
> OPENJPA-168 when getSingleResult() is called.  There was a reason we  
> moved the validation to the kernel.  The previous code was correct.   
> You need to use the "hard" way of creating new exception types.
> 
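
For context on the quoted exchange: the JPA contract at issue is that
Query.getSingleResult() throws NoResultException for zero results and
NonUniqueResultException for more than one. A minimal sketch of a test
pinning that down (the Person entity and the EntityManager setup are
assumed, not shown):

    import javax.persistence.EntityManager;
    import javax.persistence.NoResultException;
    import javax.persistence.NonUniqueResultException;
    import junit.framework.TestCase;

    // Sketch of a test for the behavior discussed in OPENJPA-208: per the
    // JPA spec, getSingleResult() must throw NoResultException for zero
    // rows and NonUniqueResultException for more than one. The "Person"
    // entity and harness wiring are assumptions for illustration.
    public abstract class TestSingleResultExceptions extends TestCase {
        protected EntityManager em; // assumed to be initialized by the harness

        public void testNoResult() {
            try {
                em.createQuery("select p from Person p where p.name = 'nobody'")
                        .getSingleResult();
                fail("expected NoResultException");
            } catch (NoResultException expected) {
                // correct: zero rows must raise NoResultException
            }
        }

        public void testNonUniqueResult() {
            try {
                // assumes the fixture persisted at least two Person rows
                em.createQuery("select p from Person p").getSingleResult();
                fail("expected NonUniqueResultException");
            } catch (NonUniqueResultException expected) {
                // correct: multiple rows must raise NonUniqueResultException
            }
        }
    }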