On 11/12/2009, at 11:55 AM, Adam Heath wrote:

Scott Gray wrote:
On 11/12/2009, at 6:41 AM, Adam Heath wrote:

Scott Gray wrote:
Hi Adam,

Looking at the results my first impression is that the coverage is
under-reported. For example, the accounting component has quite a few
tests but no coverage is shown at all (except for the test package
itself). Possibly because a lot of the logic is in simple-methods, but
I'm 100% sure Java code is also run during the tests.

But still a great start and something that will be immensely useful if
we can up the accuracy a bit.

Well, it doesn't, really.  If you click thru to accounting.test,
you'll see that there aren't really that many tests.  And, upon
further investigation, the lines after the runSync calls aren't run,
due to some exception, most likely.  I'm not certain if this is due to
my changes, or if the tests themselves are broken.  I'm running a
plain test run now to check that.  Plus, there actually *are* line hits
in accounting.invoice.

The tests seem to be running fine on buildbot
(http://ci.apache.org/waterfall?show=ofbiz-trunk), I'm guessing it's the test run problem that's causing the under reporting. There may not be that many explicit accounting tests (even though it is a lot compared to other components) but a lot of tests also touch accounting indirectly. There is just no way that only 53 lines of java code are being executed in accounting during the full test run. I know for a fact that code is
executed from PaymentGatewayServices, FinAccountPaymentServices,
PaymentWorker, UtilAccounting and a few others during the tests.

I had some other changes in that tree that were causing tests to fail.
I've rerun it now, all current tests pass, and I've uploaded a new
report to http://www.brainfood.com/ofbiz-coverage

Cool thanks, looks more in line with what I was expecting.


Note that framework/base has almost 100% coverage.  But that's a bad
thing, because nothing is explicitly testing it; all that code just
happens to be utilized during the rest of the test run.

Of course explicitly testing framework/base would be much better, but why is the current 100% coverage a bad thing? I mean, implied testing is better than no testing, right? If I were to go in and incorrectly modify some of those base methods, there's a good chance some of the higher-level tests would fail.
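
To illustrate the distinction being argued here: a base-layer helper can show 100% line coverage purely because higher-level tests happen to call it, while its own contract (edge cases included) is never asserted anywhere. The sketch below uses a hypothetical utility method, not actual OFBiz code, to show what an explicit unit check adds over incidental coverage:

```java
// Hypothetical base-layer helper, in the spirit of framework/base
// utilities (illustrative only; not actual OFBiz code).
public class CoverageExample {

    // Indirect coverage: higher-level tests may call this and light up
    // every line, without ever asserting the formatted result is right.
    static String toCurrencyString(long cents) {
        return String.format("%d.%02d", cents / 100, cents % 100);
    }

    public static void main(String[] args) {
        // Explicit coverage: the contract is pinned down directly, so
        // an incorrect change to the helper fails here immediately,
        // not as a mysterious failure in some distant component test.
        if (!toCurrencyString(1234).equals("12.34")) throw new AssertionError();
        if (!toCurrencyString(5).equals("0.05")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The point both sides seem to agree on: indirect coverage does catch regressions (a broken base method fails the higher-level test), but it localizes nothing and may never exercise the helper's edge cases.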


Total coverage increased from 7% to 14%.

