Patrick,

I am not sure I understood the following 100%:

In essence I need a way to track the accuracy of class implementations. I'm
mainly interested in numerical stuff so the answer can be slightly
different each time, making normal unit tests complicated.

Are you talking about profiling, e.g. measuring the performance of part of
the system, as opposed to running a series of tests (which are essentially
methods that either pass or fail)?

If yes, then you may want to think about different tools. Especially if you
not only want to know how long a big method took to run, but also which other
methods it called, where it spent most of its time, and whether separate
threads were involved. In that case you probably do not want to frame the
task as a test which can either pass or fail; you would be interested in the
picture as a whole.

If you only care about the big picture, for example making sure that MethodA,
which used to run in 10-12 seconds last release, does not take 28 seconds six
months later, then this can be framed as a test. You have to keep the same
settings: same hardware, OS, Debug vs. Release build, etc. Otherwise you
would be comparing apples and oranges. It is true that each run will take a
different amount of time. You could take a few dozen samples and figure out
what the distribution is. You can then warn if the running time is more than
two standard deviations from the mean, and fail if it is more than four. This
is not perfect, as once in a while the test may still fail, but that is the
best you can do when dealing with running times, which are inherently noisy.
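
To make that concrete, here is a rough sketch of what such a check could look
like with MbUnit and a Stopwatch. The baseline numbers, class names and the
commented-out call to the code under test are made up for illustration; the
real baseline would come from your own samples on the reference machine:

using System;
using System.Diagnostics;
using MbUnit.Framework;

[TestFixture]
public class MethodATimingTests
{
    // Illustrative baseline, gathered beforehand from a few dozen runs
    // on the reference machine (hardware, OS and build kept constant).
    private const double BaselineMeanMs = 11000.0;
    private const double BaselineStdDevMs = 800.0;

    [Test]
    public void MethodA_RunningTime_IsWithinFourSigma()
    {
        Stopwatch watch = Stopwatch.StartNew();
        // new NumericalLibrary().MethodA();   // the code being measured (hypothetical)
        watch.Stop();

        double elapsedMs = watch.Elapsed.TotalMilliseconds;
        double sigmas = Math.Abs(elapsedMs - BaselineMeanMs) / BaselineStdDevMs;

        // Warn at two standard deviations, fail at four.
        if (sigmas > 2.0)
            Console.WriteLine(string.Format(
                "Warning: {0:F0} ms is {1:F1} standard deviations from the baseline.",
                elapsedMs, sigmas));

        Assert.IsTrue(sigmas <= 4.0, string.Format(
            "MethodA took {0:F0} ms, more than four standard deviations from the baseline mean.",
            elapsedMs));
    }
}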

There are two ways to get at the running times:
1) You could create a self-contained executable module which runs the MbUnit
tests

http://www.mertner.com/confluence/display/MbUnit/TestExecutionUsingSelfExecutableTestAssemblies

and then access the object model of the results (there is an API for it as
well). I have had some problems with accessing the object model (perhaps it
is just me),
2) so instead I am dumping the results into an XML file and then parsing that
file. It is not terribly hard in C#; a short parsing sketch follows below.
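
For the parsing, something along these lines is enough. Note that the element
and attribute names below ("run", "name", "duration") are only a guess at
what the report contains; look at the actual XML file MbUnit produces and
adjust the XPath accordingly:

using System;
using System.Xml;

class ResultReportReader
{
    static void Main()
    {
        // Path to the dumped report; adjust to wherever you write it.
        XmlDocument doc = new XmlDocument();
        doc.Load("mbunit-results.xml");

        // Element/attribute names are assumptions, not the real schema.
        foreach (XmlNode run in doc.SelectNodes("//run"))
        {
            string name = run.Attributes["name"].Value;
            string duration = run.Attributes["duration"].Value;
            Console.WriteLine("{0}: {1} s", name, duration);
        }
    }
}

From there you can store the numbers per test per run in a file or database
and compare them against earlier releases.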


I really like MbUnit, but I would not use it for elaborate profiling. I used
to be part of a large group where separate teams of people were working on
testing the software and on measuring, comparing, and analyzing its
performance. In my mind those are separate activities. You could have slow
code that does the job right. Conversely, you do not have to wait until the
code is bug-free to start analyzing how much slower or faster it is. One of
the tools out there is the dotTrace profiler from JetBrains.

Hopefully this helps.

- Leonid.

On 8/9/07, P. van der Velde <[EMAIL PROTECTED]> wrote:
>
>
> Hi All
>
> This is sort of off-topic but I figured if anybody has ever tried this
> they must be on the MbUnit group :-) In short my problem is that I need
> a verification and validation tool (if those terms mean anything). In
> essence I need a way to track the accuracy of class implementations. I'm
> mainly interested in numerical stuff so the answer can be slightly
> different each time, making normal unit tests complicated. Giving a wide
> difference margin is not really acceptable, and in some cases it's not
> really possible, for instance there are cases where you want a number to
> be as close to zero as possible, but you don't really know how close it
> will be. I'd also like to compare the results of the calculations with
> existing results and be able to store the new results / differences.
> That way you could track accuracy over time.
>
> I'd also like to track the performance of the different implementations
> so that a developer can see if their changes help performance or not.
> Again it would be helpful to be able to store the timing results per
> test per run.
>
> Now I know I could code this all up inside an MbUnit test but there's a
> lot of repetitive code necessary for these tests (file / database
> reading/writing, comparisons, timers etc.). Also I'm not sure how to get
> all the statistics of a unit test from inside the unit test (timing, the
> name of the test, when it's run etc.)
>
> I have thought about writing my own verification framework, but before I
> give in to the Not-Invented-Here syndrome I figured I'd do some searching
> to see if anybody else has ever done something like this.
>
> So my question to you, has anybody ever seen something like this? Or
> done something like this? Any hints and tips?
>
> Thanks heaps
>
> Patrick
>
