Hi Leonid,
Thanks for your answer! See my replies below.
Leonid Lastovkin wrote:
> Patrick,
>
> I am not sure I understood the following 100%:
>
> In essence I need a way to track the accuracy of class
> implementations. I'm
> mainly interested in numerical stuff so the answer can be slightly
> different each time, making normal unit tests complicated.
>
> Are you talking about profiling, e.g. measuring the performance of
> part of the system as opposed to running a series of tests (which are
> essentially methods that either pass or fail?)
Eh, sort of both. I want to automate my verifications. Say you have
something like a matrix library. The operations (add, subtract,
multiply, etc.) are easy to test and can be unit tested. The matrix
solvers (they solve Ax = b), however, are a different story. So I want
to do something like:
[Verification]
[Source(Reader = typeof(MatrixEquationReader), File = "SomeSillyFile.mtx")]
[Compare(typeof(MatrixEquationComparer))]
[Target(Writer = typeof(MatrixEquationWriter),
    File = "SomeSillyResultFile.xml")]
public void MatrixSolve(Matrix a, Vector b, Vector result)
{
    MatrixSolver solver = new MatrixSolver();
    // Solve the equation Ax = b
    Vector internalX = solver.Solve(a, b);
    // Compare the expected result against the computed one
    Verification.Compare(result, internalX);
}
In this case we mark the 'test' as a verification and then provide a
source to read the data from, a comparison class type, and a target to
write the results to. When this test is executed, the testing
framework would create a reader, tell it to read the file (which
contains the matrix and the two vectors), and then pass these to the
test. The test would invoke the solver and pass the result and the
original back to the framework for comparison and storage. The
framework could then store all the required information (test id, time
taken, result, original, difference, pass/fail based on the
difference, etc.). Tests like these would probably not be run as
frequently as unit tests, but they would still be run regularly.
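To make that concrete, here is a minimal sketch of what the comparison class could look like. Everything in it is hypothetical: MatrixEquationComparer and ComparisonResult are not MbUnit types, and a real version would compare my Vector type rather than raw arrays.

```csharp
using System;

// Hypothetical result object: the framework could persist these
// fields (difference, pass/fail) alongside test id and timing.
public class ComparisonResult
{
    public double Difference { get; private set; }
    public bool Passed { get; private set; }

    public ComparisonResult(double difference, bool passed)
    {
        Difference = difference;
        Passed = passed;
    }
}

// Hypothetical comparer: pass/fail is decided by the largest
// component-wise difference between expected and computed vectors.
public class MatrixEquationComparer
{
    private readonly double tolerance;

    public MatrixEquationComparer(double tolerance)
    {
        this.tolerance = tolerance;
    }

    public ComparisonResult Compare(double[] expected, double[] actual)
    {
        double maxDiff = 0.0;
        for (int i = 0; i < expected.Length; i++)
        {
            maxDiff = Math.Max(maxDiff, Math.Abs(expected[i] - actual[i]));
        }
        return new ComparisonResult(maxDiff, maxDiff <= tolerance);
    }
}
```

The point is just that the framework only needs a difference and a verdict back; how the tolerance is chosen stays inside the comparer.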
It should also be possible to add new source files and have the tests
use them without intervention. That way, if new test cases come to
light, you can immediately add them to the verification tests without
having to change the source code.
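The file discovery part could be as simple as enumerating a data directory at run time. A sketch, assuming a "one .mtx file per test case" convention (the directory layout and class name are made up):

```csharp
using System.IO;

// Hypothetical discovery step: every .mtx file dropped into the data
// directory becomes a new verification case, so adding a test case is
// just adding a file -- no source-code change needed.
public static class VerificationSourceDiscovery
{
    public static string[] FindSourceFiles(string directory)
    {
        // Each file would contain one matrix and the two vectors.
        return Directory.GetFiles(directory, "*.mtx");
    }
}
```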
Note that this syntax hasn't been thought through at all and probably
won't work. I just hope it explains what it is that I want.
>
> <snip>
> If you only care about the big picture: that is making sure that
> MethodA which used to run in 10-12 seconds last release does not take
> 28 seconds six months later. This can be framed as a test. You have to
> keep the same settings: same hardware, OS, Debug vs. Release build,
> etc. Otherwise you would be comparing apples and oranges. It is true,
> each run would take a different amount of time to run. You could take
> a few dozen samples and figure out what the distribution is. You can
> then warn if the running time is outside of two stdev, and fail if it
> is outside of four sigma. This is not perfect, as once in a while the
> test may still fail, but that's the best you can do when dealing with
> running time which produces random results.
This sounds sort of like the other thing I'm trying to achieve:
automated checks on the performance of a method, class, etc.
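The two/four sigma rule you describe could be sketched like this (nothing here is MbUnit API; the baseline samples would have to come from earlier recorded runs under the same settings):

```csharp
using System;
using System.Linq;

// Classifies a new running time against a baseline distribution:
// warn outside two standard deviations, fail outside four.
public static class TimingCheck
{
    public enum Verdict { Pass, Warn, Fail }

    public static Verdict Check(double[] baselineSeconds, double newSeconds)
    {
        double mean = baselineSeconds.Average();
        double variance = baselineSeconds
            .Select(s => (s - mean) * (s - mean))
            .Average();
        double sigma = Math.Sqrt(variance);
        double deviation = Math.Abs(newSeconds - mean);

        if (deviation > 4 * sigma) return Verdict.Fail;
        if (deviation > 2 * sigma) return Verdict.Warn;
        return Verdict.Pass;
    }
}
```

As you say, this can still fail once in a while by pure chance, since running time is effectively a random variable.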
>
> There are two ways to access the running times:
> 1) You could create self-contained executable module which runs MbUnit
> tests
> http://www.mertner.com/confluence/display/MbUnit/TestExecutionUsingSelfExecutableTestAssemblies
> and then get access to the object model of the results (there is API
> for it as well).
> I've had some problems with accessing the object model (perhaps it is
> just me),
> 2) so I am dumping the results into an XML file and then parsing that
> file. It is not terribly hard in C#.
mmmm good ones. I'll have a look at these :-)
>
> I really like MbUnit, but I would not use it for elaborate profiling.
> I used to be part of a large group where separate groups of people
> were working on testing of the software and on measuring
> /comparing/analyzing performance. In my mind those are separate
> activities.
> You could have slow code that does the job right. Conversely, you do
> not have to wait until the code is bug-free to start analyzing how
> much slower/faster it is. One of the tools out there is dotTrace
> profiler from JetBrains.
I've got the dotTrace profiler, it's a really nice tool :-)
Thanks again for your suggestions, they're very helpful :-)
Regards
Patrick
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"MbUnit.User" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at
http://groups.google.com/group/MbUnitUser?hl=en
-~----------~----~----~----~------~----~------~--~---