Re: [Yade-dev] stability & compatibility between newer and older versions of yade

2019-01-07 Thread Anton Gladky
Hi all,

please do not forget that Yade already offers a rich facility for
semi-unit-tests (yade --test) and nice semi-integration-tests
(yade --check). Sure, one can create unit tests for pure C++ functions
using boost::unit_test, CppUnit, googletest etc., but I would propose
first extending the existing tests rather than spending energy on
connecting a new framework to Yade (+new dependencies, +new
cmake changes, +integration overhead).

And if the existing "yade --test" or "yade --check" really are not
enough, one needs to analyze the problem more deeply and only then
decide about connecting a new framework.

Regards

Anton

On Mon, 7 Jan 2019 at 18:26, Janek Kozicki wrote:
>
> Bruno Chareyre said: (by the date of Sun, 6 Jan 2019 17:08:12 +0100)
>
> > Thanks for raising this major issue. It would be great to populate unit
> > tests indeed.
> >
> > I recently
> > https://github.com/yade/trunk/commit/b5fbefc6463294f580296cb5727dbbfd733fa8a0
> > introduced a regression test for the utils module and I would like
> > to advertise it here. It is testing only one function from utils
> > currently.
>
> Very interesting, thank you!
>
> I have in mind writing testing code in C++, so that's a little different.
>
> > It needs volunteers to expand it (which can be done simply by reproducing
> > the logic of testUserCreatedInteraction() in more functions). If nobody is
> > going to add tests systematically - the ideal case - I would suggest at
> > least that:
> > *when a bug is fixed a unit test is added simultaneously*.
> > Fixing a bug usually gives a fresh vision of the behavior, which makes
> > writing a unit test easier.
> > Ultimately we could even collectively agree that a bug is not fixed if
> > there is no test proving it.
>
> I would go a bit further: assume that the current yade version is the
> reference version. Then I would add a test to every C++ function and class
> method, and store the results of the tests as reference results. When a bug
> is found, the reference result would change. Or worse: it might turn
> out that although the function is tested, the test did not catch the
> bug. And that would be a great incentive to add a test case for where this
> bug happened.
>
> This would also be a lot easier for others: the code testing
> a given function is already written; only another set of input
> parameters must be added to cover the case where the bug appeared.
>
> Yes, I know that it is a crazy amount of work.
>
> > About 1: would be great provided that it doesn't end up in simply removing
> > examples which do not work.
>
> Definitely not. The main goal is to really fix all examples. This is
> a perfect opportunity for me to see the latest additions to yade! :)
>
> > Classifying examples is also an important point and I would discourage the
> > previous approach of moving failing scripts to a special
> > "examples/not-working" folder since it breaks the classification in
> > subfolders. Better rename them (something like *.py.fail) while keeping
> > them in their original location.
> > It is less clear if/how you intend to implement the "all examples must
> > work" policy. It is difficult to automate testing of examples since they
> > are very heterogeneous. For instance some examples don't call O.run(), as
> > the user is supposed to click "play" instead.
> > If the error happens after playing, it will not be detected. I
> > suspect many other special situations like this one.
>
> Maybe I would be able to implement this idea in the following manner:
> run yade on each example with an extra flag, --test-example or such.
> This flag would mean that O.run() must be invoked anyway. If the user
> is supposed to click it, then yade does it instead. Some parts of examples
> would be untestable, like interaction with the GUI; in such cases a dummy
> function would be called instead (the point is that example.py need
> not be modified; the --test-example flag should take care of that). If
> an example produces some output file, then that is checked too. I am not
> sure how it will turn out. That's just a general idea.
>
>
> > About 2. I support the idea of investigating new techniques yet I don't
> > understand the suggestion very well. My impression is that all plugins are
> > already eligible for unit tests. For instance, testing a function from
> > utils in [1] did not need any change to the utils module itself. All it
> > needs is to effectively design and write the unit tests for every other
> > function of every other class/module. That's indeed hundreds - if not
> > thousands - of tests.
>
> Well, time for me to learn what boost::unit_test has to offer ;)
>
> The general idea within the framework is that it would be able to
> print a list of all publicly accessible C++ methods (not necessarily
> all of them being exported to python) which do not have an
> accompanying test.
>
> I don't know how to achieve this now. That's just an idea.
>
> Then using that list we would know the test coverage ;) If this list
> 

Re: [Yade-dev] stability & compatibility between newer and older versions of yade

2019-01-07 Thread Janek Kozicki
Bruno Chareyre said: (by the date of Sun, 6 Jan 2019 17:08:12 +0100)

> Thanks for raising this major issue. It would be great to populate unit
> tests indeed.
> 
> I recently
> https://github.com/yade/trunk/commit/b5fbefc6463294f580296cb5727dbbfd733fa8a0
> introduced a regression test for the utils module and I would like
> to advertise it here. It is testing only one function from utils
> currently.

Very interesting, thank you!

I have in mind writing testing code in C++, so that's a little different.

> It needs volunteers to expand it (which can be done simply by reproducing
> the logic of testUserCreatedInteraction() in more functions). If nobody is
> going to add tests systematically - the ideal case - I would suggest at
> least that:
> *when a bug is fixed a unit test is added simultaneously*.
> Fixing a bug usually gives a fresh vision of the behavior, which makes
> writing a unit test easier.
> Ultimately we could even collectively agree that a bug is not fixed if
> there is no test proving it.

I would go a bit further: assume that the current yade version is the
reference version. Then I would add a test to every C++ function and class
method, and store the results of the tests as reference results. When a bug
is found, the reference result would change. Or worse: it might turn
out that although the function is tested, the test did not catch the
bug. And that would be a great incentive to add a test case for where this
bug happened.

This would also be a lot easier for others: the code testing
a given function is already written; only another set of input
parameters must be added to cover the case where the bug appeared.

Yes, I know that it is a crazy amount of work.
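
Something along these lines, I imagine (a sketch only; sphereVolume()
is a made-up stand-in, not real yade code, and the reference numbers
are computed by hand):

// reference_test.cpp - compare a function against stored reference results
#include <cassert>
#include <cmath>

// hypothetical stand-in for any yade C++ function under test
static double sphereVolume(double r) { return 4.0/3.0*M_PI*r*r*r; }

int main()
{
    // reference results recorded from the current ("reference") version;
    // when a bug is found, one more {input, reference} pair is appended
    struct Case { double input, reference; };
    const Case cases[] = {
        {1.0, 4.1887902047863910},
        {0.5, 0.5235987755982988},
    };
    for (const Case& c : cases)
        assert(std::abs(sphereVolume(c.input) - c.reference) < 1e-12);
    return 0;
}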

> About 1: would be great provided that it doesn't end up in simply removing
> examples which do not work.

Definitely not. The main goal is to really fix all examples. This is
a perfect opportunity for me to see the latest additions to yade! :)

> Classifying examples is also an important point and I would discourage the
> previous approach of moving failing scripts to a special
> "examples/not-working" folder since it breaks the classification in
> subfolders. Better rename them (something like *.py.fail) while keeping
> them in their original location.
> It is less clear if/how you intend to implement the "all examples must
> work" policy. It is difficult to automate testing of examples since they
> are very heterogeneous. For instance some examples don't call O.run(), as
> the user is supposed to click "play" instead.
> If the error happens after playing, it will not be detected. I
> suspect many other special situations like this one.

Maybe I would be able to implement this idea in the following manner:
run yade on each example with an extra flag, --test-example or such.
This flag would mean that O.run() must be invoked anyway. If the user
is supposed to click it, then yade does it instead. Some parts of examples
would be untestable, like interaction with the GUI; in such cases a dummy
function would be called instead (the point is that example.py need
not be modified; the --test-example flag should take care of that). If
an example produces some output file, then that is checked too. I am not
sure how it will turn out. That's just a general idea.

 
> About 2. I support the idea of investigating new techniques yet I don't
> understand the suggestion very well. My impression is that all plugins are
> already eligible for unit tests. For instance, testing a function from
> utils in [1] did not need any change to the utils module itself. All it
> needs is to effectively design and write the unit tests for every other
> function of every other class/module. That's indeed hundreds - if not
> thousands - of tests.

Well, time for me to learn what boost::unit_test has to offer ;)
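
For future reference, a minimal Boost.Test module seems to look roughly
like this (the tested function is a made-up stand-in, not yade code):

// minimal Boost.Test module, header-only variant
#define BOOST_TEST_MODULE YadeSketch
#include <boost/test/included/unit_test.hpp>

// hypothetical stand-in for a yade C++ function under test
static double clampPositive(double x) { return x < 0 ? 0 : x; }

BOOST_AUTO_TEST_CASE(clampPositive_basic)
{
    BOOST_CHECK_EQUAL(clampPositive(-1.0), 0.0);
    BOOST_CHECK_CLOSE(clampPositive(2.5), 2.5, 1e-10); // tolerance is in percent
}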

The general idea within the framework is that it would be able to
print a list of all publicly accessible C++ methods (not necessarily
all of them being exported to python) which do not have an
accompanying test.

I don't know how to achieve this now. That's just an idea.

Then using that list we would know the test coverage ;) If this list
someday became empty, we could say with confidence that we have
100% test coverage. If someone wrote a new public function in
some class, then even without exporting it to Python it would be
caught, and a warning printed that it has no accompanying test.

Your _Tesselation::VertexHandle _Tesselation::move(…)
should be caught automatically.

I hope that it is possible. Maybe only a slight modification to
YADE_PLUGIN or a similar macro would be enough? I don't know yet.
Or maybe use some code for reading the library objects: it would go
through all functions inside the binary library file and try{}catch{}
attempt to test them. I know that it is possible to read library
symbols; I need to check how to do that.
In that case each instance of _Tesselation::move(…), for every TT
that ended up in the library file, would be caught.
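
For example, something like the sketch below might do it: pipe the
library through nm and collect the demangled function symbols. All of
this is guesswork about the eventual mechanism, and the library path
is made up:

// list_symbols.cpp - enumerate demangled symbols of a shared library
// via the binutils `nm` tool (POSIX popen); sketch only
#include <cstdio>
#include <iostream>
#include <string>
#include <vector>

static std::vector<std::string> listLibrarySymbols(const std::string& libPath)
{
    std::vector<std::string> symbols;
    std::string cmd = "nm -DC " + libPath; // -D: dynamic symbols, -C: demangle
    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe) return symbols;
    char line[4096];
    while (fgets(line, sizeof(line), pipe)) {
        std::string s(line);
        std::size_t pos = s.find(" T "); // "T" = function defined in text section
        if (pos != std::string::npos)
            symbols.push_back(s.substr(pos + 3, s.size() - pos - 4)); // drop '\n'
    }
    pclose(pipe);
    return symbols;
}

int main()
{
    // every template instantiation that ended up in the binary, e.g.
    // _Tesselation::move(...) for each TT, appears as a separate symbol
    for (const auto& sym : listLibrarySymbols("/usr/lib/libyade.so"))
        std::cout << sym << '\n';
}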

-- 
Janek Kozicki

Re: [Yade-dev] stability & compatibility between newer and older versions of yade

2019-01-06 Thread Bruno Chareyre
Thanks for raising this major issue. It would be great to populate unit
tests indeed.

I recently [1] introduced a regression test for the utils module and I
would like to advertise it here. It is testing only one function from utils
currently.
It needs volunteers to expand it (which can be done simply by reproducing
the logic of testUserCreatedInteraction() in more functions). If nobody is
going to add tests systematically - the ideal case - I would suggest at
least that:
*when a bug is fixed a unit test is added simultaneously*.
Fixing a bug usually gives a fresh vision of the behavior, which makes
writing a unit test easier.
Ultimately we could even collectively agree that a bug is not fixed if
there is no test proving it.

About 1: would be great provided that it doesn't end up in simply removing
examples which do not work.
Classifying examples is also an important point and I would discourage the
previous approach of moving failing scripts to a special
"examples/not-working" folder since it breaks the classification in
subfolders. Better rename them (something like *.py.fail) while keeping
them in their original location.
It is less clear if/how you intend to implement the "all examples must
work" policy. It is difficult to automate testing of examples since they
are very heterogeneous. For instance some examples don't call O.run(), as
the user is supposed to click "play" instead.
If the error happens after playing, it will not be detected. I
suspect many other special situations like this one.

About 2. I support the idea of investigating new techniques yet I don't
understand the suggestion very well. My impression is that all plugins are
already eligible for unit tests. For instance, testing a function from
utils in [1] did not need any change to the utils module itself. All it
needs is to effectively design and write the unit tests for every other
function of every other class/module. That's indeed hundreds - if not
thousands - of tests.

Cheers
Bruno

[1]
https://github.com/yade/trunk/commit/b5fbefc6463294f580296cb5727dbbfd733fa8a0



On Sat, 5 Jan 2019 at 15:24, Janek Kozicki wrote:

> I'd like to touch on the issue of compatibility between newer and older
> versions of yade. Some people prefer not to upgrade :)
>
> I would like to tackle this in two steps:
>
> 1. I would create a branch "fixing-examples" with the aim of going
> meticulously through all the examples and making sure that they all
> work. Once they work, merge them into develop, then master. And keep
> the policy that in master all examples must work.
>
> 2. introduce more detailed unit tests in the same way as plugins are declared
> via a macro.
> The goal would be that every declared plugin would simultaneously be
> declared as eligible for unit testing. That's hundreds of unit tests
> to be written. But I guess that's the only way to ensure stability.
> I want to investigate this approach, perhaps using boost::unit_test for that.
> Once each plugin has a unit test for each of its methods we would reach 100%
> test coverage :)
>
> thoughts?
> --
> Janek Kozicki
>


[Yade-dev] stability & compatibility between newer and older versions of yade

2019-01-05 Thread Janek Kozicki
I'd like to touch on the issue of compatibility between newer and older
versions of yade. Some people prefer not to upgrade :)

I would like to tackle this in two steps:

1. I would create a branch "fixing-examples" with the aim of going
meticulously through all the examples and making sure that they all
work. Once they work, merge them into develop, then master. And keep
the policy that in master all examples must work.

2. introduce more detailed unit tests in the same way as plugins are
declared via a macro.
The goal would be that every declared plugin would simultaneously be
declared as eligible for unit testing. That's hundreds of unit tests
to be written. But I guess that's the only way to ensure stability.
I want to investigate this approach, perhaps using boost::unit_test for
that; a rough sketch of the macro idea is below.
Once each plugin has a unit test for each of its methods we would reach
100% test coverage :)
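
(Sketch of that macro idea; the macros here are illustrative only, not
yade's real YADE_PLUGIN: every declared plugin name goes into a global
registry, every written test marks its target as covered, and the
difference between the two sets is the list of untested plugins.)

// coverage_registry.cpp - "plugin declared => eligible for unit testing"
#include <iostream>
#include <set>
#include <string>

static std::set<std::string>& declaredPlugins() { static std::set<std::string> s; return s; }
static std::set<std::string>& testedPlugins()   { static std::set<std::string> s; return s; }

// a YADE_PLUGIN-like macro would additionally record the class name:
#define DECLARE_PLUGIN(name) \
    static const bool name##_declared = (declaredPlugins().insert(#name), true);

// each unit test marks its target class as covered:
#define PLUGIN_TEST(name) \
    static const bool name##_tested = (testedPlugins().insert(#name), true);

DECLARE_PLUGIN(Sphere)      // hypothetical plugin classes
DECLARE_PLUGIN(Tesselation)
PLUGIN_TEST(Sphere)         // only Sphere has a test so far

int main()
{
    // prints: warning: plugin Tesselation has no accompanying test
    for (const auto& name : declaredPlugins())
        if (!testedPlugins().count(name))
            std::cout << "warning: plugin " << name << " has no accompanying test\n";
}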

thoughts?
-- 
Janek Kozicki

___
Mailing list: https://launchpad.net/~yade-dev
Post to : yade-dev@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yade-dev
More help   : https://help.launchpad.net/ListHelp