On Fri, Dec 16, 2016 at 7:54 AM, Stefan Schmidt <ste...@osg.samsung.com> wrote:
> Hello.
>
> On 13/12/16 16:32, Gustavo Sverzut Barbieri wrote:
>> On Tue, Dec 13, 2016 at 11:54 AM, Stefan Schmidt <ste...@osg.samsung.com> 
>> wrote:
>>> Hello.
>>>
>>> On 13/12/16 14:09, Felipe Magno de Almeida wrote:
>>>> On Tue, Dec 13, 2016 at 8:28 AM, Stefan Schmidt <ste...@osg.samsung.com> 
>>>> wrote:
>>>>> Hello.
>>>>
>>>> [snip]
>>>>
>>>>> I fully agree (even have native IPv6 here) but we need to be defensive
>>>>> about what we expect to be available in our test suite.
>>>>
>>>> Why? It is testing. I agree we need to be defensive in the
>>>> implementation and interfaces, where EFL might run in stranger
>>>> environments,
>>>
>>> If the implementation handled this all gracefully the test would
>>> never have been a problem. :)
>>
>> no no... the code gracefully handles that, but the test is explicit
>> "resolve ::1 to ::1 and I expect it to work". The test itself should
>> be conditional and I'm willing to do that, HOWEVER our test bots
>> should have IPv6 so we check that code.
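A minimal sketch of such a conditional guard, where _ipv6_available()
is a made-up helper and not existing EFL API, would probe the kernel
before registering the "::1" cases:

#include <sys/socket.h>
#include <unistd.h>
#include <Eina.h>

/* made-up helper: EINA_TRUE when the kernel can create an IPv6 socket */
static Eina_Bool
_ipv6_available(void)
{
   int fd = socket(AF_INET6, SOCK_STREAM, 0);

   if (fd < 0) return EINA_FALSE;
   close(fd);
   return EINA_TRUE;
}

the suite would then call tcase_add_test() for the IPv6 cases only
when this returns EINA_TRUE.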
>>
>>
>>
>>>> but a test that fails on a machine that doesn't
>>>> handle IPv6 seems fine to me as long as it is rare enough.
>>>
>>> We assume IPv6 now, we assume a working internet connection, we assume
>>> udev for eeze testing, we assume a working dbus setup, in some cases we
>>> assume a file system which supports extended attributes, etc...
>>>
>>> If it is complicated to run make check we will have even fewer people
>>> running it. It should be the other way around. I guess I could look at
>>> anybody here who contributed a few patches this year and find someone who
>>> broke make check. If it is too complicated or fails for some reason
>>> people will just stop using it.
>>
>> our test suite is unusable the way it is...
>
> While I agree that the test suite needs improvements calling it unusable
> is over the top.

ok, so improved... particularly for developers to run them... and also
something that makes them simpler to write, plus guidance on how to do
it properly.

For instance, very few tests use the proper macros from check.h, like
ck_assert_int_eq(), which will highlight the expected and the actual
values, compared to fail_if(a != b). Some tests lack granularity, so
they bundle a batch of checks at once, missing the "unit test"
aspect... and while we can manually select the test suite to run
(calling it directly), and sometimes the component (ie:
ecore_con_suite -h to list them), it would be very helpful to be able
to select individual test functions, particularly during development.
And some helpers to debug failing tests would be welcome (I always
need to remember CK_FORK=no and things like that).
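Just to illustrate the difference (the failure text below is
approximate, and the test is a made-up example):

#include <check.h>

START_TEST(example)
{
   int count = 5; /* imagine this value came from the API under test */

   /* fail_if() only reports that the expression failed, no values: */
   fail_if(count != 3);

   /* ck_assert_int_eq() reports both sides on failure, roughly:
    * "Assertion 'count == 3' failed: count == 5, 3 == 3" */
   ck_assert_int_eq(count, 3);
}
END_TEST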

what happens in most cases is that the suite is so painful to deal
with that it's easier to write the "other required" bit, the example,
and test from there rather than from the test suite... which is what I
did, and some people follow the same pattern (or just commit a usage
to enlightenment).


>> really it lacks
>> granularity, it takes lots of time and most of the time it looks like
>> it failed since people forget to trap EINA_LOG to capture errors, then
>> you get tons of error messages (although the test reports success, the
>> messages confuse users).
>
> In the normal run you see a green PASS or red FAILED. The eina_log
> errors are only in the log file itself.

still, that helps the build bot or random users running the tests, not
us developers... if you run the full test suite after each development
step it becomes costly and painful :-/


> What is the way to trap these
> error messages that will come if we do test error cases? IIRC you did
> eina_log initially so you surely have a good solution here. :)

see src/tests/ecore_con/ecore_con_test_efl_net_ip_address.c

but it was done in a similar way for other tests: you simply trap the
logs by providing your own print callback and checking whether you got
the message you expected.
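roughly the shape of it, with a made-up expected message and test name:

#include <Eina.h>
#include <check.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static Eina_Bool msg_seen = EINA_FALSE;
static const char *msg_expected = NULL;

/* replaces eina_log's default printer; remembers whether the expected
 * message was seen instead of spilling it to stderr */
static void
_log_capture_cb(const Eina_Log_Domain *d EINA_UNUSED,
                Eina_Log_Level level EINA_UNUSED,
                const char *file EINA_UNUSED, const char *fnc EINA_UNUSED,
                int line EINA_UNUSED, const char *fmt,
                void *data EINA_UNUSED, va_list args)
{
   char buf[1024];

   vsnprintf(buf, sizeof(buf), fmt, args);
   if (msg_expected && strstr(buf, msg_expected))
     msg_seen = EINA_TRUE;
}

START_TEST(some_error_case)
{
   msg_expected = "invalid address"; /* made-up message */
   msg_seen = EINA_FALSE;
   eina_log_print_cb_set(_log_capture_cb, NULL);

   /* ... call the API that is supposed to log that error ... */

   ck_assert_msg(msg_seen, "expected error was not logged");
   eina_log_print_cb_set(eina_log_print_cb_stderr, NULL); /* restore */
}
END_TEST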

and this is the kind of stuff that we could improve as well... I had
to find other similar usages in the code and redo them... but it's not
generic enough to be used everywhere... so a better solution would be
handy, as well as more "ck_assert*" macros for our stuff, like
"ck_assert_eina_list_str_equal()",
"ck_assert_eina_list_str_contains()", and things like that. Also ways
to run the main loop with an automatic timeout: the failing
ecore_con_suite tests were cumbersome to debug because if you didn't
get a result (ie: the ecore_con_url test) the loop would sit there
forever... adding an ecore_timer_add() + ecore_main_loop_quit() helps,
but this could be automatic (see the sketch below).
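something along these lines, as a sketch (both helpers are made up,
not existing API):

#include <Ecore.h>
#include <check.h>
#include <string.h>

static Eina_Bool _timed_out = EINA_FALSE;

static Eina_Bool
_timeout_cb(void *data EINA_UNUSED)
{
   _timed_out = EINA_TRUE;
   ecore_main_loop_quit();
   return ECORE_CALLBACK_CANCEL;
}

/* run the main loop, but never longer than @timeout seconds;
 * returns EINA_FALSE if the guard fired */
static Eina_Bool
loop_run_with_timeout(double timeout)
{
   Ecore_Timer *guard = ecore_timer_add(timeout, _timeout_cb, NULL);

   _timed_out = EINA_FALSE;
   ecore_main_loop_begin();
   if (!_timed_out) ecore_timer_del(guard);
   return !_timed_out;
}

/* made-up check.h-style macro: assert that an Eina_List of strings
 * contains a given entry */
#define ck_assert_eina_list_str_contains(list, str) \
  do { \
     Eina_Bool _found = EINA_FALSE; \
     const Eina_List *_l; \
     const char *_s; \
     EINA_LIST_FOREACH(list, _l, _s) \
       if (_s && strcmp(_s, str) == 0) { _found = EINA_TRUE; break; } \
     ck_assert_msg(_found, "list does not contain '%s'", str); \
  } while (0)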


>> just the time it takes to run it all is one of the reasons I don't
>> run them that frequently. :-/
>
> You know that you can run the suites individually. When doing ecore_con
> stuff for example you could ignore the suites like eina and ecore which
> take a lot of time.

still, I was doing the ip_address tests and the ecore_con_url and
ecore_con_server tests took a painful amount of time... with lots of
logs of their own... why can't I simply say:

ecore_con_suite Efl_Net_Ip_Address:ecore_test_efl_net_ip_address_ipv4_manual_ok

or something similar to that, so I can run a single test case?
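check itself has most of the pieces already: srunner_run() takes a
suite name and a tcase name, and the CK_RUN_SUITE / CK_RUN_CASE
environment variables do the same thing (granularity is the TCase,
though, not the individual test function). A rough sketch of a suite
main(), where ecore_con_suite_build() is a made-up builder name:

#include <check.h>
#include <stdlib.h>

Suite *ecore_con_suite_build(void); /* assumed: builds the whole suite */

int
main(int argc, char **argv)
{
   SRunner *sr = srunner_create(ecore_con_suite_build());
   int failed;

   if (argc > 1)
     srunner_run(sr, NULL, argv[1], CK_NORMAL); /* only the named TCase */
   else
     srunner_run_all(sr, CK_NORMAL);

   failed = srunner_ntests_failed(sr);
   srunner_free(sr);
   return (failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}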


> On a bigger patchset I would still expect that you run
> a full make check before pushing due to all the possible side effects
> the changes could have.
>
> Pretty sure the time costs of not running it, letting me find them,
> bisect them, report them back to you and then dealing with me over mail
> is higher compared to running them before pushing a patchset and fixing
> them. :P

:-) with that I agree, and sorry for those in the past :-D

-- 
Gustavo Sverzut Barbieri
--------------------------------------
Mobile: +55 (16) 99354-9890
