On Sun, Feb 20, 2011 at 8:32 AM, Lionel Orry <lionel.o...@gmail.com> wrote:
> On Sun, Feb 20, 2011 at 3:27 AM, Carsten Haitzler <ras...@rasterman.com> wrote:
>> On Sat, 19 Feb 2011 17:08:33 +0000 Lionel Orry <lionel.o...@gmail.com> said:
>>
>>> I understand your explanation about builds. I know builds are stable
>>> and work fairly well on a variety of platforms.
>>>
>>> But when I talk about CI, I'm not talking specifically about rebuilds;
>>> my concern is the same as yours: testing. "Build bots" are badly named
>>> in that they are not restricted to builds, and my intent was to
>>> evaluate what CI software can offer in terms of testing.
>>>
>>> Of course it needs an automated backend, because the build (I should
>>> say the _task_) is machine-controlled. But as long as some tests are
>>> available and automated (make check is our backend here), the CI
>>> software can gather the test results, publish them, warn/blame devs
>>> who broke the build, and also keep track of the results over time and
>>> give us statistics.
>>>
>>> So indeed I was not exactly focused on builds, but rather on testing,
>>> given the current status of the libs.
>>>
>>> Now, I agree with you on another point: the backend (make check or
>>> whatever more specific app) and the actual TEST CASES are the
>>> important things to work on. And there's not much I can contribute
>>> there; all of you are far more experienced. I have no experience in
>>> testing graphical frameworks. So in the meantime, I just tried to see
>>> what we could get from CI software. You've just demotivated me, so I
>>> may give up on this task anyway.
>>
>> oh... oops - hahahah - wasn't meaning to do that. i was meaning to MOTIVATE
>> you - and others - to work on the testing bit. for non-gui things it's
>> easier, as you have the software create data (eg eina data struct stuff),
>> then use eina to do things and check the results are as expected - all in
>> code.
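>>
>> e.g. a minimal sketch of such a test (no test framework assumed here -
>> just abort on mismatch; adapt to whatever harness make check ends up
>> using):
>>
>> #include <Eina.h>
>> #include <stdlib.h>
>>
>> /* build a list in code, then check the results are what we expect -
>>  * no gui needed. */
>> int
>> main(void)
>> {
>>    Eina_List *l = NULL;
>>    int a = 1, b = 2, c = 3;
>>
>>    eina_init();
>>    l = eina_list_append(l, &a);
>>    l = eina_list_append(l, &b);
>>    l = eina_list_append(l, &c);
>>
>>    /* correctness checks: count and order must match what we put in */
>>    if (eina_list_count(l) != 3) abort();
>>    if (*((int *)eina_list_data_get(l)) != 1) abort();
>>    if (*((int *)eina_list_data_get(eina_list_last(l))) != 3) abort();
>>
>>    eina_list_free(l);
>>    eina_shutdown();
>>    return 0;
>> }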
>>
>> indeed gui is the big nasty problem. you build something you expect to sit
>> around and wait for a user to interact with. this means we need to pretend
>> to be that user from code. and how do we check output? screengrab? we need
>> to detect things like choppy rendering (framerate is uneven or drops to 1/2
>> or 1/4 of what it should be), whether the ui transitions in the right way
>> to the final expected state etc. etc. - that's non-trivial. :(
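>>
>> the frame-timing half of that could be sketched roughly like this (the
>> 1.5x threshold is made up, not something tuned; hook frame_cb up with
>> ecore_animator_add(frame_cb, NULL) inside a running main loop):
>>
>> #include <Ecore.h>
>> #include <stdio.h>
>>
>> /* record the time between animator ticks; a "choppy" frame is one
>>  * that takes much longer than the running average of its neighbours. */
>> static double last = 0.0, avg = 0.0;
>> static int frames = 0;
>>
>> static Eina_Bool
>> frame_cb(void *data EINA_UNUSED)
>> {
>>    double now = ecore_time_get();
>>    if (frames > 0)
>>      {
>>         double dt = now - last;
>>         avg = ((avg * (frames - 1)) + dt) / frames;
>>         if ((frames > 10) && (dt > (avg * 1.5)))
>>           printf("choppy frame: %1.4fs vs avg %1.4fs\n", dt, avg);
>>      }
>>    last = now;
>>    frames++;
>>    return EINA_TRUE;
>> }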
>>
>>> I will think about it and have a look at what could help in automating
>>> gui interaction. I hope I will eventually bring something you find
>>> useful.
>>
>> there are things that do this - xrecord and xtest are there for recording
>> user input and playing it back. but that doesn't handle checking the
>> progress along the way... :)
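>>
>> the playback side with xtest is about this simple (a sketch - the
>> coordinates are obviously fake; build with -lX11 -lXtst):
>>
>> #include <X11/Xlib.h>
>> #include <X11/extensions/XTest.h>
>>
>> /* synthesize a mouse click at (100, 100) on the current screen */
>> int
>> main(void)
>> {
>>    Display *d = XOpenDisplay(NULL);
>>    if (!d) return 1;
>>    XTestFakeMotionEvent(d, -1, 100, 100, 0);
>>    XTestFakeButtonEvent(d, 1, True, 0);   /* press   */
>>    XTestFakeButtonEvent(d, 1, False, 0);  /* release */
>>    XSync(d, False);
>>    XCloseDisplay(d);
>>    return 0;
>> }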
>>
>>> BTW, I can try to work on the tests themselves as well, but I don't
>>> know where to start. Is there any test-oriented roadmap anywhere?
>>
>> none - other than "get up and do something - make tests". :) scratch your
>> itch. it sounds like you're the kind of person to whom this kind of testing
>> is important - as you've said. so that'd motivate you - i was hoping. :)
>> there is enough of efl that doesn't require interactivity to test, and that
>> needs testing, so that's better than no testing at all. you can test the
>> lower bits of evas easily: have a set of image files (png, jpg, bmp etc.)
>> and make sure it's a large set (very high res, tiny, with and without
>> alpha, weird sizes and weird versions of the formats), then test evas in
>> loading each image and getting the image data. compare to a known "good"
>> signature. that's easily done. as i said before - eina is easy to test this
>> way. a lot of ecore is too, and eet.
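>>
>> a sketch of that image-load test using the buffer engine (no display
>> needed; the xor checksum is just a stand-in for whatever "known good"
>> signature scheme gets picked):
>>
>> #include <Ecore_Evas.h>
>> #include <stdio.h>
>>
>> /* load an image off-screen and checksum the decoded pixels; a real
>>  * test would compare the result against a stored known-good value. */
>> int
>> main(int argc, char **argv)
>> {
>>    Ecore_Evas *ee;
>>    Evas_Object *img;
>>    unsigned int *pixels, sum = 0;
>>    int w, h, i;
>>
>>    if (argc < 2) return 1;
>>    ecore_evas_init();
>>    ee = ecore_evas_buffer_new(1, 1);
>>    img = evas_object_image_add(ecore_evas_get(ee));
>>    evas_object_image_file_set(img, argv[1], NULL);
>>    if (evas_object_image_load_error_get(img) != EVAS_LOAD_ERROR_NONE)
>>      return 2; /* the loader rejected the file outright */
>>    evas_object_image_size_get(img, &w, &h);
>>    pixels = evas_object_image_data_get(img, EINA_FALSE);
>>    if (!pixels) return 3;
>>    for (i = 0; i < (w * h); i++) sum ^= pixels[i];
>>    printf("%s: %dx%d checksum %08x\n", argv[1], w, h, sum);
>>    ecore_evas_free(ee);
>>    ecore_evas_shutdown();
>>    return 0;
>> }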
>>
>> in fact - i'd suggest picking a small target. let's say eet. and write a
>> comprehensive self-test suite. eet is small and well contained. a test
>> suite (imho) is not just testing that it works right and doesn't crash, but
>> that it also doesn't leak memory, that its memory footprint doesn't
>> unexpectedly become a lot bigger doing the same thing, that it doesn't
>> suddenly become a lot slower etc. etc. so... the test suite tests for
>> correctness AND then also benchmarks. benchmark speed and memory size
>> during operation (before and after) etc. etc. every test records this and
>> maybe puts up results on a web page - it can even nicely graph the results
>> over time. every revision can have the correctness and then speed + memory
>> benchmarks re-run, and if results for speed+memory show a "significant
>> change", maybe a mail is sent to enlightenment-devel alerting which
>> revision caused it?
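>>
>> the correctness core of such an eet test is tiny - roughly this (just
>> a round-trip write/read check; the benchmark side would time this and
>> sample memory before/after on top):
>>
>> #include <Eet.h>
>> #include <string.h>
>> #include <stdlib.h>
>>
>> /* round-trip a blob through an eet file and check we get back
>>  * exactly what we wrote */
>> int
>> main(void)
>> {
>>    const char *payload = "some known test payload";
>>    Eet_File *ef;
>>    char *back;
>>    int size = 0;
>>
>>    eet_init();
>>    ef = eet_open("/tmp/eet-selftest.eet", EET_FILE_MODE_WRITE);
>>    if (!ef) abort();
>>    eet_write(ef, "key", payload, strlen(payload) + 1, 1);
>>    eet_close(ef);
>>
>>    ef = eet_open("/tmp/eet-selftest.eet", EET_FILE_MODE_READ);
>>    if (!ef) abort();
>>    back = eet_read(ef, "key", &size);
>>    if ((!back) || (strcmp(back, payload))) abort();
>>    free(back);
>>    eet_close(ef);
>>    eet_shutdown();
>>    return 0;
>> }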
>>
>> does that motivate you more? :)
>
> Yes it does. But don't take my words too dramatically, I was only
> talking about motivation for the CI evaluation. I still hope I can help
> make the EFL even better.
>
> The guidelines on eet are interesting, and I think someone should
> summarise those points on the wiki; you never know, they may give some
> ideas to volunteers.
>
> At work, I'm getting used to checking for memory leaks, since we develop
> persistent software that should be able to run for months and handle
> huge amounts of data. So I may dig into this for eet for now, using
> valgrind, clang and other tools. I could also see whether some kind of
> fuzz testing would be of interest (if the memory allocation depends on
> the input/output data content and not only its size).

Oh yes! Fuzz testing of eet_data would be great! Same opinion for
benchmarking memory and cpu consumption. That's what the eet test suite
is currently lacking.
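
To sketch the idea (a naive byte-flipper, not a real fuzzer - the file
names here are made up): take a known-good eet file, corrupt one byte,
and check that eet either opens it or fails gracefully. Run it under
valgrind and it covers the memory side too:

#include <Eet.h>
#include <stdio.h>
#include <stdlib.h>

/* flip one random byte of a valid eet file and make sure eet never
 * crashes on it - it may open the file or fail, both are fine */
int
main(void)
{
   FILE *in = fopen("good.eet", "rb"), *out;
   unsigned char buf[65536];
   size_t n;
   Eet_File *ef;

   if (!in) return 1;
   n = fread(buf, 1, sizeof(buf), in);
   fclose(in);
   if (n == 0) return 1;

   srand(12345); /* fixed seed so failures are reproducible */
   buf[rand() % n] ^= 1 << (rand() % 8);

   out = fopen("fuzzed.eet", "wb");
   if (!out) return 1;
   fwrite(buf, 1, n, out);
   fclose(out);

   eet_init();
   ef = eet_open("fuzzed.eet", EET_FILE_MODE_READ);
   if (ef) eet_close(ef);
   eet_shutdown();
   return 0;
}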

In the same kind of idea, we are lacking a tool equivalent to expedite,
but for edje: something that runs a wide range of edje files with
various layouts to evaluate speed and memory consumption. I made this a
GSoC entry, but if someone wants to take it on early, that's good too!
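
The core of such a tool would be timing edje_object_file_set() across
many files and groups - roughly like this sketch (a real tool would
iterate over whole directories and sample memory too):

#include <Ecore.h>
#include <Ecore_Evas.h>
#include <Edje.h>
#include <stdio.h>

/* time how long it takes to load one group from one edje file into an
 * off-screen canvas. usage: prog file.edj groupname */
int
main(int argc, char **argv)
{
   Ecore_Evas *ee;
   Evas_Object *ed;
   double t0, t1;

   if (argc < 3) return 1;
   ecore_evas_init();
   edje_init();
   ee = ecore_evas_buffer_new(800, 480);
   ed = edje_object_add(ecore_evas_get(ee));

   t0 = ecore_time_get();
   if (!edje_object_file_set(ed, argv[1], argv[2])) return 2;
   evas_object_resize(ed, 800, 480);
   t1 = ecore_time_get();

   printf("%s[%s]: %1.5fs to load\n", argv[1], argv[2], t1 - t0);
   edje_shutdown();
   ecore_evas_shutdown();
   return 0;
}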
-- 
Cedric BAIL
