On Sun, Feb 20, 2011 at 3:27 AM, Carsten Haitzler <ras...@rasterman.com> wrote:
> On Sat, 19 Feb 2011 17:08:33 +0000 Lionel Orry <lionel.o...@gmail.com> said:
>
>> I understand your explanation about builds. I know builds are stable
>> and do work fairly well on a variety of platforms.
>>
>> But when I talk about CI, I don't talk specifically about rebuilds;
>> I share the same concern as you: I think about testing. Build bots
>> are misnamed in that they are not restricted to builds, but my intent
>> was to evaluate the possibilities of CI software in terms of testing.
>>
>> Of course it needs an automated backend, because the build (I should
>> say the _task_) is machine-controlled. But as long as some tests are
>> available and are automated (make check is our backend here), the CI
>> soft can gather the test results, publish them, warn/blame devs who
>> broke the build, and also keep track of the results over time to give
>> us statistics.
>>
>> So indeed I was not exactly focused on build, but rather on testing,
>> given the current status of libs.
>>
>> Now, I agree with you on another point: the backend (make check or
>> whatever more specific app) and the actual TEST CASES are the
>> important topic to work on. And there's nothing much I can give back
>> about that, all of you will be much more experienced. I have no
>> experience in graphical framework testing. So in the meantime, I just
>> tried to see what we could get from CI software. You've just
>> demotivated me, so I may give up on this task anyway.
>
> oh... oops - hahahah - wasn't meaning to do that. i was meaning to MOTIVATE you
> - and others to work on the testing bit. for non-gui things it's easier as you
> have software create data (eg eina data struct stuff) and then use eina to do
> things and check the results are as expected - all in code.
>
> indeed gui is the big nasty problem. you build something you expect to sit
> around and wait for a user to interact with. this means we need to pretend to
> be that user from code. and how do we check output? screengrab? we need to
> detect things like choppy rendering (framerate is uneven or drops to 1/2 or 1/4
> of what it should be), whether the ui transitions in the right way to the final
> expected state etc. etc. - that's non-trivial. :(
>
>> I will think about and have a look at what could help in automating
>> gui interaction. I hope I will eventually bring something you find
>> useful.
>
> there are things that do this - xrecord and xtest are there for recording user
> input and playing it back. but that doesn't handle checking the progress along
> the way... :)
>
>> BTW, I can try to work on the tests themselves as well, but I don't
>> know where to start. Is there any test-oriented roadmap anywhere?
>
> none - other than "get up and do something - make tests". :) scratch your itch.
> it sounds like you're the kind of person to whom this kind of testing is
> important - as you've said. so that'd motivate you - i was hoping :) there is
> enough of efl that doesn't require interactivity to test, so that's better than
> no testing at all. you can test the lower bits of evas easily: have a set of
> image files (png, jpg, bmp etc.) and make sure it's a large set (very high res,
> tiny, with and without alpha, weird sizes and weird versions of the formats),
> then test evas in loading the image and getting the image data. compare to a
> known "good" signature. that's easily done. as i said before - eina is easy to
> test this way. a lot of ecore is too, and eet.
>
> in fact - i'd suggest picking a small target. let's say eet. and write a
> comprehensive self-test suite. eet is small and well contained. a test suite
> (imho) is not just testing that it works right and doesn't crash, but that it
> also doesn't leak memory, that its memory footprint doesn't unexpectedly
> become a lot bigger doing the same thing, that it doesn't suddenly become a lot
> slower etc. etc. so... the test suite tests for correctness AND then also
> benchmarks. benchmark speed and memory size during operation (before and after)
> etc. etc. then every test records this and maybe puts up results on a web page
> - it can even nicely graph the results over time. every revision of change can
> have the correctness and then speed + memory benchmarks re-run, and if results
> for speed + memory show a "significant change" maybe a mail is sent to
> enlightenment-devel alerting which revision caused it?
>
> does that motivate you more? :)

Yes it does. But don't take my words too dramatically, I was only
talking about motivation for the CI evaluation. I still hope I can help
make the EFL even better.

The guidelines on eet are interesting, and I think someone should
summarise those points on the wiki; you never know, it may give some
ideas to volunteers.

At work, I'm used to checking for memory leaks since we develop
long-running software that should be able to run for months and process
huge amounts of data. So I may dig into this for eet for now, using
valgrind, clang and other tools. I could also see whether some kind of
fuzz testing would be of interest (if the memory allocation depends on
the input/output data content and not only its size).

I don't have much time this week-end though, so I'll get back to you as
soon as I can; that may mean a week or so.

regards,
Lionel

>
>> On Sat, Feb 19, 2011 at 2:10 PM, Carsten Haitzler <ras...@rasterman.com>
>> wrote:
>> > On Mon, 14 Feb 2011 12:17:03 +0100 Lionel Orry <lionel.o...@gmail.com> 
>> > said:
>> >
>> > just fyi - CI is one of the lesser worries we have. so let's not make this
>> > more than it is - builds for us are stable and well tested. i rebuild efl
>> > between 1-4 times a day. sometimes much more. between the developers we
>> > have little issue with rebuilds.
>> >
>> > now let's get to the core of this - REBUILDS aren't a problem. i rebuild efl
>> > and then some in about 6 mins on my desktop/laptop. i don't use distcc -
>> > nothing beyond the single cpu there. some smart Makefiles allow parallel
>> > builds between libs. so this isn't an issue. making sure things build is
>> > the least of our issues.
>> >
>> > what we need to track is BUGS. when someone introduces a bug - the longer
>> > it is not found, the harder it is to fix later. this means we aren't really
>> > about rebuilds. we are about TESTING every change that we can. we have SOME
>> > test suites right now - expedite is an automated one for evas, we have some
>> > for eina as well and a bit for ecore - but they are mostly very thin and
>> > don't test a lot. elementary has a test, but it's interactive.
>> >
>> > what we need to do is work on fleshing out tests where they are mostly good
>> > (expedite for example) so they test more or everything - and can automate
>> > the test. then for others create tests AND find ways to automate them -
>> > increasingly we will need to find a way to automate gui interaction as
>> > that's a huge amount of what we do - and then verify that the results of
>> > the interaction (logical and display) are "right".
>> >
>> > this here has nothing to do with build bots, hosts, jenkins etc. but
>> > requires building and improving other infra and tools. we should be doing
>> > that long before we care about the infra to run those tests.
>> >
>> > so... who is volunteering to work on the tests? :)
>> >
>> >> On Sat, Feb 12, 2011 at 2:32 AM, Ravenlock <ravenl...@ravenlock.us> wrote:
>> >> >
>> >> > Can FreeBSD users play too, with the majority of y'all running linux?
>> >> >
>> >>
>> >> In the case of Buildbot and Jenkins, the concept of master/slave is
>> >> central and should definitely be considered seriously. That means not
>> >> only a build server, but distributed build over a build farm of
>> >> computers with different OSes.
>> >>
>> >> I know Jenkins better so I'll talk a little bit about that.
>> >>
>> >> Say the master is a Linux box. Fine, it runs the java container and
>> >> the Jenkins application as the server machine, and it can also be
>> >> used as a build machine by being assigned a task to build a library
>> >> or application, generate doxygen docs, or collect code coverage /
>> >> unit test results / whatever a script or a plugin can do for us.
>> >>
>> >> But this master can also trigger a build on a distant machine,
>> >> whether it is running a Unix (in that case, an ssh connection between
>> >> the two machines is the easiest and most painless way, with a
>> >> dedicated 'hudson_slave' user, for example, for security) or even
>> >> Windows (commands issued via JNLP or a Windows service).
>> >>
>> >> One of the machines could be running a 'BSD OS.
>> >>
>> >> Going further: suppose we only want one machine because the other
>> >> ones are owned by the devs, who bite and don't want a distant robot
>> >> to execute commands and take up their CPU.
>> >>
>> >> Fine again. Let's virtualize. Jenkins is able to connect, via its
>> >> plugins, to various virtual machine interfaces (VMware, VirtualBox).
>> >> QEMU is easy to automate, so no plugin is needed. So we can simply
>> >> run different virtual machines on the server, on demand, when a
>> >> build is triggered on them. It is transparent from the Jenkins
>> >> interface: the virtual machines appear like other physical nodes.
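As a rough configuration sketch (image name, memory size and port numbers are all made up for the example), the on-demand VM could be booted headless with its sshd forwarded to the host, and the CI job then builds over ssh exactly as it would on a physical node:

```shell
# Boot a FreeBSD build image headless; host port 2222 -> guest port 22.
qemu-system-x86_64 \
    -m 1024 -smp 2 \
    -drive file=freebsd-build.qcow2,if=virtio \
    -net nic,model=virtio -net user,hostfwd=tcp::2222-:22 \
    -display none -daemonize

# The master runs the build over ssh, like on any other slave:
ssh -p 2222 hudson_slave@localhost \
    'cd efl && ./autogen.sh && make && make check'
```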
>> >>
>> >> So, many possibilities. FreeBSD users will have their playground too,
>> >> don't worry. :)
>> >>
>> >> Lionel
>> >>
>> >> ------------------------------------------------------------------------------
>> >> The ultimate all-in-one performance toolkit: Intel(R) Parallel Studio XE:
>> >> Pinpoint memory and threading errors before they happen.
>> >> Find and fix more than 250 security defects in the development cycle.
>> >> Locate bottlenecks in serial and parallel code that limit performance.
>> >> http://p.sf.net/sfu/intel-dev2devfeb
>> >> _______________________________________________
>> >> enlightenment-devel mailing list
>> >> enlightenment-devel@lists.sourceforge.net
>> >> https://lists.sourceforge.net/lists/listinfo/enlightenment-devel
>> >>
>> >
>> >
>> > --
>> > ------------- Codito, ergo sum - "I code, therefore I am" --------------
>> > The Rasterman (Carsten Haitzler)    ras...@rasterman.com
>> >
>> >
>>
>
>
> --
> ------------- Codito, ergo sum - "I code, therefore I am" --------------
> The Rasterman (Carsten Haitzler)    ras...@rasterman.com
>
>
