It seems it is not so easy to define the proper test set...
Let's define the target of running the integration tests. It may be:
1) we want to be sure that everything was integrated correctly?
2) or we want to guarantee the 'product quality' of a build?
3) something else?

If the target is 1), then we should run a minimal test set (it seems the
classlib unit tests on top of a tested VM will be enough) on one platform.
If the target is 2), then each developer should run all known/defined tests
on all platforms. Then there is no time left for development any more;
everyone will be doing release engineering (RE) work.

So we have 2 questions here:
1) a small list of integration tests should be defined. It may be a subset
of the API unit tests, collected as 1 or 2 tests from each API area, just
to be sure that everything was integrated successfully.
2) the RE procedure should be defined. Who is responsible for building the
HDK and placing it on the download page? What tests should be run before
that? How often should it be done?
This is not as obvious as 1). The procedure may be defined, for example, as:
- one of the committers prepares a binary form of the HDK and tests it on
one platform;
- if all tests pass, he places it for download somewhere, and
- other people test it on other platforms;
- if all tests pass, the binaries are promoted and placed on the
'official' download page.
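The steps above could eventually be scripted. A minimal sketch in shell,
where build_hdk, run_tests, and the publish_* commands are hypothetical
placeholders for whatever the real RE procedure would use (ant targets,
scp uploads, etc.):

```shell
#!/bin/sh
# Sketch of the two-stage promotion described above.
# build_hdk, run_tests and the publish_* functions are hypothetical
# stubs; a real script would invoke the build and copy files instead.

build_hdk()        { echo "building HDK binaries"; }
run_tests()        { echo "running integration tests on $1"; return 0; }
publish_staging()  { echo "uploaded to staging area"; }
publish_official() { echo "promoted to official download page"; }

build_hdk

# Stage 1: the committer tests the binaries on one platform.
if run_tests "committer platform"; then
    publish_staging
else
    echo "tests failed; HDK not staged" >&2
    exit 1
fi

# Stage 2: others re-test the staged binaries on the remaining platforms.
for platform in "other platform 1" "other platform 2"; do
    run_tests "$platform" || { echo "failed on $platform" >&2; exit 1; }
done

# Only promote once every platform has passed.
publish_official
```

The point of the sketch is the gating: nothing reaches the 'official'
download page until all platforms have reported passing tests.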

Thanks, Vladimir

PS. Running some scenario tests is actually not integration but functional
testing.

On 6/26/06, Salikh Zakirov <[EMAIL PROTECTED]> wrote:

Alexey Petrenko wrote:
> Some checks before commits are definitely good...
>
> 2006/6/23, Andrey Chernyshev < [EMAIL PROTECTED]>:
>> We may probably also need to define the list of
>> platforms/configurations covered by this procedure.
> I'm not sure that I get your idea correctly.
> Do you suggest to ask every developer to make some checks on different
> platforms and software configurations?
> If so... Yes, it is good for product stability.
> But it will be nearly impossible, because only a very small number of
> developers have access to different platforms and software
> configurations...

The first and foremost question is *what* to run as integration tests,
rather than on what platforms. I think we need to define which use cases
we care about in the form of integration tests.
The more conveniently the integration tests are packaged, the higher the
probability that anyone will run them.
A good example is the "smoke tests" included with DRLVM: they can be built
and run with a single command, 'build.bat test' ('build.sh test' on Linux).

Once the integration test set is defined, we can think about platform
coverage.
BuildBot [1] could be a way for interested parties to contribute CPU
cycles to verify Harmony quality.

[1] http://buildbot.sourceforge.net/
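To illustrate, a master.cfg along the following lines could drive the
DRLVM smoke tests on donated machines. This is only a sketch assuming a
BuildBot-style configuration; the slave names, passwords, repository URL,
and builder name are all hypothetical:

```python
# master.cfg sketch (hypothetical names throughout; assumes a
# BuildBot 0.8-style configuration API)
from buildbot.buildslave import BuildSlave
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import ShellCommand

c = BuildmasterConfig = {}

# Machines donated by interested parties, one per platform.
c['slaves'] = [BuildSlave('linux-ia32', 'secret'),
               BuildSlave('windows-ia32', 'secret')]
c['slavePortnum'] = 9989

# Check out DRLVM and run its single-command smoke tests.
f = BuildFactory()
f.addStep(SVN(svnurl='https://svn.apache.org/repos/asf/incubator/'
                     'harmony/enhanced/drlvm/trunk'))
f.addStep(ShellCommand(command=['./build.sh', 'test']))

c['builders'] = [BuilderConfig(name='drlvm-smoke-linux',
                               slavenames=['linux-ia32'],
                               factory=f)]
```

Each contributor would only need to run a build slave; the master schedules
the same smoke-test command on every attached platform.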

---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

