On 1/27/06, Geir Magnusson Jr <[EMAIL PROTECTED]> wrote:
>
>
> Mikhail Loenko wrote:
> > On 1/27/06, Geir Magnusson Jr <[EMAIL PROTECTED]> wrote:
> >>> If the test can be configured only by the few people who work on that
> >>> specific area, and those people are aware of those tests, why not just
> >>> print a log message when the test is skipped?
> >> Because the same set of people that will be bothered by separate suites
> >> will have the same reaction to skipped tests.
> >
> > By 'skipped' in this context I meant that the test did not verify the
> > code but reported the 'passed' status. That set of people will not even
> > notice these tests.
>
> I don't think you want that either.  I think you want the test to not be
> run.

Sorry, my previous message was not quite clear.

What I meant is:

The test is always launched. If it detects that the real configuration does
not allow it to do all the checks, then the test either returns a special
status (i.e. 'passed but did not do all the checks') to the framework or
prints a log message like 'warning: the test was skipped'.

Then a 'regular user' will see that the test passed, while an 'advanced'
one will be able to check what happened.
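
For example, a minimal sketch in JUnit 3 style of what I mean (the provider
name "BC" and the class names are just placeholders for illustration):

    import java.security.Security;
    import junit.framework.TestCase;

    public class RealProviderTest extends TestCase {

        public void testWithRealProvider() throws Exception {
            // the real config does not allow all the checks: report
            // 'passed', but leave a trace for the 'advanced' user
            if (Security.getProvider("BC") == null) {
                System.err.println("warning: testWithRealProvider was "
                        + "skipped: required provider is not installed");
                return; // the framework still counts this as 'passed'
            }
            // ... the full checks against the real provider go here ...
        }
    }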

Thanks,
Mikhail

>
> We're going to automate this whole thing, and we may want to know when a
> given test broke.  If it's falsely reporting "pass", we'll get confused.
>
> geir
>
> >
> > Thanks,
> > Mikhail
> >
> >> This is why I advocate making a "separate tree" for the system tests -
> >> make it clear that they are not the general unit tests...
> >>
> >>> It would not disturb most people, because the test will pass in a
> >>> 'bad' environment. But those who know about these tests will sometimes
> >>> grep the logs to validate the configuration.
> >> IMO, there's too much special information there, too much config.  I'm a
> >> simple person, and like things clean and simple.  I don't like to mix
> >> concerns when possible, and here's a place where it's definitely
> >> possible to separate cleanly.
> >>
> >> I don't see the downside.
> >>
> >> geir
> >>
> >>> Thanks,
> >>> Mikhail
> >>>
> >>>
> >>>> Alternatively, they could be included as part of a general test suite
> >>>> but be purposely skipped over at test execution time, using a test
> >>>> exclusion list understood by the test runner.
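
A sketch of how such an exclusion list could work with a JUnit 3 suite
builder (every class and file name below is invented for illustration):

    import java.io.FileInputStream;
    import java.util.Properties;
    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class ConfigAwareSuite {

        // invented test class names, for illustration only
        private static final String[] ALL_TESTS = {
            "org.example.tests.SimpleCipherTest",
            "org.example.tests.RealProviderTest"
        };

        public static Test suite() throws Exception {
            // tests named in the exclusion list are skipped at run time
            Properties excluded = new Properties();
            excluded.load(new FileInputStream("test-excludes.properties"));
            TestSuite suite = new TestSuite();
            for (int i = 0; i < ALL_TESTS.length; i++) {
                if (!excluded.containsKey(ALL_TESTS[i])) {
                    suite.addTestSuite(Class.forName(ALL_TESTS[i]));
                }
            }
            return suite;
        }
    }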
> >>>>
> >>>>
> >>>> Best regards,
> >>>> George
> >>>> ________________________________________
> >>>> George C. Harley
> >>>>
> >>>> Tim Ellison <[EMAIL PROTECTED]> wrote on 27/01/2006 08:53
> >>>> (Re: [testing] code for exotic configurations):
> >>>>
> >>>> Anton Avtamonov wrote:
> >>>>>> Note that I could create my own provider and test with it, but what
> >>>>>> I would really want is to test how my EncryptedPrivateKeyInfo works
> >>>>>> with AlgorithmParameters from a real provider, as well as how my
> >>>>>> other classes work with real implementations of crypto engines.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Mikhail.
> >>>>>
> >>>>> Hi Mikhail,
> >>>>> There are 'system' and 'unit' tests. Traditionally, unit tests are
> >>>>> developer-level. Each unit test is intended to test just a limited
> >>>>> piece of functionality separately from other sub-systems (a test for
> >>>>> one function, a test for one class, etc.). Such tests must create the
> >>>>> desired environment around the functionality under test and run the
> >>>>> scenario under predefined conditions. Unit tests are usually able to
> >>>>> cover all scenarios (execution paths) for the tested functionality.
> >>>>>
> >>>>> What you are talking about looks like 'system' testing. Such tests
> >>>>> usually run in the real environment and exercise the most common
> >>>>> scenarios (a reduced set; all scenarios usually cannot be covered).
> >>>>> Such testing is not concentrated on particular functionality but
> >>>>> covers the operation of the whole system.
> >>>>> An example is: "run some demo application on some particular
> >>>>> platform, with some particular providers installed, and perform some
> >>>>> operations".
> >>>>>
> >>>>> I think we should currently focus on the 'unit' test approach since
> >>>>> it is more applicable during development (so my advice is to revert
> >>>>> your tests to installing 'test' providers with the desired behavior,
> >>>>> as George proposed).
> >>>>> However, we should think about 'system' scenarios which can be run at
> >>>>> a later stage and act as 'verification' that the entire system works
> >>>>> properly.
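
A minimal sketch of such a 'test' provider (all names are invented, and the
SPI class would be a stub with exactly the behavior the test needs):

    import java.security.Provider;
    import java.security.Security;

    public class TestProvider extends Provider {
        public TestProvider() {
            super("HarmonyTestProvider", 1.0,
                    "provider with predefined behavior for unit tests");
            // map an engine type to a stub SPI implementation
            put("AlgorithmParameters.TESTALG",
                    "org.example.tests.TestAlgorithmParametersSpi");
        }
    }

    // in setUp():    Security.insertProviderAt(new TestProvider(), 1);
    // in tearDown(): Security.removeProvider("HarmonyTestProvider");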
> >>>> I agree with all this.  The unit tests are one style of test for
> >>>> establishing the correctness of the code.  As you point out the unit
> >>>> tests typically require a well-defined environment in which to run, and
> >>>> it becomes a judgment-call as to whether a particular test's
> >>>> environmental requirements are 'reasonable' or not.
> >>>>
> >>>> For example, you can reasonably expect all developers to have an
> >>>> environment to run unit tests that has enough RAM and a writable disk
> >>>> etc., such that if those things do not exist the tests will simply
> >>>> fail. However, you may decide it is unreasonable to expect the
> >>>> environment to include a populated LDAP server or a carefully
> >>>> configured RMI server. If you were to call that environment
> >>>> unreasonable, then testing JNDI and RMI would likely involve mock
> >>>> objects etc. to get good unit tests.
> >>>>
> >>>> Of course, as you point out, once you are passing the unit tests you
> >>>> also need the 'system' tests to ensure the code works in a real
> >>>> environment.  Usage scenarios based on the bigger system are good, as is
> >>>> running the bigger system's test suite on our runtime.
> >>>>
> >>>> Regards,
> >>>> Tim
> >>>>
> >>>>
> >>>>> --
> >>>>> Anton Avtamonov,
> >>>>> Intel Middleware Products Division
> >>>>>
> >>>> --
> >>>>
> >>>> Tim Ellison ([EMAIL PROTECTED])
> >>>> IBM Java technology centre, UK.
> >>>>
> >>>>
> >>>>
> >>>
> >
> >
>
