Hi Geir,

> This is why I advocate making a "separate tree" for the system tests - make it clear that they are not the general unit tests...

Right. But rather than being a separate directory tree in the source repository, that separation could be realised as a logical JUnit test suite grouping.

So, for instance, say we have a bunch of "general" test cases. We would add them to an executable JUnit test suite called, say, TestSuiteGeneral. When TestSuiteGeneral is run, only the test cases that we specifically added to that suite are executed. There is no "skipping" of tests, because we only run the tests that we previously configured (in the TestSuiteGeneral code) to run.

Any "special" tests that either need specialist configuration or else should not be run by just anyone could be programatically grouped into a separate executable JUnit test suite. When that suite gets run it *only* executes our group of special tests.

Just as we can let JUnit tell us what passed, failed, etc., we can use existing JUnit practices to put our test cases into whatever runtime grouping we like. That's clean and simple.
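
Purely as an illustration of the idea (JUnit 3.8-style suites; the test case class names below are made up, not real Harmony classes), the grouping could look something like this:

// TestSuiteGeneral.java -- the suite everyone runs; only the tests
// explicitly added here get executed.
import junit.framework.Test;
import junit.framework.TestSuite;

public class TestSuiteGeneral {
    public static Test suite() {
        TestSuite suite = new TestSuite("General tests");
        suite.addTestSuite(StringTest.class);   // hypothetical test case class
        suite.addTestSuite(HashMapTest.class);  // hypothetical test case class
        return suite;
    }
}

// TestSuiteSpecial.java -- tests that need special configuration or a
// special environment; run only by people who have set that up.
import junit.framework.Test;
import junit.framework.TestSuite;

public class TestSuiteSpecial {
    public static Test suite() {
        TestSuite suite = new TestSuite("Special environment tests");
        suite.addTestSuite(LdapContextTest.class);  // hypothetical test case class
        return suite;
    }
}

Each suite could then be run independently, e.g. "java junit.textui.TestRunner TestSuiteGeneral", and JUnit reports passes and failures only for the tests in that suite.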

Best regards,
George



Geir Magnusson Jr wrote:


Mikhail Loenko wrote:
On 1/27/06, George Harley1 <[EMAIL PROTECTED]> wrote:
But because we live in a less than ideal world there will, no doubt, be
some tests that will demand an
environment that is impossible or at the very least difficult to mock up
for the majority of developers/testers.

I absolutely agree that we are neither living in an ideal world nor trying to make it ideal :)

So until we get a 'system' test suite, why should we weaken existing tests?

One solution could be to segregate those tests into a separate test suite
(available for all but primarily
for those working in the niche area that demands the special environment).

Moving this kind of test would affect many people: they will see separate suites, try them, ask questions...

If the test can be configured only by the few people who work on that specific area, and those people are aware of those tests, why not just print a log message when the test is skipped?

Because the same set of people that will be bothered by separate suites will have the same reaction to skipped tests.

This is why I advocate making a "separate tree" for the system tests - make it clear that they are not the general unit tests...


It would not disturb most people, because the test will pass in a 'bad' environment. But those who know about these tests will sometimes grep the logs to validate the configuration.

IMO, there's too much special information there, too much config. I'm a simple person, and like things clean and simple. I don't like to mix concerns when possible, and here's a place where it's definitely possible to separate cleanly.

I don't see the downside.

geir


Thanks,
Mikhail


Alternatively, they could be
included as part of a general test suite but be purposely skipped over at
test execution time using a
test exclusion list understood by the test runner.


Best regards,
George
________________________________________
George C. Harley





Tim Ellison <[EMAIL PROTECTED]>
27/01/2006 08:53
Please respond to: harmony-dev@incubator.apache.org
To: harmony-dev@incubator.apache.org
cc:
Subject: Re: [testing] code for exotic configurations

Anton Avtamonov wrote:
Note that I could create my own provider and test with it, but what I would really want is to test how my EncryptedPrivateKeyInfo works with AlgorithmParameters from a real provider, as well as how my other classes work with real implementations of crypto engines.

Thanks,
Mikhail.

Hi Mikhail,
There are 'system' and 'unit' tests. Traditionally, unit tests are developer-level. Each unit test is intended to test just a limited piece of functionality separately from other sub-systems (a test for one function, a test for one class, etc.). Such tests must create the desired environment around the functionality under test and run the scenario under predefined conditions. Unit tests are usually able to cover all scenarios (execution paths) for the tested parts of the functionality.

What you are talking about looks like 'system' testing. Such tests usually run in a real environment and exercise the most common scenarios (a reduced set; all scenarios usually cannot be covered). Such testing is not concentrated on particular functionality, but covers the work of the whole system.
An example is: "run some demo application on some particular platform, with some particular providers installed, and perform some operations".

I think currently we should focus on the 'unit' test approach, since it is more applicable during development (so my advice is to revert your tests to install 'test' providers with the desired behavior, as George proposed).
However, we should think about 'system' scenarios which can be run at a later stage and act as 'verification' that the entire system works properly.

I agree with all this. The unit tests are one style of test for establishing the correctness of the code. As you point out, the unit tests typically require a well-defined environment in which to run, and it becomes a judgment call as to whether a particular test's environmental requirements are 'reasonable' or not.

For example, you can reasonably expect all developers to have an environment for running unit tests that has enough RAM, a writable disk, etc., such that if those things do not exist the tests will simply fail.
However, you may decide it is unreasonable to expect the environment to include a populated LDAP server, or a carefully configured RMI server.
If you were to call that environment unreasonable, then testing JNDI and RMI would likely involve mock objects etc. to get good unit tests.

Of course, as you point out, once you are passing the unit tests you
also need the 'system' tests to ensure the code works in a real
environment. Usage scenarios based on the bigger system are good, as is
running the bigger system's test suite on our runtime.

Regards,
Tim


--
Anton Avtamonov,
Intel Middleware Products Division

--

Tim Ellison ([EMAIL PROTECTED])
IBM Java technology centre, UK.