Mathias Bauer wrote:
Hi Thorsten,
[... snip ...]
And also again: let's stop this discussion (and especially allegations
obviously caused by misunderstandings) until we have got information
about what the tests cover exactly and if this is what we want.
Do you think it makes sense to break
Hi Bernd,
Bernd Eilers wrote:
Why not mix it?
because these kinds of tests need to be done by the developers
themselves. If a developer wants to enhance these tests he can do it
without changing processes.
The regression tests we have talked about are being done by
OTHER people - not
Hi Bjoern!
Thank you for your comments.
Bjoern Milcke wrote:
P.S.: I must admit that I don't know how much of the infrastructure I
used is also available to non-Sun-employees, but I think the idea is to
make everything available to everybody in the long run.
The solution which has to be
Hi!
Bernd Eilers wrote:
Why not mix it?
I forgot something:
As described in the section on tool support for life-cycle testing in
Software Test Automation (Mark Fewster and Dorothy Graham,
Addison-Wesley, 1999, ISBN 0-201-33140-3), page 7, tool support is
available for testing in every stage of
Jörg Jahnke wrote:
Hi,
thanks for the many replies so far. As far as I see it, there are three
major concerns:
- The regression tests might take too long to run,
- the regression tests might be too cumbersome to execute,
- the findings of the regression tests might not justify the
Joerg Sievers wrote:
2.2.0
http://wiki.services.openoffice.org/wiki/OOoRelease1AutomationTestMatrix
#i71529 - Crash while pasting OLE in Calc
#i70517 - Office process does not end after exit (was i71766)
#i71882 - Crash while search into Starsuite Help - Fixed in OOE680m6
#i71891 - Crash
Thorsten Ziehm wrote:
Hi Stefan,
the same for automated GUI testing with TestTool. They can run in
parallel on different machines.
I also think that the API tests could be changed easily to run in
parallel on one machine, something that is important if you see the
growing use of multi core
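The fan-out described above can be sketched with a plain worker pool; the suite names and the `run_suite` stand-in below are assumptions for illustration, not the actual OOo test runner:

```python
# Minimal sketch: run independent API test suites in parallel on one
# multi-core machine. Suite names and run_suite are invented stand-ins.
from concurrent.futures import ThreadPoolExecutor

def run_suite(name):
    # Stand-in for launching one self-contained test suite in its own
    # office process and collecting its verdict.
    return (name, "PASS")

suites = ["sw_unoapi", "sc_unoapi", "sd_unoapi", "sch_unoapi"]

# Each suite runs in its own worker; results are collected as a dict.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_suite, suites))
```

The only real requirement this assumes is that the suites are independent of each other, i.e. share no office instance or user profile.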
Hi Thorsten,
Thorsten Ziehm wrote:
as you are directly answering to me I assume that your mail should also
address me but you completely missed my point. Though my hope was that I
had made my standpoint clear it seems that I failed to do so. I hope I
get it through this time. I don't want to
Hi Mathias,
Mathias Bauer schrieb:
So again: I want to get a good compromise between effort (hours to run)
and result (coverage of code that is known to be prone to regressions).
So before I can agree to any regression testing I must know if this
testing really investigates the parts of the
Hi Jörg,
Jörg Jahnke wrote:
I agree that a good compromise between effort and result should be
found. But you seem to miss the point that the planned tests are not
meant to do Unit Testing but are on a different level of testing, which
is System Testing (up into the area of Integration
Jörg Jahnke wrote:
Hi Mathias,
Mathias Bauer schrieb:
So again: I want to get a good compromise between effort (hours to run)
and result (coverage of code that is known to be prone to regressions).
So before I can agree to any regression testing I must know if this
testing really
Hi Mathias,
Mathias Bauer schrieb:
I agree that a good compromise between effort and result should be
found. But you seem to miss the point that the planned tests are not
meant to do Unit Testing but are on a different level of testing, which
is System Testing (up into the area of Integration
Hi Mathias and Martin,
most of you want to find regressions in less than an hour. This tooling
doesn't exist for a complex program like OpenOffice.org. Christoph
wrote that all API-tests will run more than 4 hours. And API testing is
one of the quickest kinds of tests we have.
I want the same as
Hi Eike,
Eike Rathke wrote:
Maybe that's part of how the problem was perceived: discussions
_internal to Sun_. Or was Rene involved? Did he even know there was
a discussion ongoing?
- internal to Sun - I thought that was the meaning of some mailings
here in this thread. If I'm wrong, sorry.
Hi,
thanks for the many replies so far. As far as I see it, there are three
major concerns:
- The regression tests might take too long to run,
- the regression tests might be too cumbersome to execute,
- the findings of the regression tests might not justify the efforts to
run them.
Hi Stefan,
the same for automated GUI testing with TestTool. They can run in
parallel on different machines.
Thorsten
Stefan Zimmermann wrote:
... and that is exactly what Christoph wrote...
The UNO-API test will be a distributed test. This means that the whole
API is split into small
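Splitting a flat test list into per-machine chunks, as described, needs nothing more elaborate than a round-robin partition; a minimal sketch (the distribution and result-collection machinery is omitted, and the helper name is invented):

```python
def split_into_chunks(tests, n_machines):
    """Round-robin split of a flat test list into one chunk per machine,
    so the chunks can run concurrently and finish at roughly the same time."""
    chunks = [[] for _ in range(n_machines)]
    for i, test in enumerate(tests):
        chunks[i % n_machines].append(test)
    return chunks
```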
Hi there!
Jörg Jahnke wrote:
Hi Hennes,
Hennes Rohling schrieb:
...
But don't make everything mandatory. If I change a string in the setup
or change platform-dependent code for system integration, I don't want
to do a mandatory test that checks whether all dialogs in the Calc
still work.
Hi Martin,
Martin Hollmichel wrote:
Do we have some statistics in which areas we have what amount of
regressions ?
Yes
* only for releases
* only for show-stopper tasks
All these issues have been introduced in child-workspaces (CWS) and have
been found by automated GUI tests. It would
Hi!
Stefan Zimmermann wrote:
That information can't be right. API tests are highly parallelized and
should be able to complete in 30min to one hour.
But we didn't discuss this kind of test in this thread. Sorry.
We have talked about the third kind of tests (after unit and
API tests).
Hi all,
Hi,
thanks for the many replies so far. As far as I see it, there are three
major concerns:
- The regression tests might take too long to run,
- the regression tests might be too cumbersome to execute,
- the findings of the regression tests might not justify the efforts to
run them.
Hi Bernd,
Analyse the code coverage of each and every test, then compare it to
the modules added to the CWS, and then, when running the tests
automatically, just run those which cover modules added to the CWS? We
would just need a table in some database somewhere where individual
tests are assigned to a
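The table idea above can be sketched as a simple coverage map that is intersected with the modules touched on the CWS; the module and test names here are invented for illustration:

```python
# Hypothetical test -> covered-modules table (would live in a database).
COVERAGE = {
    "test_writer_styles": {"sw", "svx"},
    "test_calc_paste_ole": {"sc", "so3"},
    "test_chart_wizard": {"sch", "sc"},
}

def select_tests(changed_modules, coverage=COVERAGE):
    """Return only the tests whose coverage intersects the CWS's modules."""
    changed = set(changed_modules)
    return sorted(t for t, mods in coverage.items() if mods & changed)
```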
Joerg Sievers wrote:
Hi!
Hi there!
Stefan Zimmermann wrote:
That information can't be right. API tests are highly parallelized and
should be able to complete in 30min to one hour.
But we didn't discuss this kind of test in this thread. Sorry.
We have talked about the third kind
Jörg Jahnke wrote:
But I agree that a proper selection of tests is a good idea. Perhaps a
user should be able to call e.g. dmake regressiontests -run:sw,basic
to execute special tests for the writer and the basic.
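Parsing such a `-run:sw,basic` option is straightforward; a sketch under the assumption that the option syntax is exactly as in the example above (the helper name is invented):

```python
def parse_run_option(arg):
    """Parse an option like "-run:sw,basic" into the list of module test
    targets to execute; return [] when the argument is not a -run: option."""
    prefix = "-run:"
    if not arg.startswith(prefix):
        return []
    return [m for m in arg[len(prefix):].split(",") if m]
```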
You misunderstood me. I took that for granted. But I also think that
before
Thorsten Ziehm wrote:
We will never find all regressions with TestTool or any other tooling
or human testing. This has nothing to do with 'release testing' or so.
We do not have test cases which identify problems in displaying
documents or something similar. Perhaps intensive usage of
Hi Frank,
Frank Schönheit - Sun Microsystems Germany wrote:
Hi Joerg,
We haven't identified the tests. The requirement was that they should
be rock solid, and we have given those RESOURCE and MAIN FUNCTIONALITY
tests to customers, and they were able to deal with them.
It makes sense to use
Mathias Bauer wrote:
[...]
Anything else would be insane. I took that for granted. But I also want
to believe that running several hours of tests for e.g. automatic styles
would be worth the effort. This is a good example where I suspect that
possible regressions would stay unnoticed by
Hi Hennes,
[...]
What you wrote is an argument against automated regression tests on
CWSs. If we are not able to detect regressions on whatever workspace
(MWS/CWS), we don't even need to think about it.
I do not understand your points here. I gave an example where all the
test mechanisms we have
Hi Frank,
Frank Schönheit - Sun Microsystems Germany schrieb:
Hi Joerg,
Do you agree to do regression testing with the testtool BEFORE you
(the developer) give your work to the QA, to get CWSes integrated faster?
You won't have to maintain the testing code, nor do you have to learn
the
Hi Rene,
Rene Engelhard schrieb:
Or imagine such a test run (failing or not) short before a release,
where you have a small CWS fixing a showstopper only. We don't really
want to have a mandatory 3 day delay in such situations, do we?
Best example currently: cws freetypettg. tiny *security*
Hi!
Thorsten Behrens wrote:
I'm really not sure we're all still on the same page here. I would
hope that QA runs something quite similar to the suggested minimal set
of tests on _every_ CWS anyway - so this has nothing to do with the
tests being mandatory or run more often - it's about *when*
Mathias Bauer [EMAIL PROTECTED] writes:
I'm not sure if you understood my concern. Let me put it simple: what
makes us think that the current tests we are talking about that AFAIK
have been used in QA for testing the master for quite some time will
help to find regressions that currently stay
Hi Rüdiger,
Rüdiger Timm schrieb:
Jörg Jahnke wrote:
Hi Hennes,
Hennes Rohling schrieb:
...
But don't make everything mandatory. If I change a string in the
setup or change platform-dependent code for system integration I don't
want to do a mandatory test that tests whether all dialogs in
Hi Mathias,
Mathias Bauer schrieb:
Jörg Jahnke wrote:
But I agree that a proper selection of tests is a good idea. Perhaps a
user should be able to call e.g. dmake regressiontests -run:sw,basic
to execute special tests for the writer and the basic.
You misunderstood me. I took that for
Hi Rene,
I do not want to discuss CWSes here in detail.
On the way to OOo 2.3 we integrated more than 180 CWS in the last 3
months and at the end we will be near 400. Perhaps 10% of them do not
need mandatory automated tests for 4-8 hours. But in some cases the
developer and the QA persons do
Hi Frank,
Hmm. Even today I sometimes have 3 or more CWSs to handle in parallel.
If the life time of a CWS becomes longer, it will become more difficult
to keep track of what you're doing. If a test fails after three days,
but meantime you started another project/CWS which you cannot leave
Hi Mathias,
Mathias Bauer schrieb:
So I do not think that it makes sense to discuss only the 'release
testing' mode. In the past the regressions were integrated before the
QA started switching into this mode.
I'm not sure if you understood my concern. Let me put it simple: what
makes us
Hi Rene,
Or imagine such a test run (failing or not) short before a release,
where you have a small CWS fixing a showstopper only. We don't really
want to have a mandatory 3 day delay in such situations, do we?
Best example currently: cws freetypettg. tiny *security* patch.
(As the freetype
Joerg Sievers [EMAIL PROTECTED] writes:
Thorsten Behrens wrote:
yes, I definitely think it's worth it. But please make it run
automagically and unattended - just like the smoketest does.
Do you agree on these two sentences?
The first step in introducing a process is that it should run
Hi!
Thorsten Behrens wrote:
So, what are you referring to?
Let us collect constraints about the idea itself and not about the
tool or how to use it. That's the third step in introducing something.
Cu,
Jogi
http://qa.openoffice.org/qatesttool
Jörg Jahnke wrote:
See above. I think we are talking about different levels of testing.
Of course. And I'm still waiting to become convinced that the level of
testing that you suggest, or more exactly the way it is implemented, has
enough value to justify the effort.
So let's see how the tests
Hi,
Thorsten Ziehm wrote:
Or imagine such a test run (failing or not) short before a release,
where you have a small CWS fixing a showstopper only. We don't really
want to have a mandatory 3 day delay in such situations, do we?
Best example
[ better late than never... ]
Hi,
Frank Schönheit - Sun Microsystems Germany wrote:
Why is it a serious hurdle to wait let's say 3 days? For me this is not
so obvious.
Imagine your frustration if the test fails after 2 days and
Joerg Sievers wrote:
- if a resource file is broken or missing the office will crash if you
open the affected dialog; that's what these resource tests are doing:
Open all dialogs once, click on every button and leave all dialogs
with Cancel
- a list of business cards, a list of colors, a
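The dialog walk described above can be modelled in a few lines; the dialog registry below is a stand-in with invented names, since the real test drives the office GUI through the TestTool rather than a Python dict:

```python
# Stand-in registry: dialog name -> buttons it contains.
DIALOGS = {
    "FormatCharacter": ["OK", "Cancel", "Help"],
    "InsertTable": ["Insert", "Cancel"],
}

def walk_dialogs(dialogs):
    """Open every dialog once, "click" each of its buttons, and always
    leave via Cancel; a missing resource would crash at the open step."""
    visited = []
    for name, buttons in dialogs.items():
        for button in buttons:
            visited.append((name, button))   # exercise each control once
        visited.append((name, "Cancel"))     # leave the dialog with Cancel
    return visited
```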
Martin Hollmichel wrote:
Do we have some statistics in which areas we have what amount of
regressions ?
For example I would think that regressions caused by broken resources
don't occur that much any more, and are also easy to find by broad
testing. On the other hand I could imagine that
Hi Thorsten,
On Friday, 2007-06-01 12:22:25 +0200, Thorsten Ziehm wrote:
6 days from RfQA to QA approval (running tests?), now we are on the 8th
and miss the release date because rc3 will only be uploaded today/monday
(why do we need a rc3 anyway?) and keep our users one week more with open
Hi Frank, all,
Frank Schönheit - Sun Microsystems Germany wrote:
[...]
I'm not voting against tests which finish in a reasonable time frame
(and fulfill other requirements said in the thread), but 3 days is quite
a lot of time ...
Given all the other requirements and granted all the other
Hi Christoph,
Christoph Neumann wrote:
Hi,
Thorsten Ziehm schrieb:
Hi Mathias,
Do you think it's worth it?
I think it's not primarily a matter of running the regression suite
before QA approval but of having a small set of meaningful regression
tests available?
Exactly, and I would
Hi Jörg,
[...]
Ause just informed me about another solution that might remove the need
to have the test run on every CWS i.e. we wouldn't need to have the
tests mandatory. His idea is to run the tests on the Master Workspace
prior to announcing the CWS as ready for CWS use. If a test fails
Hi Rüdiger,
Rüdiger Timm schrieb:
Ause just informed me about another solution that might remove the
need to have the test run on every CWS i.e. we wouldn't need to have
the tests mandatory. His idea is to run the tests on the Master
Workspace prior to announcing the CWS as ready for CWS
Martin Hollmichel wrote:
Jörg Jahnke schrieb:
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the
hardware being used, and therefore
Hi Mathias,
Mathias Bauer wrote:
Martin Hollmichel wrote:
Jörg Jahnke schrieb:
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the
Thorsten Ziehm wrote:
There is something else that should be thought-provoking: AFAIK most or
nearly all discovered regressions we had on the master in the last
releases haven't been found by the existing automated tests. They have
been found by manual testing of users. So what makes us think
Thorsten Ziehm [EMAIL PROTECTED] writes:
That way a developer could get an _optional_ means at hand of doing
regression tests, with no obligation to always run these tests. If
the developer feels that he should run the tests, then he could do
so and invest the (machine) time. If he thinks
Hi Mathias,
Mathias Bauer schrieb:
...
I know that each and every test I can make is able to find bugs and also
regressions. This is a correct but nevertheless trivial statement that
of course no one would deny. But for the same reason why QA nowadays does
not execute each and every test we
Hi Mathias,
Mathias Bauer wrote:
[...]
You are right, not all regressions in the Master were found by the
automated tests. But some of them were found when more tests were
mandatory. In the past only 2 smaller tests were mandatory for approving
a CWS. Many testers run more than these
Hi!
Oliver Specht - Sun Germany -Hamburg wrote:
Do you have such tests? Those that are able to find more regressions
than they overlook? Those that run only several hours not weeks like the
current ones?
Yes.
Cu,
Jogi
http://qa.openoffice.org/qatesttool
Hi Frank,
Frank Schönheit - Sun Microsystems Germany wrote:
That'd be unacceptable for required tests.
Yes.
Tests are only useful if you are able to track down the problem with a
reasonable effort. If the outcome of the test is 'foo failed', but it
takes you hours to just identify what foo
Hi!
Mathias Bauer wrote:
Exactly, and I would prefer to have regression tests based on the API or
complex test framework and not based on the GUI testtool. We shouldn't
raise even more barriers to contribution.
We are comparing two different pairs of shoes here. API testing is a
step in front of
Hi Martin,
Martin Hollmichel wrote:
1. Tests should be reproducible and generate easy-to-read and
unambiguous logs with clear error codes.
clear is very ambiguous :-) For those who have read the manual it is
like reading English; for others it is very unclear.
The Test
Hi Jörg
Jörg Jahnke wrote:
I agree that we shouldn't raise additional barriers that keep people
from contributing code. So the question might be how to do more testing
without adding a discernible barrier.
that's why I haven't started the game ever
You asked for a go to put the ALREADY
Hi Jörg,
Jörg Jahnke wrote:
Efficiency is important. Thus my insisting on first discussing and
selecting the tests
and then deciding how to deal with them. If you think that the 45 test
cases identified by the QA team are a proper selection we should have a
closer look on them and identify
Thorsten Ziehm wrote:
Hi Mathias,
Mathias Bauer wrote:
[...]
You are right, not all regressions in the Master were found by the
automated tests. But some of them were found when more tests were
mandatory. In the past only 2 smaller tests were mandatory for approving
a CWS. Many testers
Hi Hennes,
ever seen that page?
http://qa.openoffice.org/ooQAReloaded/AutomationTeamsite/ooQA-TeamAutomationTestlist.html
We will collect the tests which have Category 1. And we are able to
see that sw has been changed and not sc
Again, that wasn't the suggestion Joerg Jahnke
Hi Hennes,
Hennes Rohling schrieb:
...
But don't make everything mandatory. If I change a string in the setup
or change platform-dependent code for system integration, I don't want
to do a mandatory test that tests whether all dialogs in the Calc still work.
- Hennes
The problem with not
Hi Joerg,
Do you agree to do regression testing with the testtool BEFORE you
(the developer) give your work to the QA, to get CWSes integrated faster?
You won't have to maintain the testing code, nor do you have to learn
the script language or debug the test code
Hmm? Do you
Hi Ingrid,
Given all the other requirements and granted all the other concerns,
back to the pure time question. It's still not obvious to me what the
problem is with for example 3 days.
Is it really only the psychological thing?
Not only, but probably also.
Or is there more?
Hmm. Even
Hi Joerg,
We haven't identified the tests. The requirement was that they should
be rock solid, and we have given those RESOURCE and MAIN FUNCTIONALITY
tests to customers, and they were able to deal with them.
It makes sense to start with fewer than these ~45 and have a look if we
stop
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the hardware
being used, and therefore cost time and hardware-resources.
Do you think
On 5/30/07, Jörg Jahnke [EMAIL PROTECTED] wrote:
Hi,
Hi Jörg,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the hardware
being used, and
Jörg Jahnke schrieb:
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the
hardware being used, and therefore cost time and
Jörg Jahnke [EMAIL PROTECTED] writes:
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a
CWS. These tests would probably run several hours, depending on the
hardware being used, and therefore cost time and
Hi Jörg,
Jörg Jahnke wrote:
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the hardware
being used, and therefore cost time and
Ingrid Halama wrote:
Hi Jörg,
Jörg Jahnke wrote:
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the
hardware being used, and therefore
Hi Oliver,
Do you have such tests? Those that are able to find more regressions
than they overlook?
Hmm? How do you measure *this*? If they find regressions, that's good.
Every test will overlook some regressions.
Those that run only several hours not weeks like the
current ones?
That's
Hi Martin,
Martin Hollmichel schrieb:
Jörg Jahnke schrieb:
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the
hardware being used, and
Hi,
Oliver Specht - Sun Germany -Hamburg schrieb:
...
Hi,
before we all start dreaming:
Do you have such tests? Those that are able to find more regressions
than they overlook? Those that run only several hours not weeks like the
current ones?
A limited set of tests exists which could be
it's not primarily a matter of running the regression suite before
QA approval but of having a small set of meaningful regression tests
available?
The problem with such tests not being mandatory is that, sooner or
later, some tests would break. That again would lead to a state where
the
Hi Frank,
[...]
Those that run only several hours not weeks like the
current ones?
That's important indeed. If I have to wait several days between finishing
my builds and passing the CWS to QA, just because of the test, this
would certainly be a serious hurdle.
Why is it a serious hurdle to
Hi Martin,
Martin Hollmichel schrieb:
...
I still think that making a test mandatory is not the first step in the
process. I would like to name these requirements, with these priorities:
1. Tests should be reproducible and generate easy-to-read and
unambiguous logs with clear error codes.
Hi Jörg,
On Wednesday, 2007-05-30 11:37:47 +0200, Jörg Jahnke wrote:
And I want to repeat Thorsten's wish: Please make it as easy as the
performance test - a direct button in the HTML page of the CWS.
The plan is to have an _easy_ way of running the tests. Whether a button
in the EIS
Martin Hollmichel wrote:
Jörg Jahnke schrieb:
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the
hardware being used, and therefore
Jörg Jahnke wrote:
Hi,
Ingrid Halama schrieb:
Hi Jörg,
Jörg Jahnke wrote:
Hi,
one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the
Hi Frank et al.,
Frank Schönheit - Sun Microsystems Germany wrote:
Hi Oliver,
Do you have such tests? Those that are able to find more regressions
than they overlook?
Hmm? How do you measure *this*? If they find regressions, that's good.
Every test will overlook some regressions.
Those
Hi Martin,
[...]
I still think that making a test mandatory is not the first step in the
process. I would like to name these requirements, with these priorities:
1. Tests should be reproducible and generate easy-to-read and
unambiguous logs with clear error codes.
done, with the planned
Hi Mathias,
Do you think it's worth it?
I think it's not primarily a matter of running the regression suite
before QA approval but of having a small set of meaningful regression
tests available?
Exactly, and I would prefer to have regression tests based on the API or
complex test framework
Hi,
Mathias Bauer schrieb:
Exactly, and I would prefer to have regression tests based on the API or
complex test framework and not based on the GUI testtool. We shouldn't
raise even more barriers to contribution.
Ciao,
Mathias
I agree that we shouldn't raise additional barriers that keep
Thorsten Ziehm wrote:
Why 1 hour? Why not one night or 24 hours or so? It is only machine
power and resources you need for it.
ok, I'd also be ok with half an hour or 4 hours.
The problem with longer test runs is that you will have to deal with
more tasks in parallel the longer automated
Hi Eike,
Eike Rathke schrieb:
The plan is to have an _easy_ way of running the tests. Whether a button
in the EIS application will be the best way I cannot say, but it seems
to be a good example.
So far the performance tests can't be run for CWSs that were not created
on Sun Hamburg servers.
Mathias Bauer [EMAIL PROTECTED] writes:
And I want to repeat Thorsten's wish: Please make it as easy as the
performance test - a direct button in the HTML page of the CWS.
The plan is to have an _easy_ way of running the tests. Whether a button
in the EIS application will be the best
Frank Schönheit - Sun Microsystems Germany [EMAIL PROTECTED] writes:
That's important indeed. If I have to wait several days between finishing
my builds and passing the CWS to QA, just because of the test, this
would certainly be a serious hurdle.
Generally, no. For a normal CWS, cycle time
Hi Ingrid,
Why is it a serious hurdle to wait let's say 3 days? For me this is not
so obvious.
Imagine your frustration if the test fails after 2 days and
20 hours ... Or the turnaround times you have when the test fails there,
you fix it, and the test fails again an hour later.
Eike Rathke [EMAIL PROTECTED] writes:
With tests that need 3 days to complete you can be sure that almost no
CWS owner from outside Sun will run these tests. If the tests would be
mandatory you would end up with a situation where Sun engineers would
mirror community CWSs and create install
Hi Thorsten,
That's important indeed. If I have to wait several days between finishing
my builds and passing the CWS to QA, just because of the test, this
would certainly be a serious hurdle.
Generally, no. For a normal CWS, cycle time in QA is weeks, so this
really does not add significant
Hi Frank,
Frank Schönheit - Sun Microsystems Germany wrote:
Hi Ingrid,
Why is it a serious hurdle to wait let's say 3 days? For me this is not
so obvious.
Imagine your frustration if the test fails after 2 days and
20 hours ... Or the turnaround times you have when the test
Frank Schönheit - Sun Microsystems Germany [EMAIL PROTECTED] writes:
Generally, no. For a normal CWS, cycle time in QA is weeks, so this
really does not add significant overhead.
I call the difference between 3 weeks and 4 weeks significant. Also,
there are more than enough CWS where your
Hi Frank,
Frank Schönheit - Sun Microsystems Germany wrote:
Hi Ingrid,
Why is it a serious hurdle to wait let's say 3 days? For me this is not
so obvious.
Imagine your frustration if the test fails after 2 days and
20 hours ... Or the turnaround times you have when the test
Hi Frank,
Frank Schönheit - Sun Microsystems Germany wrote:
Hi Thorsten,
That's important indeed. If I have to wait several days between finishing
my builds and passing the CWS to QA, just because of the test, this
would certainly be a serious hurdle.
Generally, no. For a normal CWS, cycle
Hi,
I have mentioned the reason why the Wiki page speaks of mandatory tests
in a previous mail:
Jörg Jahnke schrieb:
The problem with such tests not being mandatory is that, sooner or
later, some tests would break. That again would lead to a state where
the user of the tests could not be
Jörg Jahnke wrote:
Hi,
I have mentioned the reason why the Wiki page speaks of mandatory tests
in a previous mail:
Jörg Jahnke schrieb:
The problem with such tests not being mandatory is that, sooner or
later, some tests would break. That again would lead to a state where
the user of
Jörg Jahnke wrote:
Hi,
I have mentioned the reason why the Wiki page speaks of mandatory tests
in a previous mail:
Jörg Jahnke schrieb:
The problem with such tests not being mandatory is that, sooner or
later, some tests would break. That again would lead to a state where
the user of