Hi Vito,
Vito Smolej wrote:
Hi Andre:
The list below contains ~25 tests. The number of BAS files, by my last
count, is 270+. Can you comment on the difference? Something like "an
error in the 25 is a show stopper" or "the others are in the P3 to P5
category"? Is there a second tier of test files among those 270+ minus
25 that one should tackle after the top 25? Some of the tests are like
[EMAIL PROTECTED] - the help.bas cost me 4 hours (see issue 80303), and I ran
it for the same reason Sir Hillary climbed Mt. Everest - because it was
there. It was not the first time, all right, but it may well be the
last if you give me a good reason to stop doing it ;)
The 25 tests are called "update tests" because they verify that the
resources have been correctly updated. Only the update tests are written
in such a manner that they can create screenshots to help the
translation teams find dialogs with missing translations, cut-off
strings, etc.
The other tests are grouped into level1 and level2 tests, where each
test group digs a little deeper into a feature/functionality. Most tests
we have belong to the level1 category. Because the number of tests is
quite high, we have also separated them into subprojects like "help" or
"filedlg". This is intended to help QA members select the relevant
tests for a CWS - if the file dialog is changed, you would want to run
the filedlg tests.
Any test may yield a problem that can be considered a showstopper; it
depends on the severity. A crash that has a high impact and is likely to
strike frequently is always worth a P1; otherwise - if it does not break
large areas of the app - it is a P2. P3 is the standard for most issues:
nasty, but not deadly. P4 and P5 are reserved for minor annoyances -
read the exact definitions on the OOo QA pages/wiki.
The help.bas test cannot be accelerated, as it touches every entry in
the help index and actually loads each page - a test that frequently
finds problems with resources, missing pages, memory leaks, etc. This is
one of the tests that usually runs overnight. Running the extras tests
takes even longer.
Wolfram Garten - Sun Germany - eham02 - QA - QA-Engineer wrote:
For OOo 2.1, non-Sun QA members also supported the test runs and added
them to the matrix [1]. Feel free to enter your results into the new
matrix if you are familiar with running a complete module of automated
tests in a language and on a platform of your choice.
I'd like to do this .. but I would need the list of .bas files that
build a complete test.
The best way to provide such a list would be to commit it to
qa/qatesttool/script/.
We have now created a CVS branch called "ooo230" which contains the
tests - and only those - that are relevant for the upcoming release.
If you want to fill the matrix, grab a platform, a language and a
project (which corresponds to one or two CVS modules). Impress and Draw
share the "graphics" module, whereas spreadsheet is a combination of
"calc" and the newer "spreadsheet" module.
The rest is simple. Say you want to test Slovenian on Windows for the
framework module: just run all tests within framework (from the
ooo230 branch!). If they complete OK, you're done; update the wiki. If
some tests fail on the first attempt, kill the user layer and run them
again. If they still fail, write a bug, and if it is a showstopper, add
it to the wiki page.
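The run/retry procedure above can be sketched as a small shell snippet.
The profile path and the run_tests placeholder are my assumptions -
substitute your actual TestTool invocation and user-layer location:

```shell
#!/bin/sh
# Sketch of the workflow: run once; on failure kill the user layer
# (the user profile) and retry. Paths are assumptions, not fact.
PROFILE="$HOME/.openoffice.org2"   # assumed OOo 2.x user layer

run_tests() {
    # Placeholder: start the TestTool on the framework tests here.
    echo "running framework tests from the ooo230 branch..."
}

if ! run_tests; then
    rm -rf "$PROFILE/user"          # kill the user layer
    run_tests || echo "still failing: write a bug / update the wiki"
fi
```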
On any UNIX(-like) OS you can do a find . -name '*.bas' from within the
module; the resulting list is what you are looking for. Framework has
these lists ready to use below tools/profile, and scripts are available
from the scripts module.
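For example, to materialize that list into a file (the list filename is
my own choice, not a project convention):

```shell
# From within a checked-out test module, collect every BASIC test
# script into a sorted list file, then count the tests found.
find . -name '*.bas' | sort > bas-files.lst
wc -l < bas-files.lst
```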
I hope this clarifies a few things. If I missed something, please ask -
I'm famous for stupid answers.
Skotti