Re: [dev] amount of stopper / regressions for 3.1 release

2009-04-03 Thread Mathias Bauer
Matthias B. wrote:

 On Fri, Apr 3, 2009 at 8:51 AM, Juergen Schmidt juergen.schm...@sun.com 
 wrote:
 First of all thanks for all the issue reports. 131 issues over which period?
 
 Our oldest issue is from May 17, 2005, so that would be almost exactly 4 
 years.
 
 You heavily use OO and of course the API.
 Probably for your use-cases every single reported bug was a showstopper.
 
 No. We can work around things that have never worked. It's the
 regressions and incompatibilities that are the problem, because they
 mean that a new OOo version fails where an old OOo version has worked.
 This messes with our deployment and updating plans, because it either
 forces an update of our extension on our customers or it prevents an
 update of OOo. In an organization with several thousand desktops managed
 by 2 dozen independent IT departments, some of which are not exactly
 fans of this whole MS Office - OOo migration of which our extension
 is a major part, this is a real bitch.

What about the following:

Finding regressions early requires test cases, and we could use some
help creating them. So if there are APIs your application relies on,
could you write some Java test cases for them (have them do whatever
you see fit) and contribute them to OOo? They would need to be usable
with our qadevooo-runner, but AFAIK that shouldn't be a problem.

Regards,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-04-02 Thread Maximilian Odendahl

Hi,


for 3.1. The result is that OOo 3.1 will be the worst OOo ever from
our POV. It's so broken that it's impossible to use with our app and
our organization (that's over 9000 OOo installations) will have to
completely skip 3.1 and pray for 3.2 to be better.


can you name some of these regressions which prevent you from using 3.1?

Best regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-04-01 Thread Matthias B.
On Fri, Mar 13, 2009 at 12:30 PM, Rich r...@hq.vsaa.lv wrote:
 seeing as my bug made the list, i'll chime in.
 i've reported quite some bugs, absolute majority being found in real life
 usage. but i'm not the casual user, as i run dev snapshots most of the time,
 which increases chances of stumbling upon bugs.

 i'd like to think i'm not the only one who does that ;),

No, you aren't. We're in the same boat. And our app uses 10% of the
UNO API (measured by X* interfaces used), so there's a significant
chance that a regression will hit us.

 i think you already have guessed where i'm heading - stability and usability
 (as in fit for purpose) of dev snapshots. if dev snapshot has a critical
 problem that prevents me from using it i'll regress to last stable
 version, and, quite likely, will stay with it for a while.

I wholeheartedly agree. This was a big issue with 3.1. OOo was so
broken for a while that it was unusable and we simply stopped
building OOo since we saw no point to it. When we got back into
testing (OOO310_m6) we found several regressions but we were too late
for 3.1. The result is that OOo 3.1 will be the worst OOo ever from
our POV. It's so broken that it's impossible to use with our app and
our organization (that's over 9000 OOo installations) will have to
completely skip 3.1 and pray for 3.2 to be better.

 that means i won't find some other bug that will prevent somebody else from
 using a dev build and finding yet another bug and so on ;)

Exactly. Because we have to skip 3.1 due to its brokenness, there may
be other regressions that we won't be able to find until 3.2 and we
may end up having to skip that release also, which will prevent us
from finding more bugs, and so on...

It's absolutely no fun to maintain a complex app that interfaces with
OOo. There are so many regressions and backwards-incompatible changes
that about every second MINOR version release of OOo breaks our app
and requires us to change stuff and/or introduce workarounds.

There needs to be more stability and fewer regressions!

Matthias




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-23 Thread Ingrid Halama

Hi André,

André Schnabel wrote:

Hi Ingrid,

Ingrid Halama wrote:

Mathias Bauer wrote:


The problem is that the usual test runs obviously don't find the bugs
  
That is not obvious to me. Too often the mandatory tests haven't been
run. And if the tests do not find an important problem, well, then the
tests should be improved.


See my post at d...@qa (
http://qa.openoffice.org/servlets/ReadMsg?list=dev&msgNo=11964 ).


Our toolset for automated tests even reports 'all OK' when the testtool
has correctly identified a stopper bug. Many other bugs cannot be found
by the testtool at all (e.g. visual problems like issue 99662).


I think the benefits of automated testing are overestimated (or the
costs are underestimated).
Convwatch is a tool that already tests for visual problems quite nicely.
It loads a set of documents with a CWS build and creates 'screenshots'
(PostScript printouts). The same is done with the corresponding master
on which the CWS is based. Then the 'screenshots' from the CWS are
compared with the 'screenshots' from the master, and it is detected
automatically whether the CWS would introduce visual changes. With this
test and a good collection of test documents you can test a major part
of the office automatically: 'load and display'. I would like to see a
simple button within each CWS HTML frontend to start this test, and a
simple color indicator that says whether the test has passed.
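The load-and-compare scheme described above boils down to: render every test document twice (once with the master build, once with the CWS build) and diff the renderings page by page. A minimal sketch of that comparison step in Python - an illustration only, not Convwatch's actual code; the directory layout and function names are assumptions:

```python
import hashlib
from pathlib import Path


def checksum(path: Path) -> str:
    """Return a digest of one rendered page (e.g. a PostScript printout)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def compare_renderings(master_dir: str, cws_dir: str) -> list[str]:
    """List the pages whose rendering differs between the master and CWS builds."""
    master = {p.name: checksum(p) for p in Path(master_dir).iterdir() if p.is_file()}
    cws = {p.name: checksum(p) for p in Path(cws_dir).iterdir() if p.is_file()}
    # A page counts as a visual change when it is missing on either side
    # or its rendered bytes differ between the two builds.
    return [name for name in sorted(master.keys() | cws.keys())
            if master.get(name) != cws.get(name)]
```

The simple color indicator asked for above would then reduce to checking whether the returned list is empty.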





that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.
  
I don't agree. Preventing the integration of bugs earlier in the
production phase, especially before the integration into the master
trunk, would give us much more freedom. Now we always need to react to
show stoppers, and react and react, and then the release timeline is
at risk. All that because the bugs are already in the product. If you
instead detect the bugs before they are integrated into the product,
you can keep cool and refuse the bad CWS; then it is not the release
that is at risk but only the single bad CWS.



The point is *if* you detect the bugs in the CWS. At the moment we
obviously do not identify enough critical issues during CWS testing
(even if the mandatory tests are done).



I assume you are talking about the VCL testtool tests. But we have more
tests: we have UNO API tests, we have performance tests and we have
convwatch. And we have many more VCL tests than those that are
mandatory. So there is much room for improvement even without writing
any new test.
I miss an incentive for good behaviour in these plans. There are
people who do the feature design, who do the development work, who do
the testing, who create the automatic tests, who do the documentation -
and after all these people have done their work, and let's assume they
have done it well and without show stoppers, after all this there comes
someone else and says: oh no, I do not think that I want to have this
for this release, there are other things that I want more, and in sum I
guess it might be too much for the next release? Where is the incentive
for good behaviour here?


The problem is that we first need to prove that good behaviour is
really helpful and prevents critical bugs in the master. I'm all for
promoting good behaviour - and in most cases I would like to see more
people who follow the rules (like publishing specs correctly on the
specs website).
Well then, let's try convwatch as a mandatory test for the next release
to prove whether it helps.


But we must be allowed to review our processes and identify the parts
that are not so helpful.
Fully agreed. In addition, it must be allowed to identify parts where
helpful things are missing.


There is none; instead there is an incentive to push changes into the
product quickly and skip careful testing.


If this is some stimulation: thanks for all the chart specs that are
correctly linked on the specs website. These are very helpful for
speeding up testing, getting an idea about the new functionality, and
in many cases are the only resource for getting our translations correct.
Thanks André! ;-) It is nice to hear that that part of the work has a
real benefit.
My idea is that the integration of a CWS could and should be the
gratification and incentive for good behaviour.


Ingrid


André






Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-22 Thread André Schnabel

Hi Ingrid,

Ingrid Halama wrote:

Mathias Bauer wrote:


The problem is that the usual test runs obviously don't find the bugs
  
That is not obvious to me. Too often the mandatory tests haven't been
run. And if the tests do not find an important problem, well, then the
tests should be improved.


See my post at d...@qa (
http://qa.openoffice.org/servlets/ReadMsg?list=dev&msgNo=11964 ).


Our toolset for automated tests even reports 'all OK' when the testtool
has correctly identified a stopper bug. Many other bugs cannot be found
by the testtool at all (e.g. visual problems like issue 99662).


I think the benefits of automated testing are overestimated (or the
costs are underestimated).





that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.
  
I don't agree. Preventing the integration of bugs earlier in the
production phase, especially before the integration into the master
trunk, would give us much more freedom. Now we always need to react to
show stoppers, and react and react, and then the release timeline is
at risk. All that because the bugs are already in the product. If you
instead detect the bugs before they are integrated into the product,
you can keep cool and refuse the bad CWS; then it is not the release
that is at risk but only the single bad CWS.



The point is *if* you detect the bugs in the CWS. At the moment we
obviously do not identify enough critical issues during CWS testing
(even if the mandatory tests are done).



I miss an incentive for good behaviour in these plans. There are
people who do the feature design, who do the development work, who do
the testing, who create the automatic tests, who do the documentation -
and after all these people have done their work, and let's assume they
have done it well and without show stoppers, after all this there comes
someone else and says: oh no, I do not think that I want to have this
for this release, there are other things that I want more, and in sum I
guess it might be too much for the next release? Where is the incentive
for good behaviour here?


The problem is that we first need to prove that good behaviour is
really helpful and prevents critical bugs in the master. I'm all for
promoting good behaviour - and in most cases I would like to see more
people who follow the rules (like publishing specs correctly on the
specs website).


But we must be allowed to review our processes and identify the parts
that are not so helpful.


There is none; instead there is an incentive to push changes into the
product quickly and skip careful testing.


If this is some stimulation: thanks for all the chart specs that are
correctly linked on the specs website. These are very helpful for
speeding up testing, getting an idea about the new functionality, and
in many cases are the only resource for getting our translations correct.


André




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-20 Thread André Schnabel

Hi again,

Andre Schnabel wrote:


BTW, since there was the request for an automatic solution: anybody 
knows what became of the mirrorbrain offering (on d...@tools IIRC)?




Yes .. I'm in contact with Christoph Noack on this, to provide at least a test 
environment.
  
Sorry, I mixed up the tools. You may ask Florian Effenberger what the
status is.


André




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-18 Thread Rich

On 2009.03.17. 19:56, Mathias Bauer wrote:

Hi,

Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems wrote:


Hi all,

Thorsten Ziehm wrote:

...
IMHO, we do not find critical problems (show stoppers) in DEV builds 
very early, only half of them are found early according to my experience.

Some data about the show stoppers, which I have fixed in the last days:


thinking about the time when bugs are found, i got reminded about one issue -
actually getting the dev/testing builds.
i'll admit ignorance about how these things are handled, but that might
also help to understand how many potential testers see it.
from my point of view, dev snapshots aren't always available fast
enough, and quite often in a dev series only every other build is
available to the public. of course, this means less testing, and testing
later in the development process.


for example, currently supposedly OOO310_m6 is available, public 
download - m5


supposedly DEV300_m43 is available, public download - m41

ideally, dev snapshots would be built & distributed automatically as
soon as possible, and all of them. otherwise testers of development
versions will either always come in later in the cycle, or only
dedicated qa-involved people will do the testing - those who know
where/how to get a more recent dev snapshot - but that probably
wouldn't quite qualify as sufficient real-world usage...



ISSUE   INTRODUCED IN   FOUND IN
i99822  DEV300m2 (2008-03-12)   OOO310m3 (2009-02-26)

i99876  DEV300m30 (2008-08-25)  OOO310m3

i99665  DEV300m39 (2009-01-16)  OOO310m3

i100043 OOO310m1OOO310m4 (2009-03-04)

i100014 OOO310m2OOO310m4

i100132 DEV300m38 (2008-12-22)  OOO310m4

i100035 SRCm248 (2008-02-21)OOO310m4
This issue is special because it was a memory problem that by accident
went undetected. Thus it should not be counted in this statistic.


Looking at this concrete data, I personally can say that we find more or 
less half of the show stoppers early.


I'm currently investigating the blocker list. Here's a rough summary:
of 77 issues, 42 have been introduced before the feature freeze (IIRC
that was in DEV300_m39). 9 issues are regressions on the OOO310 code
line; the others are still not classified, but I'm working on it. I
will follow up with the complete results later.

Interestingly, a considerable number of showstoppers were already
present in 3.0.1, some in 3.0, and a very few even in older versions.
Some of them weren't even regressions at all.

An interesting problem is

http://www.openoffice.org/issues/show_bug.cgi?id=100235

This issue was introduced very early (DEV300_m26 or so), but couldn't be
found before we had localized builds.

Regards,
Mathias

--
 Rich




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-18 Thread Thorsten Ziehm

Hi Rich,

Rich wrote:

On 2009.03.17. 19:56, Mathias Bauer wrote:

Hi,

Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems wrote:


Hi all,

Thorsten Ziehm wrote:

...
IMHO, we do not find critical problems (show stoppers) in DEV builds 
very early, only half of them are found early according to my 
experience.

Some data about the show stoppers, which I have fixed in the last days:


thinking about the time when bugs are found, i got reminded about one issue -
actually getting the dev/testing builds.
i'll admit ignorance about how these things are handled, but that might
also help to understand how many potential testers see it.
from my point of view, dev snapshots aren't always available fast
enough, and quite often in a dev series only every other build is
available to the public. of course, this means less testing, and testing
later in the development process.


for example, currently supposedly OOO310_m6 is available, public 
download - m5


The milestones are uploaded as soon as possible. When a milestone
is ready for use, the upload starts. The sheer amount of uploaded files
and the delivery time to the mirror network take more than 24 hours.
So the current situation (builds for different platforms in different
languages) simply needs that much time.

I do not know how this can be sped up for all OOo users. Do you have
any idea?

Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-18 Thread sophie gautier
Hi,
Rich wrote:
 On 2009.03.17. 19:56, Mathias Bauer wrote:
 Hi,

 Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems wrote:

 Hi all,

 Thorsten Ziehm wrote:
 ...
 IMHO, we do not find critical problems (show stoppers) in DEV builds
 very early, only half of them are found early according to my
 experience.
 Some data about the show stoppers, which I have fixed in the last days:
 
 thinking about time when bugs are found i got reminded about one issue -
 actually getting the dev/testing builds.
 i'll admit ignorance about how these things are handled, but that might
 also help to understand how many possible testers would see it.
 from my point of view, dev snapshots aren't always available fast
 enough, and quite often in dev series only every other build is
 available to public. of course, this means less testing and testing
 later in the development process.
 
 for example, currently supposedly OOO310_m6 is available, public
 download - m5
 
 supposedly DEV300_m43 is available, public download - m41
 
 ideally, dev snapshots would be built & distributed automatically as
 soon as possible, and all of them. otherwise testers of development
 versions will either always come in later in the cycle, or only
 dedicated qa-involved people will do the testing - those who know
 where/how to get a more recent dev snapshot - but that probably
 wouldn't quite qualify as sufficient real-world usage...

I fully agree with you.

Kind regards
Sophie





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-18 Thread Thorsten Behrens
Hi Thorsten,

you wrote:
 I do not know how this can be sped up for all OOo users. Do you have
 any idea?

I would guess a handful of servers can easily cope with the demand
a dev snapshot generates, at least within the first 24 hours. No
need to have the same requirements there as for a release build.

BTW, since there was the request for an automatic solution: anybody 
knows what became of the mirrorbrain offering (on d...@tools IIRC)?

Cheers,

-- Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-18 Thread Andre Schnabel
Hi,

 Original message 
 Von: Thorsten Behrens t...@openoffice.org
 
 BTW, since there was the request for an automatic solution: anybody 
 knows what became of the mirrorbrain offering (on d...@tools IIRC)?


Yes .. I'm in contact with Christoph Noack on this, to provide at least a test 
environment.

(Ok, actually Christoph sent me some mails but I was not able to answer yet)

André




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-17 Thread Mathias Bauer
Hi,

Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems wrote:

 Hi all,
 
 Thorsten Ziehm wrote:
 Hi Mathias,
 
Mathias Bauer wrote:
 Ingrid Halama wrote:

 This is not sufficient. Heavy code restructurings and cleanups are 
 not bound to the feature freeze date, 
 Perhaps they should? And at least as far as it concerns me they are.

 but have a great potential to introduce regressions also. I think the 
 show-stopper phase must be extended in relation to the feature-phase 
 *and* the normal-bug-fixing-phase.

 Furthermore, what does it help to simply let different people do the
 nominations while the criteria are not clear? So I would like to
 suggest a criterion: in the last four weeks before the feature freeze,
 only those (but all those) CWSes get nominated that have a complete
 set of required tests run successfully. Same for the last four weeks
 before the end of the normal-bug-fixing phase. We could start with
 the tests that are there already and develop them further.

 The problem is that the usual test runs obviously don't find the bugs
 that now bite us, most of them have been found by users or testers
 *working* with the program. Adding more CWS test runs and so shortening
 the time for real-life testing will not help us but make things worse.
 
 The problem is a bit more complex. The testers and test script writers
 do not have any time for writing new test cases for new functionality,
 they do not have time to check fixed issues in the master, they do not
 have time to check code changes in a CWS as much as they should, and in
 the end you are right: they do not have the time for real-life testing.
 
 But I want to qualify that last point a little. The QA community and
 the L10N testers find critical problems in DEV builds very early. Most
 of the regressions which were reported in the past days on the releases
 list are regressions from the most recent builds. Some of the issues
 weren't identified very early by Sun employees, because they have to
 look into a lot of issues these days to identify the show stoppers.
 
 
 IMHO, we do not find critical problems (show stoppers) in DEV builds 
 very early, only half of them are found early according to my experience.
 Some data about the show stoppers, which I have fixed in the last days:
 
 ISSUE INTRODUCED IN   FOUND IN
 i99822DEV300m2 (2008-03-12)   OOO310m3 (2009-02-26)
 
 i99876DEV300m30 (2008-08-25)  OOO310m3
 
 i99665DEV300m39 (2009-01-16)  OOO310m3
 
 i100043   OOO310m1OOO310m4 (2009-03-04)
 
 i100014   OOO310m2OOO310m4
 
 i100132   DEV300m38 (2008-12-22)  OOO310m4
 
 i100035   SRCm248 (2008-02-21)OOO310m4
 This issue is special because it was a memory problem that by accident
 went undetected. Thus it should not be counted in this statistic.
 
 Looking at this concrete data, I personally can say that we find more or 
 less half of the show stoppers early.

I'm currently investigating the blocker list. Here's a rough summary:
of 77 issues, 42 have been introduced before the feature freeze (IIRC
that was in DEV300_m39). 9 issues are regressions on the OOO310 code
line; the others are still not classified, but I'm working on it. I
will follow up with the complete results later.

Interestingly, a considerable number of showstoppers were already
present in 3.0.1, some in 3.0, and a very few even in older versions.
Some of them weren't even regressions at all.

An interesting problem is

http://www.openoffice.org/issues/show_bug.cgi?id=100235

This issue was introduced very early (DEV300_m26 or so), but couldn't be
found before we had localized builds.

Regards,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Ziehm

Hi Frank,

if a CWS is needed and is important, it is possible to have it through in 2 days:
1 night for automated testing and 1 day for checking the fixes. That
this isn't possible any more when 30-40 CWSes are ready for QA on the
same day is correct!

So your experience is correct.

Thorsten


Frank Schönheit - Sun Microsystems Germany wrote:

Hi Thorsten,


The time to master isn't a problem currently, I think.


That's not remotely my experience.

See dba31g
(http://eis.services.openoffice.org/EIS2/cws.ShowCWS?Id=7708&OpenOnly=false&Section=History)
for a recent example of a CWS which needed 36 days from ready for QA
to integrated state (and add a few more days for the milestone to be
finished).

A few more?
dba31a: 26 days
dba31b: 42 days
dba31e: 26 days
dba31f: 13 days
dba31h: 23 days
mysql1: 17 days (and that one was really small)
rptfix04: 9 days (though this number is misleading for other reasons)

dba32a is currently in QA - for 9 days already (which admittedly is also
somewhat misleading, since a regression was fixed meanwhile without
resetting the CWS to new).

Okay, there were also:
fixdbares: 2 days
dba31i: 7 days


Don't get me wrong, that's not remotely QA's responsibility alone.
Especially the first OOO310 milestones had a lot of delay between CWSes
being approved and being integrated.

But: time-to-master *is* a problem. At least for the majority of CWSes
which I participated in, over the last months.

Ciao
Frank






Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello,

Thorsten Ziehm wrote:

Hi Mathias,

Mathias Bauer wrote:


More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.

Yes, more testing on the master is welcome, that is true. But most
testing must be done on the CWS. When broken code is in the master code
line it takes too much time to fix it. And then you cannot do quality
assurance. You can do testing, but that has nothing to do with holding
a quality standard!

The time to master isn't a problem currently, I think. A general bugfix
CWS can be 'approved by QA' in 2 days. But when the master is broken,
you do not know whether a bug is in the CWS or in the master. It takes
longer to check the problems, and that is what we have now. A reduced
time to master will come when the general quality of the master and the
CWSes is better.

So more testing on CWSes is also welcome!


Yes, full ACK to the last sentence.
And this is not only a task for the Sun people. The people who are
interested in a CWS must be able to test it - and that even if they
aren't able to build OOo on their own.


As we discussed in many other threads: what are the problems with
testing a CWS built by a buildbot? You can check the fixes, you can
test functionality etc. You are perhaps right that automated testing
with the VCLTestTool doesn't always show the same results. But for this
the QA team from Sun is working on TestBots, so the test scripts don't
need to be adjusted for every test environment we will find in the
community. What more do you want?

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Behrens
Thorsten Ziehm wrote:
 As we discussed in many other threads: what are the problems with
 testing a CWS built by a buildbot? You can check the fixes, you can
 test functionality etc. You are perhaps right that automated testing
 with the VCLTestTool doesn't always show the same results. But for this
 the QA team from Sun is working on TestBots, so the test scripts don't
 need to be adjusted for every test environment we will find in the
 community. What more do you want?

Builds that reliably represent master builds, I guess. What sense
does QAing random builds make? Exaggerating a bit: Mechtilde could
then even use an ooo-build-based version for CWS QA, as buildbots
also tend to patch a few things (to make the build succeed,
usually).

Also pointed out before: a prerequisite for a testbot is a buildbot
that builds Hamburg-compatible CWSes. So you need those anyway - why
go two steps at once instead of one after the other?

Cheers,

-- Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Mechtilde
Hello Thorsten, *,

Thorsten Ziehm wrote:
 Hi Mechtilde,
 

 As we discussed in many other threads: what are the problems with
 testing a CWS built by a buildbot?

1. Because it doesn't build like OOO310_m5, even after this build was
announced as ready for use. A patch was missing.

 You can check the fixes, you can test functionality
 etc.

No, I can't, because sometimes functionality is missing that I need in
my daily work. So I can't test a CWS build in my daily work.

I know that this particular problem has been solved in the meantime, so
I have to do more tests on the buildbot's build to see whether it is
now possible to use it.

In general it would be better if the differences between the buildbot
and the Sun environment were documented.

Then it would be easier to decide where the problems come from.

 You are perhaps right that automated testing with the VCLTestTool
 doesn't always show the same results. But for this the QA team from Sun
 is working on TestBots, so the test scripts don't need to be adjusted
 for every test environment we will find in the community.

Then it is not possible for the community to do automated tests for
CWSes which come from the community, because nobody can evaluate the
results in a reasonable time.
For example: in QUASTe I can see that OOO310_m5 under Linux shows 14
errors and 27 warnings, and the tests aren't finished yet.

So independently of how many errors I find in a build from the buildbot,
I have to take much time to evaluate these errors. So I can't test any
CWS build, because then I don't know whether the errors come from the
CWS build or from the system around it.

 What more do you want?

see above

I want to be in the position to test CWS builds (esp. those coming from
the community, which can't all be tested by Sun) and to decide that a
build has no errors and all things are ok.

The latest example for this is the CWS SQlSyntaxhighlightning.

If there are more things to clarify, I suggest a short talk in
German, maybe on IRC.

My capacity to describe it in English is limited.

Kind Regards

Mechtilde


---
Dipl. Ing. Mechtilde Stehmann
## http://de.openoffice.org
## Contact person for German-language QA
## Free office suite for Linux, Mac, Windows, Solaris
## My site: http://www.mechtilde.de
## PGP encryption welcome! Key-ID: 0x53B3892B





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello Thorsten, *,


[...]


Then it is not possible for the Community to do automated tests for CWSes
which come from the community, because nobody can evaluate the results in
a reasonable time.
For example: in QUASTe I can see that OOO310_m5 under Linux shows 14
errors and 27 warnings, and the tests aren't finished yet.

So independently of how many errors I find in a buildbot build, I have
to spend a lot of time evaluating these errors. So I can't test any
CWS build, because I don't know whether the errors come from the CWS
build or from the environment around it.


Since last week, checking the results with QUASTe doesn't take that much
time anymore. QUASTe has a new feature: you can check on one page the
differences between master and CWS. Only the differences are shown on
this page, which is different from the past.
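The differences-only comparison Thorsten describes can be sketched roughly like this (a hypothetical illustration; QUASTe itself is a web application, and the test names and status values here are invented):

```python
# Sketch: show only the test results that differ between a master run
# and a CWS run, as the new QUASTe page does. The result dicts map
# test name -> status and are invented for illustration.
def result_diff(master, cws):
    """Return {test: (master_status, cws_status)} for differing tests."""
    return {
        test: (master.get(test, "missing"), cws.get(test, "missing"))
        for test in set(master) | set(cws)
        if master.get(test, "missing") != cws.get(test, "missing")
    }

master = {"sw_writer_basic": "ok", "sc_calc_load": "ok", "sd_export": "warn"}
cws = {"sw_writer_basic": "ok", "sc_calc_load": "error", "sd_export": "warn"}

print(result_diff(master, cws))  # only sc_calc_load differs
```

The point of such a view is exactly what the thread asks for: a tester only has to evaluate the handful of differing results, not the full list of errors and warnings.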

I should ask Helge to promote this feature in a blog or so.

Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello Thorsten, *

Thorsten Ziehm schrieb:

Hi Mechtilde,




For example: in QUASTe I can see that OOO310_m5 under Linux shows 14
errors and 27 warnings, and the tests aren't finished yet.

So independently of how many errors I find in a buildbot build, I have
to spend a lot of time evaluating these errors. So I can't test any
CWS build, because I don't know whether the errors come from the CWS
build or from the environment around it.

Since last week, checking the results with QUASTe doesn't take that much
time anymore. QUASTe has a new feature: you can check on one page the
differences between master and CWS. Only the differences are shown on
this page, which is different from the past.


And this only works if Master *and* CWS use the same environment.


The comparison shows you the differences between the test run in a Sun
environment and your test run. So it can also be used to check whether
your test environment shows the same test results.


Or do I have the possibility to check in the results of a master build
from a buildbot in the same way as the results of a CWS build from a
buildbot?


If you want to participate in this project, we can also integrate such a
mechanism. It isn't possible to get a master test run on all BuildBots.
If you want to do this, QUASTe can be extended to handle such
information. It would also be possible for those platforms and
environments which aren't supported by Sun (QA), e.g. 64-bit Linux or
Deb packages or ...


Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Martin,

  From my perspective one reason for the high amount of regressions is the 
 high amount of child workspaces integrated shortly before a feature 
 freeze. At the moment the ITeam (the QA representative) does the 
 nomination before feature freeze. As an immediate action (for the 
 upcoming 3.2 release) from my side I will limit this freedom until 4 
 weeks before feature freeze; in the last 4 weeks before feature freeze, 

In my opinion, it's strictly necessary then to have parallel development
branches earlier than we have today. That is, if there are a lot of
CWSes coming in but not approved/nominated for the next release, then
we should *not* pile them up, but instead have a different branch to
integrate them into. Otherwise, the quality problems will just be
shifted to after the release.

And yes, extending the various phases we have - feature implementation,
bug fixing, release candidates - as suggested by Ingrid, would help, too.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer schrieb:

Ingrid Halama wrote:

This is not sufficient. Heavy code restructurings and cleanups are not 
bound to the feature freeze date, 

Perhaps they should? And at least as far as it concerns me they are.

but have a great potential to 
introduce regressions also. I think the show-stopper phase must be 
extended in relation to the feature-phase *and* the normal-bug-fixing-phase.


Furthermore what does it help to simply let different people do the 
nominations while the criteria are not clear? So I would like to suggest 
a criterion: In the last four weeks before the feature freeze only those 
(but all those) CWSes get nominated that have a complete set of 
required tests run successfully. Same for the last four weeks before end 
of normal-bug-fixing-phase. We could start with the tests that are there 
already and develop them further.


The problem is that the usual test runs obviously don't find the bugs
that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should and at the
end you are right, they do not have the time for real-life testing.

But on this last point I want to qualify things a little bit. The QA
community and the L10N testers find critical problems in DEV builds very
early. Most of the regressions which were reported in the past days on
the releases list are regressions from the most recent builds. Some of
the issues weren't identified very early by Sun employees, because they
have to look into a lot of issues these days to identify the show
stoppers.

So the QA project has a big problem with the mass of integrations: they
cannot check every new functionality on a regular basis, because they do
not find the time to write the corresponding test cases for VCLTestTool,
and they do not find the time to check whether the functionality is
correctly integrated in the master build.


I think we need to

- stop with larger code changes (not only features) much earlier before
the release. We should not plan for finishing the work right before the
feature freeze date, if something that is not critical for the release
is at risk we better move it to the next release *early* (instead of
desperately trying to keep the schedule) to free time and space for
other things that are considered as critical or very important for the
release.


+1


- make sure that all CWS, especially the bigger ones, get integrated as
fast as possible to allow for more real-life testing. This includes that
no such CWS should lie around for weeks because there is still so much
time to test it as the feature freeze is still 2 months away. This will
require reliable arrangements between development and QA.


+1


- reduce the number of bug fixes we put into the micro releases to free
QA resources to get the CWS with larger changes worked on when
development finished the work. This self-limitation will need a lot of
discipline of everybody involved (including myself, I know ;-)).


+1


Ah, and whatever we do, we should write down why we are doing it, so
that we can present it to everybody who blames us for moving his/her
favorite feature to the next release. ;-)


+1

Regards,
  Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Martin Hollmichel

Mathias Bauer wrote:

Ingrid Halama wrote:

  
This is not sufficient. Heavy code restructurings and cleanups are not 
bound to the feature freeze date, 


Perhaps they should? And at least as far as it concerns me they are.
  
yes, I also consider a large amount of new, moved or restructured code 
to be a feature, and had erroneously expected that this is already common 
sense. If all agree we should add this to the Feature Freeze criteria 
(http://wiki.services.openoffice.org/wiki/Feature_freeze)


Martin





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Ingrid Halama

Martin Hollmichel wrote:

Mathias Bauer wrote:

Ingrid Halama wrote:

 
This is not sufficient. Heavy code restructurings and cleanups are 
not bound to the feature freeze date, 

Perhaps they should? And at least as far as it concerns me they are.
  
yes, I also consider large amount or new, move or restructured code as 
a feature and had erroneously the expectation that this is already 
common sense. If all agree we should add this to the Feature Freeze 
criteria (http://wiki.services.openoffice.org/wiki/Feature_freeze)
Two problems here. The worst one is that you cannot control that this 
new rule is applied. Who decides that a code change is too huge to risk 
for the next release in two months or so? You won't count lines, 
will you - that would be stupid. Those who are willing to act carefully 
are already doing so, I am convinced. And those who are not acting 
carefully you cannot really control with this new rule. So introducing 
this new rule will basically change nothing.

The second problem is that sometimes bad bugs, even those detected late 
in the phase, need bigger code changes. In my opinion only very 
experienced developers are able to make serious estimations of whether a 
fix is worth the risk or not. So what to do now? Should we make a rule 
'Let a very experienced developer check your code'? Sometimes I wish for 
that, but I am not sure that it would scale - and how would you organize 
and control such a rule? We have a similar rule for the show stopper 
phase (let your change be reviewed by a second developer), but even that 
rule is violated often, I am convinced.

So I would like to see mandatory automatic tests that detect whether the 
important user scenarios still work properly, whether files are still 
rendered as they should, whether the performance of the office has not 
significantly decreased, etc. We have a lot of tests already, even if 
there is much room for improvement. In principle some of the tests are 
mandatory already, but this rule gets ignored very often.

The good thing is that a violation of this rule could be detected 
relatively easily and thus used as a criterion for nomination.
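The kind of automatic rendering check asked for here can be sketched generically (a hypothetical illustration; OOo's actual tool for this is convwatch, which compares real rendered document output and is not reproduced here - the page data and paths below are invented):

```python
# Sketch: detect rendering regressions by comparing checksums of rendered
# pages against stored reference values. The page byte strings are
# hypothetical stand-ins for real rendered output.
import hashlib

def page_checksum(page_bytes: bytes) -> str:
    return hashlib.sha256(page_bytes).hexdigest()

def check_rendering(rendered_pages, reference_checksums):
    """Return the list of page indices whose rendering changed."""
    return [
        i for i, page in enumerate(rendered_pages)
        if page_checksum(page) != reference_checksums[i]
    ]

reference = [page_checksum(b"page-1-pixels"), page_checksum(b"page-2-pixels")]
# Page 2 renders differently after a code change:
changed = check_rendering([b"page-1-pixels", b"page-2-PIXELS"], reference)
print(changed)  # [1]
```

A non-empty result is exactly the kind of mechanically detectable rule violation that could be used as a nomination criterion.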


Ingrid





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Oliver,

thanks for the data.

Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems schrieb:

IMHO, we do not find critical problems (show stoppers) in DEV builds 
very early, only half of them are found early according to my experience.

Some data about the show stoppers, which I have fixed in the last days:

ISSUE     INTRODUCED IN           FOUND IN
i99822    DEV300m2 (2008-03-12)   OOO310m3 (2009-02-26)

i99876    DEV300m30 (2008-08-25)  OOO310m3

i99665    DEV300m39 (2009-01-16)  OOO310m3

i100043   OOO310m1                OOO310m4 (2009-03-04)

i100014   OOO310m2                OOO310m4

i100132   DEV300m38 (2008-12-22)  OOO310m4

i100035   SRCm248 (2008-02-21)    OOO310m4
This issue is special, because it was a memory problem that by accident 
was not detected. Thus, it should not be counted in this statistic.


Looking at this concrete data, I personally can say that we find more or 
less half of the show stoppers early.


Half of them is, in my opinion, a good rate. But this doesn't mean that
we do not have to improve it. And one point is that features have to be
checked more often in the master, perhaps with automated testing, with
regular manual testing, or with real-life testing. But this costs
resources, and that is the critical point. Most of the QA community is
also part of the L10N community. This means they are working on
translation when OOo runs into a critical phase like code freeze, where
most real-life testing is needed.

So it isn't easy to fix. Therefore I think 50% is a good rate under the
known circumstances.

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Joachim Lingner
As Thorsten pointed out, we are NOT capable of covering the QA for our 
product completely, NOR are we able to extend QA to new features (no time 
for writing new tests, etc.). We also know that this is not because we 
are lazy ...
As a matter of fact, many issues are reported by the community, at least 
the critical ones, which often get promoted to stoppers. IMO, we should 
therefore promote the QA community, so there will be more volunteers 
(who maybe also develop tests), and extend the time span between 
feature/code freeze and the actual release date.


Jochen

Martin Hollmichel schrieb:

Hi,

so far we have got reported almost 40 regressions as stoppers for the 3.1 
release, see query

http://tinyurl.com/cgsm3y .

for 3.0 (http://tinyurl.com/ahkosf) we had 27 of these issues, 
for 2.4 (http://tinyurl.com/c86n3u) we had 23.


we are obviously getting worse, and I would like to know the reasons for 
this. There are too many issues for me to evaluate the root cause of 
every single one, so I would like to ask the project and QA leads to do 
an analysis of the root causes and to come up with suggestions for 
avoiding them in the future.


additionally there might be other ideas or suggestions on how to detect 
and fix those issues earlier in our release process.


 From my perspective one reason for the high amount of regressions is the 
high amount of child workspaces integrated shortly before a feature 
freeze. At the moment the ITeam (the QA representative) does the 
nomination before feature freeze. As an immediate action (for the 
upcoming 3.2 release) from my side I will limit this freedom until 4 
weeks before feature freeze; in the last 4 weeks before feature freeze, 
I or other members of the release status meeting will do the nomination 
of the CWSes for the upcoming release or decide to postpone them to the 
following release.

Martin







Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Andre Schnabel
Hi,

 Original message 
 Von: Ingrid Halama ingrid.hal...@sun.com
...

 So I would like to see mandatory automatic tests that detect whether the 
 important user scenarios still work properly, whether files are still 
 rendered as they should, whether the performance of the office has not 
 significantly decreased, etc. We have a lot of tests already even if 
 there is much room for improvement. In principle some of the tests are 
 mandatory already, but this rule gets ignored very often.


The problem with this rule is that there is only a very limited set of 
people who can follow it. E.g. running automated tests and getting 
reliable results is almost restricted to the Sun QA team in Hamburg at 
the moment. So - no matter what rules we define - for the moment we 
either have to break them or we will delay the integration of CWSes. If 
we delay the integration, we will delay public testing. If we delay 
public testing, we will find critical errors (that cannot be identified 
by automatic testing) even later.

I know, I still have to write a more complete report about automatic 
testing.  :(

But as I suggested in another thread, I did some comparisons with 
automated testing on an OOO310m1 build from Sun and one from a buildbot. 
The good thing is that there are not many differences (the buildbot 
build had about 3 errors and 10 warnings more). 
The bad thing is that I had a total of 190 errors in the release and 
required tests. 

I did not yet have the time to analyze what happened. But these results 
are not usable. (And I would still say I know how to get good results 
from the testtool.)

André
-- 
Psst! Already heard of the new GMX MultiMessenger? It can talk to them all: 
http://www.gmx.net/de/go/multimessenger01




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Ingrid,

Ingrid Halama schrieb:
[...]
So I would like to see mandatory automatic tests that detect whether the 
important user scenarios still work properly, whether files are still 
rendered as they should, whether the performance of the office has not 
significantly decreased, etc. We have a lot of tests already even if 
there is much room for improvement. In principle some of the tests are 
mandatory already, but this rule gets ignored very often.


What do you mean? There are mandatory tests, and each tester in the Sun
QA team runs these tests on a CWS. You can check in QUASTe [1] whether
your CWS is tested with VCLTestTool.

On the other side, the CWS policies [2] allow code changes to be
integrated and approved by code review only. Only the CWS owner and
the QA representative must be different persons. This was introduced to
lower the barrier for external developers. If you think that this is a
reason for lower quality in the product, perhaps this policy has to be
discussed.

Thorsten

[1] : http://quaste.services.openoffice.org/
  Use 'search' for CWSs which are integrated or use the CWS-listbox
  for CWSs which are in work.
[2] : http://wiki.services.openoffice.org/wiki/CWS_Policies




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Mathias et. al.,

 The problem is ...

Seeing many different explanations in this thread, and suggested
solutions ... I wonder if we should collect some data about the concrete
regressions before we start speculating about the larger picture.

Oliver's table with the "introduced in" and "found in" columns was a
good start. I think we should have this for nearly all of the
regressions, ideally together with a root cause.

Ciao
Frank





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems

Hi all,

Thorsten Ziehm wrote:

Hi Mathias,

Mathias Bauer schrieb:

Ingrid Halama wrote:

This is not sufficient. Heavy code restructurings and cleanups are 
not bound to the feature freeze date, 

Perhaps they should? And at least as far as it concerns me they are.

but have a great potential to introduce regressions also. I think the 
show-stopper phase must be extended in relation to the feature-phase 
*and* the normal-bug-fixing-phase.


Furthermore what does it help to simply let different people do the 
nominations while the criteria are not clear? So I would like to 
suggest a criterion: In the last four weeks before the feature freeze 
only those (but all those) CWSes get nominated that have a complete 
set of required tests run successfully. Same for the last four weeks 
before end of normal-bug-fixing-phase. We could start with the tests 
that are there already and develop them further.


The problem is that the usual test runs obviously don't find the bugs
that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should and at the
end you are right, they do not have the time for real-life testing.

But on this last point I want to qualify things a little bit. The QA 
community and the L10N testers find critical problems in DEV builds very 
early. Most of the regressions which were reported in the past days on
the releases list are regressions from the most recent builds. Some of
the issues weren't identified very early by Sun employees, because they
have to look into a lot of issues these days to identify the show
stoppers.



IMHO, we do not find critical problems (show stoppers) in DEV builds 
very early; only half of them are found early, according to my experience.

Some data about the show stoppers, which I have fixed in the last days:

ISSUE   INTRODUCED IN   FOUND IN
i99822  DEV300m2 (2008-03-12)   OOO310m3 (2009-02-26)

i99876  DEV300m30 (2008-08-25)  OOO310m3

i99665  DEV300m39 (2009-01-16)  OOO310m3

i100043 OOO310m1OOO310m4 (2009-03-04)

i100014 OOO310m2OOO310m4

i100132 DEV300m38 (2008-12-22)  OOO310m4

i100035 SRCm248 (2008-02-21)OOO310m4
This issue is special, because it was a memory problem, that by accident 
was not detected. Thus, it should not be counted in this statistic.


Looking at this concrete data, I personally can say that we find more or 
less half of the show stoppers early.
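For what it's worth, the lifetime of these regressions can be computed directly from the milestone dates given in the table above (only the issues where both dates are listed; a small sketch using Python's standard datetime module):

```python
# Compute how long each show stopper from the table above lived in the
# code before it was found, using the milestone dates given there.
from datetime import date

issues = {
    "i99822": (date(2008, 3, 12), date(2009, 2, 26)),
    "i99876": (date(2008, 8, 25), date(2009, 2, 26)),
    "i99665": (date(2009, 1, 16), date(2009, 2, 26)),
    "i100132": (date(2008, 12, 22), date(2009, 3, 4)),
}

for issue, (introduced, found) in issues.items():
    print(f"{issue}: undetected for {(found - introduced).days} days")
```

The spread is large: some of these stoppers were found within weeks, others went undetected for the better part of a year, which matches Oliver's "more or less half" estimate.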



Just my 2 cents,
Oliver.




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Jochen,

Joachim Lingner schrieb:
As Thorsten pointed out, we are NOT capable of covering the QA for our 
product completely, NOR are we able to extend QA to new features (no time 
for writing new tests, etc.). We also know that this is not because we 
are lazy ...
As a matter of fact, many issues are reported by the community, at least 
the critical ones, which often get promoted to stoppers. IMO, we should 
therefore promote the QA community, so there will be more volunteers 
(who maybe also develop tests), and extend the time span between 
feature/code freeze and the actual release date.


There is one critical point. When you extend the time for testing and QA
between feature freeze and release date, you bind the QA community to
one release (code line) - and who should then do QA work on the next
release, on which the developers work and create their CWSes?

I talked very often with Martin about extending the time buffers
between FeatureFreeze, CodeFreeze, Translation Handover ... and
it isn't easy to find a good choice for all teams.

Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Ingrid,

 Two problems here. The worst one is that you cannot control that this 
 new rule is applied. Who decides that a code change is too huge to risk 
 for the next release in two months or so? You won't count lines, 
 will you - that would be stupid. Those who are willing to act carefully 
 are already doing so, I am convinced. And those who are not acting 
 carefully you cannot really control with this new rule. So introducing 
 this new rule will basically change nothing.

I beg to disagree. Of course, as you point out, there cannot be a
definite rule for which change is too big in which release phase. But
alone raising the awareness that large code changes are Bad (TM) after
feature freeze might help.
And if the analysis of the current show stoppers reveals that a
significant amount is caused by late big code changes, this is a good
argument, I'd say.

So, let's not phrase this rule as "if you don't follow it, you'll be
shot". I'd consider it a guideline which every reasonable developer
would usually follow.

Ciao
Frank





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Ingrid Halama

Mathias Bauer wrote:

Ingrid Halama wrote:

  
This is not sufficient. Heavy code restructurings and cleanups are not 
bound to the feature freeze date, 


Perhaps they should? And at least as far as it concerns me they are.

  
but have a great potential to 
introduce regressions also. I think the show-stopper phase must be 
extended in relation to the feature-phase *and* the normal-bug-fixing-phase.


Furthermore what does it help to simply let different people do the 
nominations while the criteria are not clear? So I would like to suggest 
a criterion: In the last four weeks before the feature freeze only those 
(but all those) CWSes get nominated that have a complete set of 
required tests run successfully. Same for the last four weeks before end 
of normal-bug-fixing-phase. We could start with the tests that are there 
already and develop them further.



The problem is that the usual test runs obviously don't find the bugs
  
That is not obvious to me. Too often the mandatory tests haven't been 
run. And if the tests do not find an important problem - hey, then the 
tests should be improved.

that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.
  
I don't agree. Preventing the integration of bugs earlier in the 
production phase, especially before the integration into the master 
trunk, would give us much more freedom. Now we always need to react on 
show stoppers, and react and react, and then the release time line is at 
risk. All that because the bugs are already in the product. If you 
instead detect the bugs before they are integrated into the product, you 
can keep cool, refuse the bad CWS, and then not the release is at risk 
but only the single bad CWS.

I think we need to

- stop with larger code changes (not only features) much earlier before
the release. We should not plan for finishing the work right before the
feature freeze date, if something that is not critical for the release
is at risk we better move it to the next release *early* (instead of
desperately trying to keep the schedule) to free time and space for
other things that are considered as critical or very important for the
release.

- make sure that all CWS, especially the bigger ones, get integrated as
fast as possible to allow for more real-life testing. This includes that
no such CWS should lie around for weeks because there is still so much
time to test it as the feature freeze is still 2 months away. This will
require reliable arrangements between development and QA.

- reduce the number of bug fixes we put into the micro releases to free
QA resources to get the CWS with larger changes worked on when
development finished the work. This self-limitation will need a lot of
discipline of everybody involved (including myself, I know ;-)).

Ah, and whatever we do, we should write down why we are doing it, so
that we can present it to everybody who blames us for moving his/her
favorite feature to the next release. ;-)
  
I am missing an incentive for good behaviour in these plans. There are 
people who do the feature design, who do the development work, who do the 
testing, who create the automatic tests, who do the documentation - and 
after all these people have done their work, and let's assume they have 
done it well and without show stoppers, after all this there comes 
someone else and says: oh no, I do not think that I want to have this 
for this release, there are other things that I want more, and in sum I 
guess it might be too much for the next release? Where is the incentive 
for good behaviour here? There is none; instead there is an incentive to 
push the changes quickly into the product and skip careful testing.
Come on, we will lose the good and careful people if there is no 
incentive for good behaviour.


Ingrid




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Thorsten,


 The problem is a bit more complex. The testers and test script writers
 do not have any time for writing new test cases for new functionality,
 they do not have time to check fixed issues in master, they do not have
 time to check code changes in a CWS as much as they should and at the
 end you are right, they do not have the time for real-life testing.

That statement frightens me - way too many "they do not have time"s for
my taste.

Is there any chance to change this? Or have we already reached the point
where the daily effort to keep QA running on the current (insufficient)
level prevents us from investing the effort to make QA more efficient?

For instance, is it possible that QA does not have time to write new
automated tests because this is such a laborious and time-consuming
task, but we do not have the time/resources to make it an easy and quick
task?

Ciao
Frank





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Ingrid Halama

Thorsten Ziehm wrote:

Hi Ingrid,

Ingrid Halama schrieb:
[...]
So I would like to see mandatory automatic tests that detect whether 
the important user scenarios still work properly, whether files are 
still rendered as they should, whether the performance of the office 
has not significantly decreased, etc. We have a lot of tests already 
even if there is much room for improvement. In principle some of the 
tests are mandatory already, but this rule gets ignored very often.


What do you mean? There are mandatory tests and each tester in Sun QA
team run these tests on a CWS. You can check if your CWS is tested with
VCLTestTool in QUASTe [1].
There are more than the VCLTestTool tests. We have the performance tests, 
the UNO API test and the convwatch test. All of those are the 
responsibility of the developers. I think only convwatch is not mandatory.

Ingrid


On the other side, the CWS policies [2] allow code changes to be
integrated and approved by code review only. Only the CWS owner and
the QA representative must be different persons. This was introduced to
lower the barrier for external developers. If you think that this is a
reason for lower quality in the product, perhaps this policy has to be
discussed.

Thorsten

[1] : http://quaste.services.openoffice.org/
  Use 'search' for CWSs which are integrated or use the CWS-listbox
  for CWSs which are in work.
[2] : http://wiki.services.openoffice.org/wiki/CWS_Policies





--
Sun Microsystems GmbH
Nagelsweg 55, D-20097 Hamburg
Sitz der Gesellschaft: Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Ingrid,

 that now bite us, most of them have been found by users or testers
 *working* with the program. Adding more CWS test runs and so shortening
 the time for real-life testing will not help us but make things worse.
   
 I don't agree. Preventing the integration of bugs earlier in the 
 production phase, especially before the integration into the master trunk, 
 would give us much more freedom. Now we always need to react on show 
 stoppers and react and react, and then the release timeline is at 
 risk. All that because the bugs are already in the product. If you 
 instead detect the bugs before they are integrated into the product, you 
 can keep cool and refuse the bad CWS; then not the release is at risk 
 but only the single bad CWS.

Hmmm ... difficult.

On the one hand, I agree (and this is what you can read in every QA
handbook) that finding bugs earlier reduces the overall costs.

On the other hand, I suppose (! - that's an interesting facet to find
out when analyzing the current show stoppers: found by whom?) that in
fact the majority of problems are found during real-life usage. And
nobody will use a CWS in real life. So, getting the CWS into the MWS
early has its advantages, too.

Ciao
Frank




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,

There are more than the VCLTestTool tests. We have the performance tests 
and the UNO API test and the convwatch test. All of those are the 
responsibility of the developers. I think only convwatch is not mandatory.


it would be really nice to have all these tests usable in a Cygwin 
environment as well, for external contributors


Regards
Max





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Ingrid,

 There are more than the VCLTestTool tests. We have the performance tests 
 and the UNO API test and the convwatch test. All of those are the 
 responsibility of the developers. I think only convwatch is not mandatory.

As far as I know, convwatch is mandatory, too. In theory, at least. In
practice, I doubt anybody is running it, given its reliability.

Which brings me to a favorite topic of mine: We urgently need the
possibility to run all kinds of automated tests (testtool tests,
convwatch tests, UNO API tests, performance tests, complex test cases -
more, anybody?) in an easy way. Currently, this is *a lot* of manual
work, and not remotely reliable (some test infrastructures are
semi-permanently broken, and some tests produce different results on
subsequent runs, which effectively makes them useless).
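Such a "run everything, show red or green" driver does not need much infrastructure. A minimal sketch in Python - the suite names and commands here are placeholders, not the real OOo test tooling:

```python
import subprocess
import sys

# Hypothetical suite registry: name -> command line.  The real commands
# (testtool, convwatch, the UNO API runner, performance tests, ...)
# would go here; harmless stand-ins keep the sketch self-contained.
SUITES = {
    "uno-api": [sys.executable, "-c", "print('uno-api ok')"],
    "performance": [sys.executable, "-c", "print('performance ok')"],
}

def run_suite(command):
    """Run one test suite command; 'green' on exit code 0, 'red' otherwise."""
    result = subprocess.run(command, capture_output=True)
    return "green" if result.returncode == 0 else "red"

def overview(suites=SUITES):
    """Return per-suite red/green statuses plus an overall verdict."""
    statuses = {name: run_suite(cmd) for name, cmd in suites.items()}
    statuses["overall"] = ("green"
                           if all(s == "green" for s in statuses.values())
                           else "red")
    return statuses
```

A real version would plug in the actual test commands and publish the resulting dictionary as an overview page.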

As a consequence, a lot of those tests are not run all the time, and the
bugs they could reveal are found too late.

Ciao
Frank




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Frank,

Frank Schönheit - Sun Microsystems Germany wrote:

Hi Thorsten,

[...]


For instance, is it possible that QA does not have time to write new
automated tests because this is such a laborious and time-consuming
task, but we do not have the time/resources to make it an easy and quick
task?


Writing good test scripts isn't an easy task, you are right. This is the
status for all software products. Writing test code costs more time
than writing other code. Try it with unit tests ;-)

So it's the same for automated testing with VCLTestTool for OOo. But
the problem here is that the QA team lead for an application is
often the same person who has to write the test scripts. They have
to check the new incoming issues, work in iTeams, work on
verifying issues in CWSs ...

When you have the time to concentrate on writing test scripts only,
you can create hundreds of lines of code per day. But the high workload
on these people leads to only hundreds of lines of code per month.

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should 


Maybe it is an idea to change the 'resolved fixed' - 'verified' process? It 
is a waste of time in about 99% of all cases, probably. The developer 
tests the issue before handing over the CWS and then sets it to 
Resolved, so there is a pretty small chance that the issue itself is not 
really fixed. And remember, it will be checked again anyway when the 
issue is set to closed.


This would free time for real-life testing and testing the involved 
area, instead of the specific issue. IMO, this would help to find 
showstoppers a lot earlier.


Best regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,


I'd prefer a button in EIS "run all those tests", resulting in an
overview page, showing you red or green for both the overall status
and every single test. Only then, when you need to manually run a red
test to debug and fix it, you're required to do this on your console -
so, I think this could be the second step.


this would be even better of course :-)

I just thought there is a higher chance of getting support for Cygwin in 
the near term than of having these automated tests in EIS.


Regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Thorsten,

 Writing good test scripts isn't an easy task, you are right. This is the
 status for all software products. Writing test code costs more time
 than writing other code. Try it with unit tests ;-)

I know for sure. Writing complex test cases for my UNO API
implementations usually takes the same time as the implementation
took, or even more. Usually, but not always, I think it's worth it :)

Okay, so let me make this more explicit: I see a ... remote possibility
that our *tooling* for writing tests - namely the testtool - has strong
usability deficiencies, and thus costs too much time fighting the
testtool/infrastructure, time which could be better spent on the actual tests.

I might be wrong on that, since I seldom come in touch with the testtool.
But whenever I do, I find it hard to believe we live in the 21st century ...

Ciao
Frank




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Max,

Maximilian Odendahl wrote:

Hi,


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should 


Maybe it is an idea to change the 'resolved fixed' - 'verified' process? It 
is a waste of time in about 99% of all cases, probably. The developer 
tests the issue before handing over the CWS and then sets it to 
Resolved, so there is a pretty small chance that the issue itself is not 
really fixed. And remember, it will be checked again anyway when the 
issue is set to closed.


Do you know how often a CWS returns to development because of
broken functionality, unfixed issues or process violations? It's
up to 25-30% of all CWSs. You can check this in EIS. The data has been
stable over the past years. :-(

Therefore, in my opinion it isn't good to change the handling of the 
'resolved/fixed' - 'verified' status.


But checking the issues in the master ('verified' -> 'closed') could be
discussed. Here the numbers really are 99%, I think. Nearly all
issues which are fixed in a CWS are fixed in the master too.

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Max,

 I just thought there is a higher chance of getting support for Cygwin in 
 the near term than of having these automated tests in EIS.

As far as I know, there's a group working on this. It would still leave
us with the reliability problem (sometimes a test simply gives you bogus
results, and the solution is to re-run it), but it would be a
tremendous step forward.

Ciao
Frank




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Ingrid,

Ingrid Halama wrote:

Thorsten Ziehm wrote:

Hi Ingrid,

Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether 
the important user scenarios still work properly, whether files are 
still rendered as they should, whether the performance of the office 
has not significantly decreased, ... We have a lot of tests already 
even if there is much room for improvement. In principle some of the 
tests are mandatory already, but this rule gets ignored very often.


What do you mean? There are mandatory tests and each tester in Sun QA
team runs these tests on a CWS. You can check whether your CWS has been tested with
VCLTestTool in QUASTe [1].
There are more than the VCLTestTool tests. We have the performance tests 
and the UNO API test and the convwatch test. All of those are the 
responsibility of the developers. I think only convwatch is not mandatory.

Ingrid


OK, you are right. From my perspective I often look only at the VCLTestTool
tests instead of the whole stack of tools we have.

Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,


Do you know how often a CWS returns to development because of
broken functionality, unfixed issues or process violations? 


Of course, with regard to process violations nothing would change. I am 
talking about e.g. crashing issues. If the developer tried it and it does 
not crash anymore, QA should not have to test the scenario again and 
waste time reproducing the issue (and again when closing the issue).



But checking the issues in the master ('verified' -> 'closed') could be
discussed. Here the numbers really are 99%, I think. Nearly all
issues which are fixed in a CWS are fixed in the master too.


So I guess we have an agreement to reduce some of QA's workload in this 
Resolved-Verified-Closed chain. Exactly where in the chain would need 
to be discussed some more, I guess.


Best regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Rich

On 2009.03.13. 12:08, Frank Schönheit - Sun Microsystems Germany wrote:

Hi Ingrid,


that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.
  
I don't agree. Preventing the integration of bugs earlier in the 
production phase, especially before the integration into the master trunk, 
would give us much more freedom. Now we always need to react on show 
stoppers and react and react, and then the release timeline is at 
risk. All that because the bugs are already in the product. If you 
instead detect the bugs before they are integrated into the product, you 
can keep cool and refuse the bad CWS; then not the release is at risk 
but only the single bad CWS.


Hmmm ... difficult.

On the one hand, I agree (and this is what you can read in every QA
handbook) that finding bugs earlier reduces the overall costs.

On the other hand, I suppose (! - that's an interesting facet to find
out when analyzing the current show stoppers: found by whom?) that in
fact the majority of problems are found during real-life usage. And
nobody will use a CWS in real life. So, getting the CWS into the MWS
early has its advantages, too.


seeing as my bug made the list, i'll chime in.
i've reported quite some bugs, the absolute majority being found in real 
life usage. but i'm not a casual user, as i run dev snapshots most of 
the time, which increases the chances of stumbling upon bugs.


i'd like to think i'm not the only one who does that ;), so i think this 
is the group that would find most of the stoppers in real life scenarios 
(the ones that slipped past automated testing). an important factor would be 
to get as many users (and developers!) as possible doing the same.
i think you have already guessed where i'm heading - stability and 
usability (as in "fit for purpose") of dev snapshots. if a dev snapshot 
has a critical problem that prevents me from using it, i'll revert to the 
last stable version, and, quite likely, will stay with it for a while.
that means i won't find some other bug that will prevent somebody else 
from using a dev build and finding yet another bug, and so on ;)


many opensource projects tend to keep trunk relatively stable to 
increase the proportion of users who stay on trunk, and use trunk 
themselves. thus breakages are discovered sooner.


summary - while "release early, release often" is very important, stable 
dev snapshots are as important.



Ciao
Frank

--
 Rich




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Max,

 Do you know how often a CWS returns to development because of
 broken functionality, unfixed issues or process violations? 
 
 Of course, with regard to process violations nothing would change. I am 
 talking about e.g. crashing issues. If the developer tried it and it does 
 not crash anymore, QA should not have to test the scenario again and 
 waste time reproducing the issue (and again when closing the issue).

Difficult to draw the line: Which issues need verification, which don't?

Also, having seen a lot of misunderstandings (Oh! I thought you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the best place for this verification, I'd say.

Also, IMO good QA engineers tend to not only blindly verify the concrete
issue is fixed, but think about what they saw and did. Sometimes, this
leads to additional issues, or discussions whether the new behaviour is
really intended and Good (TM), and so on. At least this is my experience
with DBA's QA, and I would not like to miss that, since it finally also
improves the product.

Ciao
Frank




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Christian Lohmaier
Hi Thorsten,

On Fri, Mar 13, 2009 at 11:23 AM, Thorsten Ziehm thorsten.zi...@sun.com wrote:
 [...]
 But checking the issues in the master ('verified' -> 'closed') could be
 discussed. Here the numbers really are 99%, I think. Nearly all
 issues which are fixed in a CWS are fixed in the master too.

Maybe in the CVS days. Now with SVN there have been a couple of failed
integrations, and quite a number of changes that were reverted by other
CWSs.
So I don't have trust in that number. And this makes it really hard to
check when your fix was integrated in mXX but later got reverted by the
integration of another CWS in mXX+4.

ciao
Christian




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Rich,

 summary - while release early, release often is very important, stable 
 dev snapshots are as important.

Yes, but how to reach that? In theory, trunk is always stable, since
every CWS has undergone tests (before integration) which ensure that it
doesn't break anything. Okay, enough laughing.

Finally, that's exactly the problem here: Too many serious bugs slip
Dev/QA's attention in the CWS, and are found on trunk only. If we fix
that, trunk will automatically be much more stable. Easy to say, hard to do.

Ciao
Frank




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,


Also, having seen a lot of misunderstandings (Oh! I thought you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the best place for this verification, I'd say.


yes, this is true. So would you say we could skip the step from 
verified to closed, which does this verification again?


Regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Max,

 yes, this is true. So would you say we could skip the step from 
 verified to closed, which does this verification again?

I'd say this gives the least pain/loss.

(Though my experience with dba31g, whose issues were not fixed at all in
the milestone which the CWS was integrated into, makes me careful
here. On the other hand, this was not discovered by the 'verify in
master and close' step, so ...)

Ciao
Frank




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Christian,

 Maybe in the CVS days. Now with SVN there have been a couple of failed
 integrations, and quite a number of changes that were reverted by other
 CWSs.

Using the verified-closed step to find broken
tooling/SVN/integrations/builds sounds weird, doesn't it? So, this
shouldn't be a permanent argument (though it might be a good one at the
moment), but only hold until the root causes are fixed.

 So I don't have trust in that number. And this makes it really hard to
 check when your fix was integrated in mXX but later got reverted by the
 integration of another CWS in mXX+4.

You can never prevent that. If you verify and close the issue in mXX+2, you
won't find the reversal in mXX+4 anyway.

Ciao
Frank




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,

yes, this is true. So would you say we could skip the step from 
verified to closed, which does this verification again?


I'd say this gives the least pain/loss.


while freeing QA time for other things at the same time.

So maybe this idea can be discussed by the QA Leads as a possible change 
of process for the future?


Regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mechtilde
Hello,

Thorsten Ziehm wrote:
 Hi Max,
 
 Maximilian Odendahl wrote:
 Hi,


 
 Do you know how often a CWS returns to development because of
 broken functionality, unfixed issues or process violations? It's
 up to 25-30% of all CWSs. You can check this in EIS. The data has been
 stable over the past years. :-(

Can you tell me the path, i.e. how I can find this information in EIS?

Regards

Mechtilde


-- 
Dipl. Ing. Mechtilde Stehmann
## http://de.openoffice.org
## Ansprechpartnerin für die deutschsprachige QA
## Freie Office-Suite für Linux, Mac, Windows, Solaris
## Meine Seite http://www.mechtilde.de
## PGP encryption welcome! Key-ID: 0x53B3892B





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Max,

Maximilian Odendahl wrote:

Hi,


Also, having seen a lot of misunderstandings (Oh! I thought you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the best place for this verification, I'd say.


yes, this is true. So would you say we could skip the step from 
verified to closed, which does this verification again?


We cannot free up time here anymore. It isn't mandatory anymore for the
Sun QA team to check the fixes in the master. So we skipped this nearly one
year ago - but we didn't change the policy!

But we tried to organize QA issue-hunting days where these issues are
addressed. With more or less success :-(

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello,

Thorsten Ziehm wrote:

Hi Max,

Maximilian Odendahl wrote:

Hi,




Do you know how often a CWS returns to development because of
broken functionality, unfixed issues or process violations? It's
up to 25-30% of all CWSs. You can check this in EIS. The data has been
stable over the past years. :-(


Can you tell me the path, i.e. how I can find this information in EIS?


Childworkspace / Search
When you have searched for CWSs with this query, you can find a button 'status 
change statistics' at the bottom of the results. On this page you
can see how often a CWS toggled between its states.
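What such a statistics page computes can be illustrated in a few lines of Python. The state names loosely follow the CWS workflow, but the histories below are made-up sample data, not real EIS records:

```python
# Count how many CWSs ever fell back from "ready for QA" to "new",
# i.e. were returned to development at least once.

def returned_to_development(history):
    """True if the CWS went from 'ready for QA' back to 'new' at least once."""
    return any(prev == "ready for QA" and curr == "new"
               for prev, curr in zip(history, history[1:]))

def return_rate(histories):
    """Fraction of CWSs that were returned to development."""
    returned = sum(returned_to_development(h) for h in histories.values())
    return returned / len(histories)

# Illustrative sample data: two of four CWSs bounced back once.
histories = {
    "cws_a": ["new", "ready for QA", "new", "ready for QA", "approved"],
    "cws_b": ["new", "ready for QA", "approved", "integrated"],
    "cws_c": ["new", "ready for QA", "approved", "integrated"],
    "cws_d": ["new", "ready for QA", "new", "ready for QA", "approved"],
}
```

With this sample data the return rate is 50%; the real EIS statistics cited above hover at 25-30%.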

Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm
 all features have
the same priority, or they all have no priority.

My résumé is that we are as good/bad as in the past. Perhaps
this time the release of OOo 3.0.1 and the OOo 3.1 feature freeze
in parallel, and the change to Subversion (SCM), made it a little bit
worse.

We have to improve many things to get better. But it isn't one
single task we can address, I think.

Thorsten


 Original Message 
Subject: [dev] amount of stopper / regressions for 3.1 release
Date: 11 Mar 2009 15:56:11 +0100
From: Martin Hollmichel martin.hollmic...@sun.com
Reply-To: dev@openoffice.org
Organization: StarOffice / Sun Microsystems, Inc.
To: Mathias Bauer mathias.ba...@sun.com, dev@openoffice.org
Newsgroups: openoffice.dev

Hi,

so far we have got almost 40 regressions reported as stoppers for the 3.1
release, see query
http://tinyurl.com/cgsm3y .

For 3.0 (http://tinyurl.com/ahkosf) we had 27 of these issues,
for 2.4 (http://tinyurl.com/c86n3u) we had 23.

We are obviously getting worse, and I would like to know the
reasons for this. There are too many issues for me to evaluate the root
cause of every single issue, so I would like to ask the project and QA
leads to do an analysis of the root causes and to come up with suggestions
for avoiding them in the future.

Additionally, there might be other ideas or suggestions on how to detect
and fix those issues earlier in our release process.

From my perspective, one reason for the high number of regressions is the
high number of child workspaces integrated shortly before a feature
freeze. At the moment the iTeam (the QA representative) does the
nomination before feature freeze. As an immediate action (for the
upcoming 3.2 release) from my side, I will limit this freedom until 4
weeks before feature freeze; in the last 4 weeks before feature freeze,
I or other members of the release status meeting will do the
nomination of the CWSs for the upcoming release, or decide to postpone
them to the following release.

Martin




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Rich

On 2009.03.13. 12:37, Frank Schönheit - Sun Microsystems Germany wrote:

Hi Rich,

summary - while "release early, release often" is very important, stable 
dev snapshots are as important.


Yes, but how to reach that? In theory, trunk is always stable, since
every CWS has undergone tests (before integration) which ensure that it
doesn't break anything. Okay, enough laughing.


haha. almost got me there :)


Finally, that's exactly the problem here: Too many serious bugs slip
Dev/QA's attention in the CWS, and are found on trunk only. If we fix
that, trunk will automatically be much more stable. Easy to say, hard to do.


when i wrote serious problems i meant something similar to a recent 
issue in a dev build where simply accessing tools-options crashed oo.org.


i believe that reducing the amount of such simple-to-find problems would 
be a notable step forward, and, what's important - these tests should be 
relatively easy to automate.
i'll admit ignorance on the current automated testing processes, but stuff 
that would walk through all menu options (with some advanced custom 
paths), a repository of testdocs in various formats, etc. etc.
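such a "walk a repository of testdocs" smoke test can be sketched independently of the office itself. the document loader is injected - in a real setup it might wrap PyUNO's loadComponentFromURL, which is an assumption here, not the project's actual harness - so that a crash or load error on any document is simply recorded:

```python
from pathlib import Path

def smoke_test_documents(doc_dir, load_document):
    """Try to load every test document under doc_dir.

    load_document is injected so the harness itself needs no running
    office; in practice it would wrap the real document loader and
    raise on a load failure or crash.  Returns (passed, failed) lists
    of file names.
    """
    passed, failed = [], []
    for doc in sorted(Path(doc_dir).rglob("*")):
        if not doc.is_file():
            continue
        try:
            load_document(doc)
            passed.append(doc.name)
        except Exception:
            failed.append(doc.name)
    return passed, failed
```

the same skeleton works for other document-level checks (rendering comparison, round-trip save/load) by swapping the injected callable.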


this, of course, correlates with the testing process discussions in this 
thread, which mean more work creating tests - but i'm trying to highlight 
simple problems that somehow slip past this process. i mean, if a crash on 
opening options slipped past it, there aren't enough of these simple 
checks, which, i hope, would take less developer (or even qa) time to 
develop.


if test creation is too complex and there are ways to simplify it, that 
could be made a priority. such a change would improve long term quality 
and reduce workload, so it seems to be worth concentrating on.


obviously, that's just a small improvement, seen through my lens, 
which should be viewed in context with other things already mentioned.



Ciao
Frank

--
 Rich




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread ksp

Joachim Lingner wrote:
As a matter of fact, many issues are reported by the community, at 
least the critical ones, which often get promoted to stoppers. IMO, we 
should therefore promote the QA community, so there will be more 
volunteers (who maybe also develop tests), and extend the time span 
between feature/code freeze and the actual release date.


Jochen


Promoting QA in the community is not enough - you have to retain people. In 
order to retain people, the project needs to fix their issues, which inspires 
people to use milestones in daily work.
Many developers do not know how great it feels when an issue you are 
interested in gets fixed relatively quickly. Developers also do not know 
that it sucks when the fix for your issue is delayed and delayed.
I think many users would rather have faster fixes than a more stable 
milestone (you can always go back to the previous release/milestone).


Just my two copecks.
WBR,
KP.




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hello KP,

 Promoting QA in the community is not enough - you have to retain people. In 
 order to retain people, the project needs to fix their issues, which inspires 
 people to use milestones in daily work.
 Many developers do not know how great it feels when an issue you are 
 interested in gets fixed relatively quickly. Developers also do not know 
 that it sucks when the fix for your issue is delayed and delayed.

I'd claim a lot of developers, if not most, know how this feels - in
both ways. It's just that we have many more issues than developers, and
developers have only a pretty small amount of time for fixing issues to
retain people. In combination, this might look as if developers do
not care about the issues reported by other people, but it's just not as
easy as that.

 I think many users would rather have faster fixes than a more stable 
 milestone (you can always go back to a previous release/milestone).

Uhm, I doubt that. What you're saying here is that we should sacrifice
quality for more fixes. I believe this would be bad for OOo's overall
reputation.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mathias Bauer
Ingrid Halama wrote:
 Martin Hollmichel wrote:
  Mathias Bauer wrote:
   Ingrid Halama wrote:
    This is not sufficient. Heavy code restructurings and cleanups are
    not bound to the feature freeze date,
   Perhaps they should? And at least as far as it concerns me they are.
  yes, I also consider a large amount of new, moved or restructured code as
  a feature and had erroneously the expectation that this is already
  common sense. If all agree we should add this to the Feature Freeze
  criteria (http://wiki.services.openoffice.org/wiki/Feature_freeze)
 Two problems here. The worst one is that you cannot control that this
 new rule is applied.
Well, of course you can control that - at least after the fact if that
breaks. :-)

Of course you can't enforce this rule, as is true for most rules I know.
Bank robbery is strictly forbidden, but people still do it. But be sure,
if it wasn't forbidden, many more people would do it. It's common sense
that having rules that work towards the goal is good, even if you can't
always enforce them.

 Who decides that a code change is too huge to risk 
 it for the next release in two months or so? You won't count lines, 
 will you - that would be stupid. Those who are willing to act carefully 
 are doing that already, I am convinced. And those who are not acting 
 carefully you cannot really control with this new rule. So introducing 
 this new rule will basically change nothing.

This is a matter of how teams work. In general I would give everybody
the credit of being able to judge whether his work imposes a huge risk
on the product or not. If a team member repeatedly shows him/herself
unable to do that, the team should address that in a way the team sees fit.

The idea of being bound to (trapped in?) a rule set that can and must
be enforced all the time is not my understanding of how we should work
together. Many problems can be solved or at least reduced by appealing
to the good forces in each of us, our skills, our will to do the right
thing and our pride in our work. Sometimes it needs some reminders,
and rules come in handy here. But you can never enforce all the rules you
make in a free community - perhaps you can do that in prison. The
community as a whole must take care of the value of its rules, each member
by following them and all together by reminding others and taking action
in cases where the rules have been violated.

 The second problem is that sometimes bad bugs detected even later in the 
 phase need bigger code changes. In my opinion only very experienced 
 developers are able to make serious estimations whether the fix is worth 
 the risk or not. So what to do now? Should we make a rule 'Let a very 
 experienced developer check your code'? Sometimes I wish that, but I am 
 not sure that it would scale and - how would you organize and control 
 such a rule? We have a similar rule for the show stopper phase (let your 
 change be reviewed by a second developer), but even that rule is 
 violated often, I am convinced.

You tell us that we don't live in a perfect world and that indeed some
rules don't always apply. Yes, true, but at least for me that isn't
really new. Of course we must do a risk assessment in some cases, which -
due to other measures we have taken - hopefully will happen less often
in the future. And risk assessment itself is risky (developers like
recursion ;-)).

But in case we are unsure, we could move a bugfix that looks too risky
but isn't a showstopper to the next release. Instead of asking if
someone can judge whether a fix is too risky, let's put it the other way
around: something is too risky if you can't say with good confidence
that it's not. Does that need an experienced developer? I would expect
that it's easier for the less experienced developer to answer that question!

But anyway, we will fail here at times, sure. No problem for me, if the
number of failures drops to an acceptable level. We don't need a panacea
and I doubt that we will ever find one.

 So I would like to see mandatory automatic tests that detect whether the 
 important user scenarios still work properly, whether files are still 
 rendered as they should, whether the performance of the office has not 
 significantly decreased, ... We have a lot of tests already, even if 
 there is much room for improvement. In principle some of the tests are 
 mandatory already, but this rule gets ignored very often.
 The good thing is that a violation of this rule could be detected 
 relatively easily and thus used as a criterion for nomination.

More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.

Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread ksp

Frank Schönheit - Sun Microsystems Germany wrote:
I think many users would rather have faster fixes than a more stable 
milestone (you can always go back to a previous release/milestone).



Uhm, I doubt that. What you're saying here is that we should sacrifice
quality for more fixes. I believe this would be bad for OOo's overall
reputation.
  
What I mean to say is that we could sacrifice the quality of snapshots to 
bring in features faster and to motivate QA volunteers to test in real 
life (fast-paced development is yet another usage motivator). Besides, 
it is questionable what is worse for our reputation - having 2-5 year old 
usability defects and bugs, or regressions.


WBR,
K. Palagin.




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Ingrid Halama

Mathias Bauer wrote:
 Ingrid Halama wrote:
  Martin Hollmichel wrote:
   Mathias Bauer wrote:
    Ingrid Halama wrote:
     This is not sufficient. Heavy code restructurings and cleanups are
     not bound to the feature freeze date,
    Perhaps they should? And at least as far as it concerns me they are.
   yes, I also consider a large amount of new, moved or restructured code
   as a feature and had erroneously the expectation that this is already
   common sense. If all agree we should add this to the Feature Freeze
   criteria (http://wiki.services.openoffice.org/wiki/Feature_freeze)
  Two problems here. The worst one is that you cannot control that this
  new rule is applied.

 Well, of course you can control that - at least after the fact if that
 breaks. :-)

 Of course you can't enforce this rule, as is true for most rules I know.
 Bank robbery is strictly forbidden, but people still do it. But be sure,
 if it wasn't forbidden, many more people would do it. It's common sense
 that having rules that work towards the goal is good, even if you can't
 always enforce them.

  Who decides that a code change is too huge to risk
  it for the next release in two months or so? You won't count lines,
  will you - that would be stupid. Those who are willing to act carefully
  are doing that already, I am convinced. And those who are not acting
  carefully you cannot really control with this new rule. So introducing
  this new rule will basically change nothing.

 This is a matter of how teams work. In general I would give everybody
 the credit of being able to judge whether his work imposes a huge risk
 on the product or not.

Doesn't the current situation show that this is absolutely not the case?

If a team member repeatedly showed him/herself as
unable to do that, the team should address that in a way the team sees fit.
  
Hm, I would prefer to give the team member a chance to avoid his 
repeated failures, and to allow and ask him to check his changes himself.

Oh yes - that could be done by automatic tests - how cool!

The idea of being bound to (trapped in?) a rule set that can and must
be enforced all the time is not my understanding of how we should work
together. Many problems can be solved or at least reduced by appealing
to the good forces in each of us, our skills, our will to do the right
thing and our pride in our work. Sometimes it needs some reminders,
  
The 'careful forces' are not very strong at the moment. And I doubt that 
some nice reminders will bring a significant change in behavior here.
But no problem: if we significantly enlarge the stopper phase, we can 
live with the current behavior as well.



and rules come in handy here. But you can never enforce all the rules you
make in a free community - perhaps you can do that in prison. The
community as a whole must take care of the value of its rules, each member
by following them and all together by reminding others and taking action
in cases where the rules have been violated.
  
What are those actions that are currently taken if someone has brought 
too much risk to the master?


  
The second problem is that sometimes bad bugs detected even later in the 
phase need bigger code changes. In my opinion only very experienced 
developers are able to make serious estimations whether the fix is worth 
the risk or not. So what to do now? Should we make a rule 'Let a very 
experienced developer check your code'? Sometimes I wish that, but I am 
not sure that it would scale and - how would you organize and control 
such a rule? We have a similar rule for the show stopper phase (let your 
change be reviewed by a second developer), but even that rule is 
violated often, I am convinced.



You tell us that we don't live in a perfect world and that indeed some
rules don't always apply. Yes, true, but at least for me that isn't
really new. Of course we must do a risk assessment in some cases, which -
due to other measures we have taken - hopefully will happen less often
in the future. And risk assessment itself is risky (developers like
recursion ;-)).

But in case we are unsure, we could move a bugfix that looks too risky
but isn't a showstopper to the next release. Instead of asking if
  
So who is 'we' in this case? Is it the developer and tester who know 
their own module best? Or is it some board of extra-privileged people 
far away from the concrete bug?
If you believe in the good forces within all of us, then give all of us 
the freedom to decide whether fixes go in!
If you don't believe in the good forces, then there must be clear 
criteria for why and when fixes or features will not make it into the release.
Anything else has the smell of an arbitrary regime - or is the English term 
'despotism'? Sorry for not being a native speaker.

someone can judge whether a fix is too risky, let's put it the other way
around: something is too risky if you can't say with good confidence
that it's not. Does that need an experienced developer? 

Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer wrote:


More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.


Yes, more testing on the Master is welcome, that is true. But most testing
must be done on the CWS. When broken code is in the master code line, it
takes too much time to fix it. And then you cannot do quality assurance.
You can do testing, but that has nothing to do with upholding a quality
standard!

The time to master isn't a problem currently, I think. A general bugfix 
CWS can be 'approved by QA' in 2 days. But when the master is broken,
you do not know whether the bug is in the CWS or in the master. It takes
longer to check the problems, and this is what we have now. Reducing the
time to master will come when the general quality of the master and the
CWSs is better.

So more testing on CWS is also welcome!

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mechtilde
Hello,

Thorsten Ziehm wrote:
 Hi Mathias,
 
 Mathias Bauer wrote:
 
 More testing on the master(!) would be very welcome. But on the CWS?
 This will make the time to master even longer and then again we are in
 the vicious cycle I explained in my first posting in this thread.
 
 Yes, more testing on the Master is welcome, that is true. But most testing
 must be done on the CWS. When broken code is in the master code line, it
 takes too much time to fix it. And then you cannot do quality assurance.
 You can do testing, but that has nothing to do with upholding a quality
 standard!
 
 The time to master isn't a problem currently, I think. A general bugfix
 CWS can be 'approved by QA' in 2 days. But when the master is broken,
 you do not know whether the bug is in the CWS or in the master. It takes
 longer to check the problems, and this is what we have now. Reducing the
 time to master will come when the general quality of the master and the
 CWSs is better.
 
 So more testing on CWS is also welcome!

Yes, full ACK to the last sentence.
And this is not only a task for the Sun people. The persons who are
interested in a CWS must be able to test it - and this also applies if
they aren't able to build OOo on their own. At the moment this is nearly
impossible for people outside of Sun.

The same applies to external CWSes.

If we want to discuss this in more detail, we can do it in a separate thread.

Kind regards


Mechtilde

-- 
Dipl. Ing. Mechtilde Stehmann
## http://de.openoffice.org
## Contact person for the German-language QA
## Free office suite for Linux, Mac, Windows, Solaris
## My page: http://www.mechtilde.de
## PGP encryption welcome! Key-ID: 0x53B3892B





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Thorsten,

 The time to master isn't a problem currently, I think.

That's not remotely my experience.

See dba31g
(http://eis.services.openoffice.org/EIS2/cws.ShowCWS?Id=7708&OpenOnly=false&Section=History)
for a recent example of a CWS which needed 36 days from ready for QA
to integrated state (and add a few more days for the milestone to be
finished).

A few more?
dba31a: 26 days
dba31b: 42 days
dba31e: 26 days
dba31f: 13 days
dba31h: 23 days
mysql1: 17 days (and that one was really small)
rptfix04: 9 days (though this number is misleading for other reasons)

dba32a is currently in QA - for 9 days already (which admittedly is also
somewhat misleading, since a regression was fixed meanwhile without
resetting the CWS to new).

Okay, there were also:
fixdbares: 2 days
dba31i: 7 days
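Taken together, the day counts listed above work out to a mean of roughly 20 days from "ready for QA" to integration. A quick sketch of that arithmetic (treating all ten completed CWSes equally, including the ones flagged above as misleading):

```java
import java.util.Arrays;

public class TimeToMaster {
    public static void main(String[] args) {
        // Durations in days, copied from the list above (dba31g, dba31a,
        // dba31b, dba31e, dba31f, dba31h, mysql1, rptfix04, fixdbares, dba31i).
        int[] days = {36, 26, 42, 26, 13, 23, 17, 9, 2, 7};

        double mean = Arrays.stream(days).average().orElse(0);

        int[] sorted = days.clone();
        Arrays.sort(sorted);
        // Even number of samples: median is the mean of the two middle values.
        double median = (sorted[4] + sorted[5]) / 2.0;

        System.out.println("mean: " + mean + " days");    // prints mean: 20.1 days
        System.out.println("median: " + median + " days"); // prints median: 20.0 days
    }
}
```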


Don't get me wrong, that's by no means the responsibility of QA alone.
Especially the first OOO310 milestones had a lot of delay between CWSes
being approved and being integrated.

But: time-to-master *is* a problem. At least for the majority of CWSes
in which I participated over the last months.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Mechtilde,


 So more testing on CWS is also welcome!
 
 Yes Full ACK to last sentence.
 And this is not only a task for the Sun people. The persons who are
 interested at a CWS must be able to test a CWS. And this also if they
 aren't able to build OOo on their own.

I think especially we in the DBA team have had good experiences with
providing CWS snapshots at qa-upload. I am really glad that our
community (including you!) does intensive CWS QA, and this way has also
found bugs pretty early.

However, as good as this has worked out for us, I am unsure whether it
would scale. I suppose if *every* developer uploaded his/her CWS,
then people would pick only a few, and the majority would still be
untested.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mathias Bauer
Thorsten Ziehm wrote:

 Hi Mathias,
 
 Mathias Bauer wrote:
 
 More testing on the master(!) would be very welcome. But on the CWS?
 This will make the time to master even longer and then again we are in
 the vicious cycle I explained in my first posting in this thread.
 
 Yes, more testing on the Master is welcome, that is true. But most testing
 must be done on the CWS. When broken code is in the master code line, it
 takes too much time to fix it. And then you cannot do quality assurance.
 You can do testing, but that has nothing to do with upholding a quality
 standard!

I don't see a lot of sense in making tests mandatory just because we
have them. If a test can probably help to find problems in areas where
we know that we have them, fine. So when tests are defined, it's
necessary to see which problems they can catch and whether that's what
we need.

I had a look at the regressions that I can judge - some of them might
have been found with convwatch, but for most of them I have serious doubts
that any test we have would have found them. Working with the product is
still what's necessary.

So until now I fail to see which tests could help us further without
burning a lot of time.

There's one exception. I'm a big fan of convwatch, and so I have often asked
for a *reliable* test environment that is easily configurable for
arbitrary documents. So I would still welcome it if that could be accelerated.
But even these tests shouldn't be mandatory for every CWS.
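For readers unfamiliar with it, convwatch-style testing renders a document on a reference build and on the candidate build, then compares the resulting page images. The sketch below only illustrates that comparison idea in plain Java; `diffRatio` and the exact-pixel comparison are hypothetical, not convwatch's actual API:

```java
import java.awt.image.BufferedImage;

public class RenderDiff {
    /** Fraction of pixels that differ between two renderings of a page. */
    static double diffRatio(BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return 1.0; // a changed page geometry counts as a full mismatch
        }
        long differing = 0;
        long total = (long) a.getWidth() * a.getHeight();
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    differing++;
                }
            }
        }
        return (double) differing / total;
    }

    public static void main(String[] args) {
        // Stand-ins for a reference and a candidate rendering; the real tool
        // would obtain these by printing the same document on both builds.
        BufferedImage ref = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        BufferedImage cand = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        cand.setRGB(0, 0, 0xFFFFFF); // one pixel changed out of 16
        System.out.println(diffRatio(ref, cand)); // prints 0.0625
    }
}
```

A regression report would then flag any document whose ratio exceeds a small threshold, instead of requiring bit-exact output.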

 The time to master isn't a problem currently, I think. A general bugfix 
 CWS can be 'approved by QA' in 2 days. 
But that is not the time to master. It can still take a week or so on
average until a CWS is available in the master (even slower around feature
freeze, faster in the last phase of a release cycle). If we had tests
that let me believe they could find more regressions early, instead
of just holding CWSes back from approval and burning CPU time, I would
welcome them.

But before that we should at least be able to have these tests run on
several masters without failure. This is not true for many tests now (as
a quick glance at QUASTE shows). These may be bugs in the master (that
have to be fixed), bugs in the tests (that also have to be fixed) or -
and that would be the most unfortunate reason - something else.

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hello Kirill,

 Uhm, I doubt that. What you're saying here is that we should sacrifice
 quality for more fixes. I believe this would be bad for OOo's overall
 reputation.
   
 What I mean to say is that we could sacrifice the quality of snapshots to 
 bring in features faster and to motivate QA volunteers to test in real 
 life (fast-paced development is yet another usage motivator). Besides, 
 it is questionable what is worse for our reputation - having 2-5 year old 
 usability defects and bugs, or regressions.

But bringing CWSes into the master faster would not increase the
developers' output. In other words, we would not be able to fix even one
more bug that way. On the contrary, I would assume that if we reduced
snapshot quality in the way you propose, then developers would be
able to fix *fewer* issues, since they would need to do more regression
fixing, which gets more expensive the later in the release cycle it
happens.

Ciao
Frank


-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Regina Henschel

Hi all,

Thorsten Ziehm wrote:

Hi Mathias,

Mathias Bauer wrote:


More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.


Yes, more testing on the Master is welcome, that is true. But most testing
must be done on the CWS. When broken code is in the master code line, it
takes too much time to fix it. And then you cannot do quality assurance.
You can do testing, but that has nothing to do with upholding a quality
standard!

The time to master isn't a problem currently, I think. A general bugfix 
CWS can be 'approved by QA' in 2 days. But when the master is broken,
you do not know whether the bug is in the CWS or in the master. It takes
longer to check the problems, and this is what we have now. Reducing the
time to master will come when the general quality of the master and the
CWSs is better.

So more testing on CWS is also welcome!



I second the idea of more CWS testing. Remember the new chart module: 
there we had a CWS for testing, and a lot of bugs were found before the 
CWS was integrated into the master. There is no need to spread CWS builds 
via the mirror network; a single server which holds the builds for Windows, 
Linux and Mac is sufficient. You can free the space after the CWS is 
integrated.

For most of the testers outside, it is impossible to build CWSs on 
their own, but I am sure many of them would test a CWS. They will 
not test all of them, but those which concern the area of OOo which they 
often use, or the CWS which contains a fix for their feature wish. In 
addition, to help people decide whether they want to test a CWS, it 
is necessary to give a good, not too short description of what purpose a 
CWS has.


kind regards
Regina




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Mathias,

 I don't see a lot of sense in making tests mandatory just because we
 have them. If a test can probably help to find problems in areas where
 we know that we have them, fine. So when tests are defined, it's
 necessary to see which problems they can catch and whether that's what
 we need.
 
 I had a look at the regressions that I can judge - some of them might
 have been found with convwatch, but for most of them I have serious doubts
 that any test we have would have found them. Working with the product is
 still what's necessary.
 
 So until now I fail to see which tests could help us further without
 burning a lot of time.

Quite true ...

A personal goal I set for the 3.1 release was to write complex test
cases for (most of) the show stoppers found in the DBA project. Since we
regularly run our complex tests on all CWSes, at least those concrete
stoppers would not have much chance of re-appearing. (And as is usually
the case with complex test cases, they also cover related areas pretty
well.)
Unfortunately, we haven't had too many 3.1 stoppers so far :), so I am
not sure whether it will help. But it's worth a try, /me thinks.
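The idea of pinning each former stopper with its own re-runnable check could be sketched like this. This is a hypothetical harness, not the qadevOOo runner, and the issue IDs and checks are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

public class StopperRegressionSuite {
    // Each entry re-checks one former show stopper, keyed by its issue ID.
    private final Map<String, BooleanSupplier> checks = new LinkedHashMap<>();

    void register(String issueId, BooleanSupplier check) {
        checks.put(issueId, check);
    }

    /** Runs every check, reporting each result; returns the failure count. */
    int run() {
        int failures = 0;
        for (Map.Entry<String, BooleanSupplier> e : checks.entrySet()) {
            boolean ok;
            try {
                ok = e.getValue().getAsBoolean();
            } catch (RuntimeException ex) {
                ok = false; // a crashing scenario counts as a regression
            }
            System.out.println((ok ? "PASS " : "FAIL ") + e.getKey());
            if (!ok) {
                failures++;
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        StopperRegressionSuite suite = new StopperRegressionSuite();
        // Placeholder checks standing in for real complex scenario tests.
        suite.register("i99999-form-wizard-crash", () -> true);
        suite.register("i88888-query-designer-sort", () -> 1 + 1 == 2);
        System.out.println("failures: " + suite.run());
    }
}
```

Run against every CWS, a suite like this makes it cheap to notice when a once-fixed stopper comes back.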

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mechtilde
Hello Frank,

Frank Schönheit - Sun Microsystems Germany wrote:
 Hi Mechtilde,
 
 
 So more testing on CWS is also welcome!
 Yes, full ACK to the last sentence.
 And this is not only a task for the Sun people. The persons who are
 interested in a CWS must be able to test it - and this also applies if
 they aren't able to build OOo on their own.
 

 
 However, as good as this worked out for us, I am unsure whether this
 would scale. I suppose if *every* developer would upload his/her CWS,
 then people would pick a few only, and the majority would be still be
 untested.

I don't think that developers have to upload each CWS build. I would prefer
that potential testers are able to pick up the CWS builds they want,
beside the normal test scenario.

I don't want to inflate this thread with the discussion about the
buildbots. ;-)

Regards

Mechtilde


-- 
Dipl. Ing. Mechtilde Stehmann
## http://de.openoffice.org
## Contact person for the German-language QA
## Free office suite for Linux, Mac, Windows, Solaris
## My page: http://www.mechtilde.de
## PGP encryption welcome! Key-ID: 0x53B3892B





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Mechtilde,

 I don't think that developers have to upload each CWS build. I would prefer
 that potential testers are able to pick up the CWS builds they want,
 beside the normal test scenario.

Ah, you're right, that would be most helpful ...

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mathias Bauer
Hi Ingrid,

please calm down, no reason to become upset.

Ingrid Halama wrote:

 This is a matter of how teams work. In general I would give everybody
 the credit of being able to judge whether his work imposes a huge risk
 on the product or not.
 Doesn't the current situation show that this is absolutely not the case?

No, this is just a situation that tells us that something is going wrong,
but it does not mean that everybody in the team is throwing garbage into
the master. So I would like to address this 'something', but not
exaggerate and put everybody under general suspicion.

 If a team member repeatedly showed him/herself as
 unable to do that, the team should address that in a way the team sees fit.
   
 Hm, I would prefer to give the team member a chance to avoid his 
 repeated failures, and to allow and ask him to check his changes himself.

Of course! But that's something different from urging all developers to
run a fixed number of mandatory tests on each and every CWS. There should
be a QA person for every CWS. I would consider it enough to leave it up
to the QA and development members of each CWS to find out whether additional
testing (and which tests) can help or not. It seems that currently this
can be improved ;-), so let's work on that, let's strengthen the good
forces. But a general rule for all CWSes - sorry, that's too much.

There is something that I have heard from everybody who knows something
about quality (not only in software): the best way to achieve better product
quality is to make the people who create the product better. Replacing
the necessary learning process by dictating to people what they have to
do will not suffice. Give people the tools they need, but don't
force them to use the only tools you have.

 The idea of being bound to (trapped in?) a rule set that can and must
 be enforced all the time is not my understanding of how we should work
 together. Many problems can be solved or at least reduced by appealing
 to the good forces in each of us, our skills, our will to do the right
 thing and our pride in our work. Sometimes it needs some reminders,
   
 The 'careful forces' are not very strong at the moment. And I doubt that 
 some nice reminders will bring a significant change in behavior here.
 But no problem: if we significantly enlarge the stopper phase, we can 
 live with the current behavior as well.

I don't want to share your miserable picture of your colleagues, so I
won't respond to your cynical remarks about them.

 and rules come in handy here. But you can never enforce all the rules you
 make in a free community - perhaps you can do that in prison. The
 community as a whole must take care of the value of its rules, each member
 by following them and all together by reminding others and taking action
 in cases where the rules have been violated.
   
 What are those actions that are currently taken if someone has brought 
 too much risk to the master?

This is an individual consideration that each responsible person needs
to find out for her/himself. I hope that you don't want to discuss how
to deal with other people's performance in public.

 But in case we are unsure, we could move a bugfix that looks too risky
 but isn't a showstopper to the next release. Instead of asking if
   
 So who is 'we' in this case? Is it the developer and tester who know 
 their own module best? Or is it some board of extra privileged people 
 far away from the concrete bug?
 If you believe in the good forces within all of us, then give all of us 
 the freedom to decide whether fixes go in!

I'm not sure if I understand, but you are probably mixing things up. What
goes into a release first depends on priority/severity/importance,
something that is usually not decided by a single person. Once
something is nominated to get into the release, I still see the
reservation that the code change is too much effort or seems too
risky at the time of integration. This indeed is something that the
developer should judge, whether in consultation with others or not. The final
decision always has to take more into account than just the risk. But
judging the risk indeed should be the task of the developer. And if the
developer can't confirm that it's not a huge risk, (s)he had better
assume it is.

 If you don't believe in the good forces, then there must be clear 
 criteria for why and when fixes or features will not make it into the release.
 Anything else has the smell of an arbitrary regime - or is the English term 
 'despotism'? Sorry for not being a native speaker.

I think you are on the wrong track. I can't make any sense of that.
Perhaps a consequence of the mixture I tried to sort out in the last
paragraph?

 More testing on the master(!) would be very welcome. But on the CWS?
 This will make the time to master even longer and then again we are in
 the vicious cycle I explained in my first posting in this thread.
   
 Maybe we could try that at least? At the moment we are in the vicious 
 cycle of  another 

Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mathias Bauer
Regina Henschel wrote:

 Hi all,
 
 Thorsten Ziehm schrieb:
 Hi Mathias,
 
 Mathias Bauer schrieb:
 
 More testing on the master(!) would be very welcome. But on the CWS?
 This will make the time to master even longer and then again we are in
 the vicious cycle I explained in my first posting in this thread.
 
 Yes, more testing on Master is welcome, that is true. But most testing
 must be done on the CWS. When broken code is in the master code line, it
 takes too much time to fix it. And then you cannot do quality assurance.
 You can do testing, but that has nothing to do with maintaining a quality
 standard!
 
 The time to master isn't a problem currently, I think. A general bugfix 
 CWS can be 'approved by QA' in 2 days. But when the master is broken,
 you do not know whether the bug is in the CWS or in the master. It takes
 longer to check the problems, and this is what we have now. A reduced
 time to master will come when the general quality of the master and the
 CWSs is better.
 
 So more testing on CWS is also welcome!
 
 
 I second the idea of more CWS testing. Remember the new chart module: 
 there we had a CWS for testing, and a lot of bugs were found before the 
 CWS was integrated into the master. There is no need to distribute CWS 
 builds through the mirror network; a single server which holds the builds 
 for Windows, Linux and Mac is sufficient. You can free the space after 
 the CWS is integrated.

The problem is that only a few CWS attract so much interest that people jump
on the CWS testing. My experience with providing CWS builds and asking
interested users for testing rather is that nobody does it. And even if
people tested them, they usually didn't use the CWS for serious work;
they just had a look at the new features. Serious work still is the best
regression testing.

It's always a good idea to ask users for testing CWS. But if you don't
get any help, it's often better to integrate the CWS early, as this
enhances the probability that people find bugs in passing. Having CWS
lying around for weeks and months (as happened especially for the huge
ones in the past) definitely is bad.

We always ask developers to have huge or risky changes ready early in
the release cycle, so that they can be seen by users. Real-life testing
done by human beings is the best way to iron out the remaining
hard-to-find quirks. But unfortunately "early in the release cycle"
quite often means that CWS get into conflict with those of the micro
release, and usually the latter get higher priority, both in QA and in
integration. Perhaps this should be changed?

Ciao,
Mathias




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-12 Thread Ingrid Halama

Hi Martin, all,

Martin Hollmichel wrote:

Hi,

so far almost 40 regressions have been reported as stoppers for the 3.1 
release, see query

http://tinyurl.com/cgsm3y .

For 3.0 ( http://tinyurl.com/ahkosf ) we had 27 of these issues, 
and for 2.4 ( http://tinyurl.com/c86n3u ) we had 23.


We are obviously getting worse, and I would like to know the reasons for 
this. There are too many issues for me to evaluate the root cause of 
every single one, so I would like to ask the project and QA leads to 
analyze the root causes and to come up with suggestions for avoiding 
them in the future.




My experience is that we now have more people with less experience working 
on the code, who do not have deep knowledge of the office's big feature 
set. Breaking something you do not know about then happens quite easily.
I would suggest extending the show-stopper-fixing phase relative to the 
feature phase and the normal-bug-fixing phase to balance the growing 
community.


Additionally, there might be other ideas or suggestions on how to detect 
and fix those issues earlier in our release process.


From my perspective, one reason for the high number of regressions is the 
high number of child workspaces integrated shortly before a feature 
freeze. At the moment the ITeam (the QA representative) does the 
nomination before feature freeze. As an immediate action (for the 
upcoming 3.2 release), I will limit this freedom to the period up to 4 
weeks before feature freeze; in the last 4 weeks before feature freeze, 
I or other members of the release status meeting will nominate the CWS 
for the upcoming release or decide to postpone it to the following 
release.


This is not sufficient. Heavy code restructurings and cleanups are not 
bound to the feature freeze date, but they too have a great potential to 
introduce regressions. I think the show-stopper phase must be extended 
relative to the feature phase *and* the normal-bug-fixing phase.


Furthermore, what does it help to simply let different people do the 
nominations while the criteria remain unclear? So I would like to suggest 
a criterion: in the last four weeks before the feature freeze, only those 
CWSs (but all of those) get nominated whose complete set of required 
tests has run successfully. The same applies to the last four weeks 
before the end of the normal-bug-fixing phase. We could start with the 
tests that exist already and develop them further.
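The proposed criterion amounts to a simple gate in the nomination process. A minimal sketch of that rule, with hypothetical names and data structures (the real CWS tooling is not shown here), might look like this:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class CWS:
    """Hypothetical stand-in for a child workspace and its test results."""
    name: str
    # Maps required-test name -> whether it ran successfully.
    required_tests: dict = field(default_factory=dict)


def may_nominate(cws: CWS, today: date, feature_freeze: date) -> bool:
    """Apply the suggested rule: inside the last four weeks before
    feature freeze, a CWS is nominated only if every required test has
    run successfully; outside that window the normal process applies."""
    in_final_window = timedelta(0) <= (feature_freeze - today) <= timedelta(weeks=4)
    if not in_final_window:
        return True  # not in the critical window; normal nomination
    # In the window: there must be required tests, and all must have passed.
    return bool(cws.required_tests) and all(cws.required_tests.values())


# One failing required test blocks nomination inside the final window.
cws = CWS("chart01", {"api-tests": True, "smoketest": False})
print(may_nominate(cws, date(2009, 4, 1), date(2009, 4, 20)))  # False
```

The point of the sketch is only that the criterion is mechanical: whoever does the nominations, the decision in the final window no longer depends on personal judgment but on recorded test results.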


Ingrid




[dev] amount of stopper / regressions for 3.1 release

2009-03-11 Thread Martin Hollmichel

Hi,

so far almost 40 regressions have been reported as stoppers for the 3.1 
release, see query

http://tinyurl.com/cgsm3y .

For 3.0 ( http://tinyurl.com/ahkosf ) we had 27 of these issues, 
and for 2.4 ( http://tinyurl.com/c86n3u ) we had 23.


We are obviously getting worse, and I would like to know the reasons for 
this. There are too many issues for me to evaluate the root cause of 
every single one, so I would like to ask the project and QA leads to 
analyze the root causes and to come up with suggestions for avoiding 
them in the future.


Additionally, there might be other ideas or suggestions on how to detect 
and fix those issues earlier in our release process.


From my perspective, one reason for the high number of regressions is the 
high number of child workspaces integrated shortly before a feature 
freeze. At the moment the ITeam (the QA representative) does the 
nomination before feature freeze. As an immediate action (for the 
upcoming 3.2 release), I will limit this freedom to the period up to 4 
weeks before feature freeze; in the last 4 weeks before feature freeze, 
I or other members of the release status meeting will nominate the CWS 
for the upcoming release or decide to postpone it to the following 
release.

Martin
