Hi Mathias,

        Once again, thank you for your thought-provoking mail.

On Mon, 2006-10-30 at 23:57 +0100, Mathias Bauer wrote:
> You mix up some things here. Nobody said that we need a spec for each
> and every "tiny ergonomic fix". We need them for new features - e.g. a
> quickstarter on Linux. :-P

        I refer you to the Sun rubric (** emphasis added):

http://wiki.services.openoffice.org/wiki/Category:Specification

        "I Want to Change Something in OpenOffice.org - Do I Have to
         Write a Software Specification? 
         **In general the answer is YES**. This applies to:
         Features, Enhancements, **Defects**"

        Now, of course we could explicitly exclude more things from this -
which would be good - but AFAICS, at least as of now, each tiny ergonomic
change requires -at-least- a round-trip to the team lead.

> So we have parts in the code that are unit-testable (and we have
> tests for it) but most code unfortunately is bound to some
> vcl/sfx/svx/etc. stuff that makes unit testing impossible.

        Great - so, what pieces of code have functioning unit tests ? Whenever
I hack on a module I like to try and find these tests; I poke in
'workben' and very frequently see stale/un-buildable/un-runnable code,
then I poke in qa/ and e.g. in configmgr/qa/unoapi I see a makefile.mk. I
'dmake' that, something happens, and it barfs:

Exception in thread "main" java.lang.NoClassDefFoundError:
org/openoffice/Runner
dmake:  Error code 1, while making 'ALLTAR'

        it appears broken out of the box.

        I would -Love- to have a good, standardised unit testing framework in
place to add tests to, and let us re-factor code more aggressively with
confidence. However - I just don't see anything here.

> The long duration of the tests is indeed a problem.

        Yep, and something we need to fix of course; hopefully some of Noel's
work on the performance of StarBasic may help here.

> We are currently investigating how we can get faster tests, one
> direction we are looking into is avoiding or at least reducing the
> idle/sleeping times.

        Yep - of course, understanding why these idle times are there would be
good, I guess.

>  Other ideas are welcome. My very personal opinion
> is that we should have more API (code) based tests and less GUI testtool
> based ones but I know that there are other opinions. Must be discussed.

        Well; of course, from a 'community' perspective I'm well up for adding
in-source, in-CWS, standardised, fast unit tests. E.g. reading the calc
'R1C1' work recently, I was itching to write a test suite that would
simply exercise this piece of the calc code: parse 100 formulae of
varying complexity and validate that the results were correct.

        Unfortunately it's -really- difficult to do that.

> There's nothing wrong with doing unit and API tests in Java.

        As long as they are easy to run with some standard command, I don't
much care what they're written in.

> And UNO components don't make anything more complicated

        Au contraire - if you have built all of OO.o up to 'sc' (e.g.) and you
now want to write a very small, very fast unit test to exercise just the
formula-parsing piece - you have a nightmare. Somehow you have to get
OO.o alive enough to actually start, bootstrap, etc.; you need to build a
custom .rdb file [ we have ugly, unsustainable hacks in 'workben'
directories around the place to do some of this ]. You can't even read a
file in without a huge types.rdb, a services.rdb, a ton of paths
set right, a big piece of boilerplate code, etc. etc. :-) AFAICS it's
a huge pain.

        Of course - if there was an existing small/light/simple infrastructure
that tests could be easily added to, then I for one would write more
unit tests: this gap has frustrated me on the 2/3 times I've actually
wanted to sit down & write a block of test code. [ And really, arguably,
people should be writing the torture tests as they write the code ].

        The other problem with UNO is you can only test what is exposed via
UNO, and that is often not enough to torture the internals.

> Or is there anything you find more complicated in unit testing of UNO
> components? Then please give an example.

        It's possible my horrific past experience of UNO bootstrapping is now
obsolete :-) if so, wonderful. I'm looking for a solution that doesn't
require "installation", can be run in the source tree very simply with
'dmake check' (e.g.), and will run through a list of test modules, build
them, execute them, and report their output; and wrt. VCL - having a
live X/GUI connection is, at least for now, just fine. Preferably, being
able to automate this fully (on each CWS before it's nominated) would be
ideal.

> You again mix things here. This is no longer true for fixes in 2.0. And
> nobody asks for specs for bug fixes. Please give examples where a bug
> fix was not integrated because a spec was missing.

        It depends what you see as a bug. I see something being unusable as a
serious bug, because I care about usability; OTOH you might call fixing
that 'a feature' :-) Ultimately, the end-user sees it as a bug, and the
customer must always be right - surely ?-)

> The problem is that you don't know this beforehand - and in the past one
> of our biggest problems was that we introduced code changes and later on
> the master wasn't usable for weeks.

        Ok - I hate being clobbered with this -simply-appalling- practice of
people randomly committing API breakage, and then expecting others to go
around cleaning it up. AFAICS the vast majority of the breakage you
refer to was related to this, and yes - this was (to my mind) a totally
stupid way to work :-) [ hopefully we don't need to argue that
through ].

        On the other hand, it's possible to keep the code building and running
at all times, and yet commit lots of code changes; other projects manage
to do this quite successfully. And obviously it's impossible to improve
the quality without changing the code, in some cases really quite a
lot :-)

> > OO.o should be of a lower quality in order
> > to get great testing & feedback to improve the StarOffice quality; at
> > least that is how I would structure it. Indeed - I am surprised that
> > OO.o and StarOffice releases are ~concurrent (or that StarOffice
> > sometimes leads) - that seems to me to be a recipe for poorer quality in
> > StarOffice.
> 
> I don't understand how you come to that crazy conclusion. Step back and
> think about Thorsten words with an open mind and not with the explicit
> will to use them against him.

        Nah - I don't have that will - but I do hear (unsubstantiated) rumours
of customer-critical fixes getting rushed into the tree without the same
level of process rigour that we all 'enjoy' ;-) [ perhaps that is now a
defunct practice, I hope so ].

        Wrt. StarOffice / OpenOffice quality - there is certainly a possible
strategy that makes sense here: to have somewhat different quality goals
for StarOffice vs. OpenOffice. This is the approach with the Fedora/RHEL
and OpenSUSE/SLE products - whereby the community product has a
different balance: getting cool/trendy new features (with some
instability) while the 'Enterprise' product has more stability and
(perhaps) fewer features, or perhaps just arrives slightly later.

> And please accept that "throw everything in and fix the bugs later"
> can't be the way to go. Been there, done that, felt the pain. Don't
> want to be there again.

        Is that what I'm suggesting ? Ultimately not; however, I suspect that
this is a case of the pendulum swinging way too far the other way.
AFAICS what Sun experienced before was simply brain-damaged :-) I'm not
suggesting committing stuff to HEAD that is -known- and -expected- to
break other people; that is clearly folly. However, accelerating the
pace of committing -fixes- and improvements is surely beneficial.

> Of course every rule can be changed if good arguments and (IMHO even
> more important!) good alternatives are presented.

        Sure, so the suggestion of using the wiki for specs is positive - it
removes a biggish barrier to entry, and (perhaps) makes it more useful.
Some other constructive suggestions might be these:
        
        * duplicate as little state as possible:
                + no i18n data, no screenshots in specs
                + [ unless vital to illuminate some point ]

        * understand -precisely- what is required in a spec. and
          require no more than the minimum.
                + what is required is from my POV not well understood

        * focus the saving in (very scarce) developer / QA resource on
          other things: a good unit test framework eg.

        * re-direct some spec. process time in favour of peer code
          review - it will yield a higher quality, and ultimately
          better trained, more careful coders.

        * clearly specify the cases where full specifications are not
          required, and aim to extend this list

        * ensure that the spec. process is filled with shortish 
          timeouts to handle people that don't respond

        * make every possible step asynchronous so there are as few
          round-trips as humanly possible

        * encourage stake-holders to be more responsive (IRC is a
          good tool here) so they can be consulted.

        * separate the 'team' aspect from the spec.: working with an
          inter-disciplinary team to generate -Design Requirements-
          and perhaps some UI design data is a useful thing, but
          there is no need to over-formalise that process.

> OTOH we also read in this thread that apparently our current rules and
> procedures created unsatisfactory results in some cases: CWS didn't get
> integrated because of build or platform problems, exaggerated demand for
> specifications, missing responsiveness in QA or UserExperience, missing
> support of code owners etc. Point taken. So we must take this more
> serious as it apparently has happened. Expect to hear something about
> this in the very near future.

        Great. What I'd really like to hear is a clear articulation of the
goals of the spec. process; what were the Design Requirements here ?
Also, what was the process of introducing the process ? Apparently it
just appeared suddenly, mandated from above - is that the case ?

        Ultimately though - wrt. the exasperated tone here - I can believe it's
a problem, but can this be a surprise ? We have talked over the issues
of streamlining this process in detail at the last 2 ESC meetings,
reaching consensus each time on the same approach - once in March this
year and again at OOoCon. Unfortunately, as far as is humanly discernible
from outside of Sun, progress in this area totally stalled shortly
afterwards. Now it's great that you're responding & helping to unwind
some of these problems.

        Thanks,

                Michael.

-- 
 [EMAIL PROTECTED]  <><, Pseudo Engineer, itinerant idiot

