Warly wrote:

>Mandrake Linux is released on a fixed date due to market
>constraints (distributor/supplier schedules). As a consequence the
>release date was scheduled 3 months ago. Around this date we have a
>very small margin of 2 or 3 days, but not more.
>
>So the datum is: "release date is March the 15th".
>
>Now we need to do it in this timeframe (and it is already the 17th),
>and we have no other choice.
>
>In a few days the 8.2 updates will be released, and then you will have
>the real stable and polished distro you want.
>
>Debian has no real release date; as a consequence, doing a stable
>release is just a piece of cake, anybody can do it.
>
>Maybe our model is not good.
>
>Thinking of a better model, I am not sure what I could choose. No
>deadline means no hurry, and "if the last moments didn't exist,
>nothing good would be done". 
>
>Moreover, if we wait too long, new versions of nearly all the tools in
>the distro will be released, and their respective authors will no
>longer care about fixing bugs in the old versions we would have included.
>
>I do think that our model is not so bad, and that we are reaching a
>good compromise between cutting edge and stability, but /this/
>compromise _is needed_.
>
There is an alternative model, but it requires a high initial investment.

Bugs in packaging are usually minor, but some of them can cause 
significant problems in Mandrake tools...  An improper script that sets 
up dotfiles for defaults in the security script, or a change in how the 
rpm tool treats the creation of such files during an update, could 
render the Mandrake Upgrade tool useless from distro to distro.  In 
this area, a bot that recognizes such scripts in the packaging, plus 
someone whose duty is to examine the results, could help a lot.
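Such a bot need not be complicated.  A minimal sketch, assuming the bot is fed the text of a package's %post/%postun scriptlets (as dumped by `rpm -qp --scripts`); the function name and the dotfile patterns below are my own illustration, not an existing Mandrake tool, and a real pattern list would need to be built from actual breakage reports:

```python
import re

# Patterns suggesting a scriptlet creates or rewrites user/default
# dotfiles -- the kind of change that has broken upgrades before.
# (Illustrative only; not an exhaustive or official list.)
SUSPECT_PATTERNS = [
    re.compile(r"/etc/skel/"),          # default dotfiles for new users
    re.compile(r"\$HOME/\."),           # writing into a user's dotfiles
    re.compile(r">\s*~?/?\.[A-Za-z]"),  # redirecting output into a dotfile
]

def flag_suspect_lines(scriptlet_text):
    """Return (line_number, line) pairs a human reviewer should examine."""
    hits = []
    for num, line in enumerate(scriptlet_text.splitlines(), start=1):
        if any(p.search(line) for p in SUSPECT_PATTERNS):
            hits.append((num, line.strip()))
    return hits

# Example: a %post scriptlet that rewrites the /etc/skel defaults.
example = """\
#!/bin/sh
cp /usr/share/mytool/bashrc /etc/skel/.bashrc
ldconfig
"""
for num, line in flag_suspect_lines(example):
    print("line %d: %s" % (num, line))
```

The bot only flags lines; the "someone that examines the results as a duty" stays in the loop, which keeps false positives cheap.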

Testing, per se, is not a solution to better quality.  Microsoft throws 
much money at it and shadows every programmer with a designated tester. 
They even have scholarly types who move pins on maps from alpha to beta 
to gamma and pigeonhole all the test categories.  Their results should 
speak for themselves.

The real effort needs to go into planning tools and programs, not just 
the generalities, but the details.  This needs to be done by developers 
who are trained to interact constructively.  The training alone is a 
killer.  I am talking about a bill of 3 weeks with no production, just 
training.

And that is only half the bill.  The other half is putting the training 
to use.  It would mean that the count of bug-barriers and unanticipated 
cases would drop dramatically.

As other posters suggest, that requires communication, and that means 
internal communication, among developers, using the best conference 
tools available, since even Mandrake developers are scattered a bit.

Obviously, any such model will have to wait until Mandrake turns the 
corner on profitability, but this model is far less expensive than the 
Microsoft one or the lockstep of "alpha" and "beta" releases.

Basically the release schema would look like this:

Developer training --> input objectives for the next release -->
developers confer to design tools to meet the objectives -->
first programming on _new_ tools, with no GUI work

Testers:    report hardware and software unanticipated cases for the tools
Developers: work on other projects while the reports collect, probably
            other tools, or adding GUIs to tools that seem to work well

Testers:    ask for other packages to test, having found most of the cases
Developers: work on fixes and release the secondary projects they were
            working on

Testers:    continue the case-mismatch work
Developers: release the second round of non-GUI scratch editions of the tools

Testers:    offer brainstorming ideas about making the tools user-friendly
            and setting up the GUI
            (no arguing, no bad ideas; go for quantity of ideas, collect
            and group them later)
            (do not discuss ideas or comment on them; building on them is OK)
Developers: confer, by the best tools available, with the testers and sort
            out the current best ideas for the design

Testers:    finish testing the second round; review the product
            specifications/goals from the brainstorming/multivote process
            to get a good idea what to test for on the GUI
Developers: finish off minor bugs in the non-GUI versions of the tools;
            go for the first-edition GUI

Testers:    test with brainlessness and abandon...  Do everything wrong
            that can be imagined for a total newbie or a Windows convert.
            (Even include installing to hdc, then removing it and moving
            hdc to hda, and see how the software handles it.  Don't laugh;
            there is an actual support case where that happened.)
Developers: release some tools and continue work on others; continue
            conferences on each tool and make sure that the knowledge of
            all is shared internally in making these tools

Now, I know there is some loss in time and effort in coding the action 
separately from the interface, or indeed in coding a text as well as a 
graphical interface, but consider that both can be offered as features....
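One way to keep that loss small is to write the tool's real work as plain functions with no interface in them at all, so the text front end and a later GUI are both thin wrappers over the same core.  A minimal sketch with invented names (none of these are real Mandrake tool functions):

```python
def free_space_report(partitions):
    """Core logic, no interface code: percent free per partition.

    `partitions` maps a partition name to (total_kb, used_kb).
    """
    report = {}
    for name, (total, used) in partitions.items():
        report[name] = round(100.0 * (total - used) / total, 1)
    return report

def text_frontend(partitions):
    """Text interface: only formats what the core computed."""
    for name, pct in sorted(free_space_report(partitions).items()):
        print("%-10s %5.1f%% free" % (name, pct))

# A GUI front end would call free_space_report() the same way and draw
# bars instead of printing; the logic is written and debugged only once.
text_frontend({"/dev/hda1": (1000000, 250000),
               "/dev/hda5": (500000, 450000)})
```

With this split, the testers can hammer the core long before any GUI exists, which is exactly the ordering the schema above calls for.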

And the other time loss is the training.  The conferencing is part of the 
cost of the product and should not be counted as a loss.  The expulsion 
of bugs is the payoff, as well as better-designed software.  If we had 
thought up front about broken connections and tested for them while 
developing Software Manager, it would have reached its current state two 
generations ago.  That would have been worth a lot more than the 
conferencing cost and time.

The barrier to this model is financial.  MandrakeSoft cannot easily 
afford to blow three weeks of every developer's time on the training, 
nor easily work through the initial rough patches of the new system.  It 
would require steadfast leadership and extreme effort on the part of 
management to make it work.  When the finances are better you might try 
to implement it, but for now, with the exception of better communication 
with testers, I think you are doing the best that can be done.

Civileme


