Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Mathias Bauer
Hi Sophie,

thank you for your comments. I'm glad to get feedback from non-code
hackers as well, since I'm hoping to make life easier for them too.

I will add your points to my list. Let me give some immediate answers to
some of them; I will come back to the other points, which I have put on my
list for now and snipped from my reply, once I can say something about them.

Sophie wrote:

 What a community member like me struggles with most when doing QA 
 is time. It's not very hard to check for issues, run VCLTestTools or 
 write TCS, but each takes a lot of time (and it also takes a lot of time 
 to learn to do everything the right way, but it's really a very good 
 experience thanks to the e-team I've worked with :).

 The tools are not so difficult (for a non-technical person like me) and I 
 found that dealing with EIS (the QA part) is really helpful.
 The good points:
 - Termite to build install sets
 - The ability to run the Automation CAT0 on Sun servers
 - Direct upload of the test results to QUASTe
 - Comparison with the MWS; you just have to re-run the failures locally.
Really, this part of the work is superfluous. IMHO tests that are known
to be broken in a particular milestone should also be skipped in a CWS based
on it. So if you get a red state, you know that something is
wrong without having to browse through QUASTe. I hope that this
will be changed, but I don't know what the Sun QA engineers working on
the autotests are planning. The testing procedures themselves are not
part of our effort to improve the build environment, so I can only guess.

 - EIS allows very good coordination between all of us.
 All this chain is really time-saving (your own machines can rest ;)
Thanks, it's good to get some positive feedback too. :-)

 Concerning the localization process: well, we still have a lot of things 
 to improve from my point of view. Better communication and 
 coordination with developers may matter more here than our usual 
 complaints about Pootle. So yes,
 - help content should be part of the concerned CWS; if it is marked that 
 the help is affected, then the corresponding task should be part of the 
 CWS issue list
 - there should not be a new CWS added to SVN without a spec or a detailed 
 issue
Well, we have always advertised the value of specs; it's good to read that
this opinion is shared by those whom we saw as consumers of
specifications.

Whether help content must be part of a feature CWS or
not should be seen in the general context of how we want to localize it.

 - I support your idea of working with an SCM; if it is well documented, I'm sure 
 we will be able to get the support of other l10n teams to work on CWS 
 and Mercurial, and even more so if that spares us all this mess in the last 
 snapshots.

Currently, the way localizations get into OOo is very inefficient. At
some point in time a deadline is reached, and a whole bunch of
localizable content is exported from the source tree and handed over.
Localizers do their work, send it back and have to wait until Sun
release engineering has collected it and provided a new build containing
all the work. If bugs are found, the next round follows. I think we can
do better.

My idea for a long-term goal is that all localization-related content
(help, UI etc.) resides in its own Mercurial repository that localizers
can check out and directly commit to (if they want). A *simple* build
process (one that does not require checking out the whole OOo source) allows
them to test their results themselves and immediately correct errors,
but that would be optional. Of course it should still be possible to
work as today (send changes to Sun release engineering), comparable to
the situation of code contributors, who can either create their own CWS or
send in patches.
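
As an illustration of that workflow, here is a minimal sketch; the
repository URL, the build helper script and the language code are invented
placeholders, not existing OOo infrastructure:

    # Minimal sketch of the proposed l10n workflow (all names hypothetical).
    import subprocess

    L10N_REPO = "https://hg.example.org/ooo-l10n"   # hypothetical l10n repository
    LANG = "fr"                                     # language being worked on

    def run(*cmd):
        """Echo and execute one command, aborting on the first failure."""
        print("$ " + " ".join(cmd))
        subprocess.check_call(cmd)

    # 1. Check out only the localization content, not the whole OOo source tree.
    run("hg", "clone", L10N_REPO, "ooo-l10n")
    # 2. ...the translator edits the UI/help strings for LANG here...
    # 3. Optionally run the hypothetical lightweight build to test the result.
    run("python", "ooo-l10n/build_l10n.py", "--lang", LANG)
    # 4. Commit and push directly, instead of waiting for a Sun releng round-trip.
    run("hg", "commit", "-R", "ooo-l10n", "-m", "Update " + LANG + " translation")
    run("hg", "push", "-R", "ooo-l10n")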

In a system like this, localization does not even need to have a
deadline. Once a feature is done and documented, it can be
localized. Whether we want to do that is open for discussion.

We are far from that idea - one necessary precondition for it would be
to free our l10n builds from using the OOo Resource Compiler, and I'm
sure there are other obstacles - but basically I don't understand why it
shouldn't be possible. As localization is very important for OOo, the
effort to improve its process would surely be justified.

What do you think? Of course I don't want to impose something on others,
but perhaps many people would like to work in such a faster and more
flexible way, even if it required some work on the build environment and
some change in the working style of the l10n teams. Maybe nobody dared to
ask for that because it was considered improbable that something
like that could ever be implemented?

Regards,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Helge Delfs - Sun Germany - ham02 - Hamburg
Hi,

On 09/23/09 08:29, Mathias Bauer wrote:
 Really, this part of the work is superfluous. IMHO tests that are known
 to be broken in a particular milestone should also be skipped in a CWS based
 on it. So if you get a red state, you know that something is
 wrong without having to browse through QUASTe. I hope that this
 will be changed, but I don't know what the Sun QA engineers working on
 the autotests are planning. The testing procedures themselves are not
 part of our effort to improve the build environment, so I can only guess.

One should argue at a more granular level here. If a test shows a red
state, it need not have the same cause as before; there might be
another reason for it.
QUASTe has an overview page called 'Analyze errors' that lists only
the differences between test results on the MWS and a CWS based on the same
milestone, so it is fast and easy to get an overview and compare
test results.
You reach this page by choosing a platform; the page 'Testresults
grouped by application' then has a button 'Analyze errors'.
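
In other words, the page reduces to roughly this comparison (a sketch with
invented data structures; QUASTe's real schema is of course richer):

    # Keep only the failures that are new in the CWS, i.e. not already red in
    # the MWS milestone it is based on. Data shapes are invented for the sketch.
    def analyze_errors(mws_results, cws_results):
        return {test: state
                for test, state in cws_results.items()
                if state == "red" and mws_results.get(test) != "red"}

    mws = {"writer_load": "green", "calc_print": "red"}
    cws = {"writer_load": "red",   "calc_print": "red"}
    print(analyze_errors(mws, cws))   # {'writer_load': 'red'} -> breakage caused by the CWS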



Best Regards
 Helge

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Stephan Bergmann
On Sep 23, 2009, at 8:45 AM, Frank Schoenheit, Sun Microsystems  
Germany wrote:

Hi Mathias,

Really, this part of the work is superfluous. IMHO tests that are known
to be broken in a particular milestone should also be skipped in a CWS
based on it.


Don't think this is a good idea, since a test can be broken in different
ways. For instance, if your test checks 10 aspects, and one of them is
broken in the MWS, you still want to know if the other 9 are okay in your
CWS. Otherwise, you'll notice a breakage in those 9 only when the one
failure is fixed and the whole test is re-enabled in the MWS.


I think this point is moot. I strongly believe that tests that are
regularly run on all CWS must also be run on the MWS, and the MWS must
not be declared ready until all those tests succeed. Everything else
is a waste of time.

(The link below is about a different scenario of running tests, if I
understand correctly.)


-Stephan


An interesting read (IMO), somewhat related to the topic:
http://blog.ulf-wendel.de/?p=259 (sorry, German, but I know you speak it
pretty well :)
My favorite quote, which I'd strongly agree with:

 Deactivating a test to reach 0 test failures is wrong, since the
 fact that a failure is the same is also meaningful information.


-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Helge Delfs
Hi Sophie,

On 09/21/09 20:15, Sophie wrote:
 - No documentation on the interaction between EIS and QUASTe (or I
 didn't find it, and yes, I can write it myself ;). There is still one
 button in QUASTe that I don't understand, which seems to have an impact on
 EIS. Also I'm not sure that what you (Sun) see is what I (community)
 see; we do not have access to the same interfaces.

I assume you mean the button 'Submit status to EIS' when analyzing
test results for a CWS.
Normally, when all required autotests run via the testbot are finished, the
testbot triggers QUASTe to evaluate the test results created by VCLTesttool.
QUASTe then sends its overall status (green or red) to the EIS
application. Those buttons (send status 'green' or 'red' to EIS) make it
possible to override the status in EIS or to send a status
manually if it couldn't be sent automatically, e.g. due to network or
other problems. Only the QA representative can submit
this status (login required); that's the reason why it isn't visible to
everyone browsing QUASTe. I promise to update the documentation[1].
Regarding QUASTe, there are no limitations on what the community can see. Both
(the internal and the community QUASTe) use the same database. The only
reason for us to use an internal QUASTe is to be able to
send test results created by autotests to the database automatically from
within our internal network.
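
Schematically, the flow looks roughly like this (function and field names
are invented for the sketch; the real QUASTe/EIS interfaces are not shown):

    # Collapse all VCLTesttool results for a CWS into the single green/red
    # state that QUASTe forwards to EIS.
    def overall_status(test_results):
        return "green" if all(s == "green" for s in test_results.values()) else "red"

    # Normally triggered automatically after the testbot run; the manual path
    # models the 'Submit status to EIS' button, restricted to the QA representative.
    def submit_status_to_eis(cws, status, manual=False, is_qa_rep=False):
        if manual and not is_qa_rep:
            raise PermissionError("only the QA representative may submit the status manually")
        print("EIS <- CWS %s: %s" % (cws, status))

    results = {"cat0_writer": "green", "cat0_calc": "green"}
    submit_status_to_eis("examplecws01", overall_status(results))             # automatic path
    submit_status_to_eis("examplecws01", "red", manual=True, is_qa_rep=True)  # manual override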

 - No access to the TCS repository (which means that, as a community member,
 you don't have access to all the tools involved in the process).
 All in all, as a community member, you just have the feeling that it's
 very, very difficult to contribute to CWS QA, when for some CWS, of
 course, it may not be more complicated than following the localization
 process, for example.

The TCS tooling is indeed a Sun-internal tool that I developed
before OOo was open-sourced. Currently the testcase specifications are
exported every Friday[2].
We know that it is a shortcoming that this tool is not available to the community.
I'm currently working on a new TCS tooling that will be integrated into
QUASTe and will hopefully be available by the end of this year.


Best Regards
 Helge


[1]http://wiki.services.openoffice.org/wiki/QUASTe
[2]http://qa.openoffice.org/ooQAReloaded/TestcaseSpecifications/


-- 
===
Sun Microsystems GmbH   Helge Delfs
Nagelsweg 55Quality Assurance Engineer
20097 Hamburg   OOo Team Lead Automation
http://qa.openoffice.orgmailto:h...@openoffice.org
http://wiki.services.openoffice.org/wiki/User:Hde
PGP Key-ID: 0x395940C4

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Frank Schoenheit, Sun Microsystems Germany
Hi Stephan,

 I think this point is moot. I strongly believe that tests that are 
 regularly run on all CWS must also be run on the MWS, and the MWS must 
 not be declared ready until all those tests succeed.

I believe that, too, but it's theory only. It's pretty similar to the theory
that, of course, no release should have bugs which the previous release
didn't have. But in reality, you will always need to sacrifice those rules -
both of them - to higher needs. If you fix an important, urgent,
serious, data-killing bug, and notice too late (by whatever
definition, not only because you ran the tests too late) that you broke
something else, then you might decide that it's worth it. Here,
automatic tests are in no way different from manual tests.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Sophie

Hi Mathias, all,
Mathias Bauer wrote:

Hi Sophie,

thank you for your comments. I'm glad to get feedback from non-code
hackers as well, since I'm hoping to make life easier for them too.


Thanks :)


I will add your points to my list. Let me give some immediate answers to
some of them; I will come back to the other points, which I have put on my
list for now and snipped from my reply, once I can say something about them.

Sophie wrote:

What a community member like me struggles with most when doing QA 
is time. It's not very hard to check for issues, run VCLTestTools or 
write TCS, but each takes a lot of time (and it also takes a lot of time 
to learn to do everything the right way, but it's really a very good 
experience thanks to the e-team I've worked with :).


The tools are not so difficult (for a non-technical person like me) and I 
found that dealing with EIS (the QA part) is really helpful.

The good points:
- Termite to build install sets
- The ability to run the Automation CAT0 on Sun servers
- Direct upload of the test results to QUASTe
- Comparison with the MWS; you just have to re-run the failures locally.

Really, this part of the work is superfluous. IMHO tests that are known
to be broken in a particular milestone should also be skipped in a CWS based
on it. So if you get a red state, you know that something is
wrong without having to browse through QUASTe. I hope that this
will be changed, but I don't know what the Sun QA engineers working on
the autotests are planning. The testing procedures themselves are not
part of our effort to improve the build environment, so I can only guess.


Helge has answered here: the 'Analyze errors' function gives an overview 
of the differences between MWS and CWS for the Automation CAT0 runs, so it 
is fast and easy.



- EIS allows very good coordination between all of us.
All this chain is really time-saving (your own machines can rest ;)

Thanks, it's good to get some positive feedback too. :-)

Concerning the localization process: well, we still have a lot of things 
to improve from my point of view. Better communication and 
coordination with developers may matter more here than our usual 
complaints about Pootle. So yes,
- help content should be part of the concerned CWS; if it is marked that 
the help is affected, then the corresponding task should be part of the 
CWS issue list
- there should not be a new CWS added to SVN without a spec or a detailed 
issue

Well, we have always advertised the value of specs; it's good to read that
this opinion is shared by those whom we saw as consumers of
specifications.


Specs are important for people like me for several tasks we carry out in 
the project: for QA tasks (closing an issue verified in a CWS, doing 
TCS on dev snapshots, etc.), localization (you can't correctly localize a 
new functionality when you only have strings lost in several files or in 
the UI, or check for integration errors) and documentation.


Whether help content must be part of a feature CWS or
not should be seen in the general context of how we want to localize it.


Not only that: sometimes changes are not documented because the corresponding 
CWS has not been marked as help-relevant; having an issue in the CWS task 
list would help to make sure (or at least give a chance) that it will be 
documented in the same release the CWS is integrated into.


- I support your idea of working with an SCM; if it is well documented, I'm sure 
we will be able to get the support of other l10n teams to work on CWS 
and Mercurial, and even more so if that spares us all this mess in the last 
snapshots.


Currently, the way localizations get into OOo is very inefficient. At
some point in time a deadline is reached, and a whole bunch of
localizable content is exported from the source tree and handed over.
Localizers do their work, send it back and have to wait until Sun
release engineering has collected it and provided a new build containing
all the work. If bugs are found, the next round follows. I think we can
do better.

My idea for a long-term goal is that all localization-related content
(help, UI etc.) resides in its own Mercurial repository that localizers
can check out and directly commit to (if they want). A *simple* build
process (one that does not require checking out the whole OOo source) allows
them to test their results themselves and immediately correct errors,
but that would be optional. Of course it should still be possible to
work as today (send changes to Sun release engineering), comparable to
the situation of code contributors, who can either create their own CWS or
send in patches.

In a system like this, localization does not even need to have a
deadline. Once a feature is done and documented, it can be
localized. Whether we want to do that is open for discussion.

We are far from that idea - one necessary precondition for it would be
to free our l10n builds from using the OOo Resource Compiler, and I'm
sure there are other obstacles - but basically I don't understand why it
shouldn't be possible.

Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Sophie

Hi Helge,
Helge Delfs wrote:

Hi Sophie,

On 09/21/09 20:15, Sophie wrote:

- No documentation on the interaction between EIS and QUASTe (or I
didn't find it, and yes, I can write it myself ;). There is still one
button in QUASTe that I don't understand, which seems to have an impact on
EIS. Also I'm not sure that what you (Sun) see is what I (community)
see; we do not have access to the same interfaces.


I assume you mean the button 'Submit status to EIS' when analyzing
test results for a CWS.


Yes :)

Normally, when all required autotests run via the testbot are finished, the
testbot triggers QUASTe to evaluate the test results created by VCLTesttool.
QUASTe then sends its overall status (green or red) to the EIS
application. Those buttons (send status 'green' or 'red' to EIS) make it
possible to override the status in EIS or to send a status
manually if it couldn't be sent automatically, e.g. due to network or
other problems. Only the QA representative can submit
this status (login required); that's the reason why it isn't visible to
everyone browsing QUASTe. I promise to update the documentation[1].
Regarding QUASTe, there are no limitations on what the community can see. Both
(the internal and the community QUASTe) use the same database. The only
reason for us to use an internal QUASTe is to be able to
send test results created by autotests to the database automatically from
within our internal network.


Thanks for this explanation, it's clear to me now.



- No access to the TCS repository (which means that, as a community member,
you don't have access to all the tools involved in the process).
All in all, as a community member, you just have the feeling that it's
very, very difficult to contribute to CWS QA, when for some CWS, of
course, it may not be more complicated than following the localization
process, for example.


The TCS tooling is indeed a Sun-internal tool that I developed
before OOo was open-sourced. Currently the testcase specifications are
exported every Friday[2].
We know that it is a shortcoming that this tool is not available to the community.
I'm currently working on a new TCS tooling that will be integrated into
QUASTe and will hopefully be available by the end of this year.


Thanks for your work on this. It's important if we want more community 
members to participate in testing dev snapshots or help with CWS QA 
(btw I need to check why mine do not appear online).


Kind regards
Sophie

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Stephan Bergmann

On 09/23/09 11:17, Frank Schoenheit, Sun Microsystems Germany wrote:

Hi Stephan,

I think this point is moot. I strongly believe that tests that are 
regularly run on all CWS must also be run on the MWS, and the MWS must 
not be declared ready until all those tests succeed.


I believe that, too, but it's theory only. It's pretty similar to the theory
that, of course, no release should have bugs which the previous release
didn't have. But in reality, you will always need to sacrifice those rules -
both of them - to higher needs. If you fix an important, urgent,
serious, data-killing bug, and notice too late (by whatever
definition, not only because you ran the tests too late) that you broke
something else, then you might decide that it's worth it. Here,
automatic tests are in no way different from manual tests.


But such activity should be the rare exception (otherwise you are 
doomed anyway), where you can most probably take care of the 
consequences (like broken tests) on a case-by-case basis. So we should 
be able to bring theory and practice together reasonably closely...


-Stephan

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Mathias Bauer
Frank Schoenheit, Sun Microsystems Germany wrote:

 Hi Mathias,
 
 Really, this part of the work is superfluous. IMHO tests that are known
 to be broken in a particular milestone should also be skipped in a CWS based
 on it.
 
 Don't think this is a good idea, since a test can be broken in different
 ways. For instance, if your test checks 10 aspects, and one of them is
 broken in the MWS, you still want to know if the other 9 are okay in your
 CWS. Otherwise, you'll notice a breakage in those 9 only when the one
 failure is fixed and the whole test is re-enabled in the MWS.

Then the granularity of the test is wrong. As we have limited resources, we
must not waste our time. It's better to skip a particular test
(hopefully at a granularity where not too many aspects are dropped)
for some milestones and fix the root cause of its breakage ASAP than to
waste as much time as we do now. Of course fixing the breakage must get high
priority.
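
A minimal sketch of what that granularity could look like (test names and
the source of the known-failure list are assumptions for illustration):

    import unittest

    # Failures already known from the milestone the CWS is based on (assumed input).
    KNOWN_MWS_FAILURES = {"test_print_preview"}

    def known_broken(name):
        """Skip one aspect that is already red in the MWS, instead of the whole test."""
        return unittest.skipIf(name in KNOWN_MWS_FAILURES,
                               "already broken in the milestone; root cause tracked separately")

    class WriterSmokeTest(unittest.TestCase):
        # One aspect per test method, so skipping a known breakage does not
        # silence the remaining aspects.
        def test_load_document(self):
            self.assertTrue(True)      # placeholder for a real check

        def test_save_document(self):
            self.assertTrue(True)      # placeholder for a real check

        @known_broken("test_print_preview")
        def test_print_preview(self):
            self.fail("aspect known to be broken in the MWS")

    if __name__ == "__main__":
        unittest.main()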

 An interesting read (IMO), somewhat related to the topic:
 http://blog.ulf-wendel.de/?p=259 (sorry, German, but I know you speak it
 pretty well :)
 My favorite quote, which I'd strongly agree with:
 
   Deactivating a test to reach 0 test failures is wrong, since the
   fact that a failure is the same is also meaningful information.

Just because someone blogs something doesn't mean that it is correct. There are
a lot of blogs out there. :-)

Or in short: I don't buy that statement. Information comes at a
price, and that price must be taken into account if you don't have unlimited
resources.

We could keep the test if the status page would clearly show that my test
run gave the same result as the master test. The waste of time would be
smaller then (a broken test doesn't take a lot of time; the waste comes
from manually checking the results as we have to do now). But I'd take
any bet that everybody would do the same with the valuable
information that the master has the same failure as the CWS: read it
and forget it.

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Mathias Bauer
Helge Delfs - Sun Germany - ham02 - Hamburg wrote:

 Hi,
 
 On 09/23/09 08:29, Mathias Bauer wrote:
 Really, this part of the work is superfluous. IMHO tests that are known
 to be broken in a particular milestone should also be skipped in a CWS based
 on it. So if you get a red state, you know that something is
 wrong without having to browse through QUASTe. I hope that this
 will be changed, but I don't know what the Sun QA engineers working on
 the autotests are planning. The testing procedures themselves are not
 part of our effort to improve the build environment, so I can only guess.
 
 One should argue at a more granular level here. If a test shows a red
 state, it need not have the same cause as before; there might be
 another reason for it.

I maintain my point that the price we have to pay for this tiny little
piece of information in the rare cases where we really have an
additional reason for the failure is too high. This is looking for
perfection in the wrong area.

 QUASTe has an overview page called 'Analyze errors' that lists only
 the differences between test results on the MWS and a CWS based on the same
 milestone, so it is fast and easy to get an overview and compare
 test results.
 You reach this page by choosing a platform; the page 'Testresults
 grouped by application' then has a button 'Analyze errors'.

I know that in the end you can find out whether an error is real or not, but
that's not the point. It shouldn't be necessary. And I wouldn't call
that fast and easy.

But as this does not touch the build environment and nobody asked to
change anything (it was just my private opinion), we can leave it for
now. Let's agree to disagree.

Regards,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Mathias Bauer
Jörg Jahnke wrote:

 Hi Mathias,
 
 Mathias Bauer schrieb:
 ...
 
 Currently, the way localizations get into OOo is very inefficient. At
 some point in time a deadline is reached, and a whole bunch of
 localizable content is exported from the source tree and handed over.
 Localizers do their work, send it back and have to wait until Sun
 release engineering has collected it and provided a new build containing
 all the work. If bugs are found, the next round follows. I think we can
 do better.
 
 My idea for a long-term goal is that all localization-related content
 (help, UI etc.) resides in its own Mercurial repository that localizers
 can check out and directly commit to (if they want). A *simple* build
 process (one that does not require checking out the whole OOo source) allows
 them to test their results themselves and immediately correct errors,
 but that would be optional. Of course it should still be possible to
 work as today (send changes to Sun release engineering), comparable to
 the situation of code contributors, who can either create their own CWS or
 send in patches.
 
 In a system like this, localization does not even need to have a
 deadline. Once a feature is done and documented, it can be
 localized. Whether we want to do that is open for discussion.
 
 Even with the current L10N process, the increased flexibility that you 
 seem to have in mind can IMO be reached, since it would be possible to 
 import the new and changed strings from every milestone into the Pootle 
 system used for translation. But a little more automation is needed 
 to reach this goal, since doing these imports manually is quite 
 time-consuming. Then translators could work continuously on the translation.
 
 Getting the translations back into the code still requires a CWS. This 
 work is, at least at the moment, done by Ivo or Vladimir, and they do it 
 for all languages, saving the L10N teams a lot of work IMO. Currently 
 such an L10N CWS is created once per release, which means that 
 translators get feedback on their work quite late. This frequency could 
 be increased, and I know that Ivo has spent some work on automating the 
 processes around creating an L10N CWS in order to create these CWS more 
 often, but AFAIK there still remains some work to do.
 
 So, while I think you drew a valid picture of an L10N build process 
 above, I think we could improve the situation with far fewer changes to 
 the current processes and thus far less work. IMO we should try to 
 implement the improvements I sketched out above first and then see if we 
 have to go a step further and perhaps change the processes as you 
 outlined above.
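
To make the per-milestone import Jörg describes above concrete, here is a
rough sketch; the data shapes are invented, and the real SDF/PO handling
and Pootle upload step are not shown:

    # Find the strings that are new or changed between two milestones, so only
    # those need to be (re)imported into Pootle for translation.
    def changed_strings(previous, current):
        """Both arguments map a string key (file, resource id) to its English text."""
        return {key: text for key, text in current.items()
                if previous.get(key) != text}

    m59 = {("sw/ui.src", "STR_SAVE"): "Save", ("sw/ui.src", "STR_PRINT"): "Print"}
    m60 = {("sw/ui.src", "STR_SAVE"): "Save a Copy", ("sw/ui.src", "STR_EXPORT"): "Export"}

    for key, text in changed_strings(m59, m60).items():
        print("%s -> %s" % (key, text))   # the strings to push to Pootle for m60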

Sure! Every improvement helps. And if we can think about how to get
rid of all the intermediate steps in the long run, that would be great!

If I just look at the matter from an external POV, I don't see why
localization contributions should be treated differently from others.
Code developers either create a CWS or send in a patch. Localization
developers currently only have option 2, even if they wanted to use
option 1 and could cope with it. Moreover, they cannot even test
their patches in a running build; they have to wait for a build done by
Sun Releng - what a difference from code contributions!

Having both options would be a worthwhile goal for the *future*, because
it would dramatically reduce the turnaround times and also the number of
necessary turnarounds. And the future is what we are currently
discussing. Please don't get me wrong: not everything I have
written means that I want it all and I want it now. :-)

Regards,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-23 Thread Mathias Bauer
Sophie wrote:

 Whether help content must be part of a feature CWS or
 not should be seen in the general context of how we want to localize it.
 
 Not only that: sometimes changes are not documented because the corresponding 
 CWS has not been marked as help-relevant; having an issue in the CWS task 
 list would help to make sure (or at least give a chance) that it will be 
 documented in the same release the CWS is integrated into.

Ah, that is what you were pointing at! Yes, good point.

 We have already talked about improving the process on the l10n list and at 
 the last OOoCon as well. See the thread here:
 http://l10n.openoffice.org/servlets/ReadMsg?listName=dev&msgNo=10607
 As I said, I'm not technical enough to know what the best tools 
 and process would be, but I'm used to SCMs and I can understand the benefits 
 we can gain here. So what you described above using Mercurial is a kind of 
 dream I'm sure all l10n teams would like to see happen :)
 
 It may not be relevant for the discussion here, but we have had to fight 
 against integration issues, formatting issues, development issues, 
 etc... I've spent hours translating strings that were not integrated, or 
 have seen changes reverted in the end.
 So for me it's not only a localization process issue but also a matter of 
 better coordination/communication between the development process and the 
 localization process.

I have just outlined a skeleton of what I could imagine as (dream of?)
a localization process. The nasty details will be interesting to
discuss. I think that the fewer steps we have between the files that we
use for a localized build and what a localization developer
actually uses to provide his work, the better the result will be. But I
agree with Jörg: this will take time, and if smaller improvements can be
made before then, we should be glad to get them.

Regards,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-22 Thread Bernd Eilers

Sophie wrote:

Hi Mathias, all,



Hi Sophie,


[...snip...]


 Also I'm not sure that what you (Sun) see is what I (community) 
 see; we do not have access to the same interfaces.




Well, basically the interfaces are the same where possible, with only a 
few exceptions.


In EIS, only the description and comments for ChildWorkspaces marked as 
Sun-internal are hidden from the community, as those may contain sensitive 
information which is not to be published, e.g. information about a Sun 
customer for which the ChildWorkspace would be important.
But you do also see, on the Sun-external EIS, the other information that is 
available for those internal ChildWorkspaces, e.g. their current status. 
Such Sun-internal ChildWorkspaces are rarely used.


And then some links to services may differ in EIS. Just as there is one 
Sun-internal EIS incarnation and one external one, there is one QUASTe 
service in the Sun-internal network and one QUASTe service in the external 
network, which both basically do the same work / provide the same data. 
The links to the QUASTe service in the Sun-internal EIS incarnation thus 
also differ from those used in the Sun-external EIS incarnation.



 [...snip..]

Kind regards
Sophie



Kind regards,
Bernd Eilers

-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org



Re: [tools-dev] Comments on Mathias blog post about contributing

2009-09-22 Thread Caolán McNamara
On Mon, 2009-09-21 at 20:15 +0200, Sophie wrote:
 - Not enough bots or optimized builds bots to have more capacity for 
 building install sets.

This was particularly miserable coming up to the feature freeze. I think
there were 80+ hour waiting times for builds on the solitary Windows
bot :-(

 - Install sets built by the bots are different from Sun builds, which makes 
 some automated tests fail (why is the quickstarter available on Linux 
 builds? It took me some time to understand that I have to disable it first)

In general, the ideal situation is that configure with no special
options should generate the same set of features as used by a Sun build.
Unfortunately internal and external builds don't use the same configuration
process, so the two fall out of sync sometimes. In the short term I
reckon we should change the gtk quickstarter configure option to
default to off so as to match the internal settings, especially as it's got
a crasher in it (#i101245#). Long term it'd be better to just use the
same configuration process; that'd avoid the occasional breakage of
configure as well :-)

C.


-
To unsubscribe, e-mail: dev-unsubscr...@tools.openoffice.org
For additional commands, e-mail: dev-h...@tools.openoffice.org