On 12 Jul 07, at 20:29, Jean-Christophe Helary wrote:


On 12 Jul 07, at 17:36, Rafaella Braconi wrote:

However, from what I understand here, the issue you see is not necessarily Pootle itself but the format Pootle delivers, which is .po. As already said, Pootle will be able to deliver the content in XLIFF format in the near future. Would you still see a problem with this?

Yes, because the problem is not the delivery format; it is the fact that there are two conversions from the HTML to the final format, and the conversion processes are not clean. Similarly, the TMX files you produce are not real TMX (at least not the one you sent me).
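For reference, here is roughly what a minimal well-formed TMX 1.4 file looks like (a sketch only; the segment content is invented, and the file you sent me does not follow this structure):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<tmx version="1.4">
  <header creationtool="example" creationtoolversion="1.0"
          segtype="sentence" o-tmf="tmx" adminlang="en"
          srclang="en" datatype="html"/>
  <body>
    <!-- one translation unit per segment pair -->
    <tu>
      <tuv xml:lang="en"><seg>Open the file.</seg></tuv>
      <tuv xml:lang="fr"><seg>Ouvrez le fichier.</seg></tuv>
    </tu>
  </body>
</tmx>
```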

I am not arguing that UI files would benefit from such treatment. I am really focusing on the HTML documentation.

To make things even clearer, I am saying that using _any_ intermediary format for documentation is a waste of resources.

If translators want to use intermediary formats to translate HTML in their favorite tool (be it PO, XLIFF or anything else) that is their business.

Janice (NetBeans) confirmed to me that NB was considering a Pootle server exclusively for UI files (currently Java properties files), but in the end that would mean overhead anyway, since the current process takes the Java properties files as they are for translation in OmegaT.
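For context, Java properties files are already a flat key/value format that a translator can work on directly, which is why adding an intermediary format is pure overhead (the keys below are invented for illustration):

```properties
# Bundle.properties (English source; keys invented for illustration)
OpenFileAction.label=Open File
SaveFileAction.label=Save File
# The translated copy, Bundle_fr.properties, keeps the same keys
# and only changes the values:
#   OpenFileAction.label=Ouvrir le fichier
```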

In NB, the HTML documentation is available in packages corresponding to the modules, and the TMX (a real one...) makes it possible to automatically get only the updated segments. There is no need for a complex infrastructure to produce differentials of the files; all of this is managed by the translation tool automatically, and _that_ allows the translator to get _much more_ leverage from the context and to benefit from a much greater choice of correspondences.
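The process above can be sketched very simply. This is not OmegaT's actual code, just an illustration of the principle: the tool matches the segments of the updated HTML against the TMX memory, reuses exact matches, and surfaces only the new or changed segments to the translator (all names and data here are invented).

```python
# Sketch: how a translation tool uses a TMX to surface only updated
# segments. TMX parsing and HTML segmentation are reduced to plain
# dicts and lists for illustration.

def split_new_and_reused(source_segments, tmx_memory):
    """Return (segments needing translation, translations reused as-is)."""
    to_translate = []
    reused = {}
    for seg in source_segments:
        if seg in tmx_memory:          # exact match: leverage the memory
            reused[seg] = tmx_memory[seg]
        else:                          # new or updated: needs the translator
            to_translate.append(seg)
    return to_translate, reused

# Previous translations delivered as a TMX (here: a plain dict).
memory = {
    "Open the file.": "Ouvrez le fichier.",
    "Save your work.": "Enregistrez votre travail.",
}
# Segments extracted from the updated HTML set.
updated_doc = ["Open the file.", "Save your work often.", "Close the file."]

todo, done = split_new_and_reused(updated_doc, memory)
print(todo)  # ['Save your work often.', 'Close the file.']
print(done)  # {'Open the file.': 'Ouvrez le fichier.'}
```

No file differentials are needed: the memory itself tells the tool which segments are already covered.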

I suppose the overhead caused by the addition of an intermediary format for the UI files will be balanced by the management functions offered by the new system, but I wish we did not have to translate yet another intermediate format, for the simple reason that the existing conversion processes (I have tried only the translate-toolkit tools, and their output was flawed enough to convince me _not_ to use it) are likely to break the existing TMX. If the management system were evolved enough to output the same Java properties files, I am sure everybody would be happy. But, please, no more conversions than necessary.

To go back to the OOo processes, I have no doubt that a powerful management system available to the community is required. But in the end, why is there a need to produce .sdf files? Why can't we simply have HTML sets, like the NB project has, that we would translate with appropriately formed TMX files in appropriate tools?

My understanding from when I worked with the Sun Translation Editor (when we were delivered .xlz files, before STE was released as OLT) is that we had to use XLIFF _because_ the .sdf format was obscure. But in the end, the discussion we are having now (after many years of running in circles, apparently) revolves not around how to ease the translator's work but around how to ease the management.

If the purpose of all this is to increase the quality of the translators' output, then it would be _much_ better to consider a similar system that uses the HTML sets directly, because _that_ would allow the translator to spend much more time checking the translation in commonly available tools (a web browser...). How do you check PO/XLIFF/SDF files without resorting to hacks?

Keeping things simple _is_ the way to go.

Jean-Christophe Helary (fr team)

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]