Hi Jean-Christophe,

Jean-Christophe Helary wrote:

Rafaella,

Thank you very much for the comments.

I was a little confused because it seems that for each different community project SUN manages, there is a different way to localize :)

I know ... and I am really looking forward to having everything set up on the Pootle server for next time.....


I would like to know if it is possible to provide "us" (or at least me...) with the .sdf strings _before_ the current modification so as to be able to create a correct TMX file.

I cannot provide you with that, but as I said I can provide you with all the other strings that are flagged as finally translated, so that TMX files can be created from them....

I'll see what I can do and come back to you.

Rafaella


If you could create that TMX yourself and make it available, it would be even better. That TMX would contain the state of the corpus before the modifications (2.2.1) and would allow translators who work with TMX-supporting tools (including Sun's own OLT, or OmegaT, to name only the free ones) to work efficiently with the current files.

For your information, I decided to create a .pot file out of the .sdf so that I could be sure there were no "pseudo-translated" strings in French, and I created a "pseudo-TMX" from the original contents that I use to match every source string against.

This solution is better than hand-editing the whole file, but the whole thing would be even more efficient if, instead of a pseudo-TMX, I had the real thing based on the 2.2.1 contents.
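As an illustration of the workaround described above, here is a minimal sketch of how such a "pseudo-TMX" could be generated: each extracted English string is paired with itself as the target, producing a TMX 1.4 file that a CAT tool can match against. The function and file names are hypothetical (the actual conversion in the thread was done with other tools, such as the Translate Toolkit); this is only a sketch of the idea.

```python
import xml.etree.ElementTree as ET

def build_pseudo_tmx(strings, srclang="en-US", tgtlang="fr-FR"):
    """Build a minimal TMX 1.4 tree where each English string is paired
    with itself as the target, so a CAT tool can match source strings.
    (Hypothetical helper; a sketch, not the tooling used in practice.)"""
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", {
        "creationtool": "pseudo-tmx-sketch",   # assumed tool name
        "creationtoolversion": "0.1",
        "segtype": "sentence",
        "o-tmf": "sdf",
        "adminlang": "en-US",
        "srclang": srclang,
        "datatype": "plaintext",
    })
    body = ET.SubElement(tmx, "body")
    for text in strings:
        tu = ET.SubElement(body, "tu")
        for lang in (srclang, tgtlang):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            seg = ET.SubElement(tuv, "seg")
            seg.text = text  # target = source: a "pseudo" translation
    return ET.ElementTree(tmx)

if __name__ == "__main__":
    # Example usage with made-up strings; writes a small TMX file.
    tree = build_pseudo_tmx(["Open a file", "Save the document"])
    tree.write("pseudo.tmx", encoding="utf-8", xml_declaration=True)
```

A real TMX built from the 2.2.1 source/target pairs would simply replace the second `seg` with the actual French translation, which is exactly what the request above is asking Sun to provide.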

Do you think it is possible to get that from SUN?

Regards,
Jean-Christophe

On 18 juin 07, at 17:03, Rafaella Braconi wrote:

Hi Jean-Christophe,

in the Q&A session you may find the answer to your question already:
http://wiki.services.openoffice.org/wiki/Translation_for_2.3#Q_.26_A

Also, please see my comments inline:

Jean-Christophe Helary wrote:

I realized a few days ago that the .sdf (at least for the fr project) for the coming 2.3 contains weird stuff, without much of an explanation as to how to differentiate the different parts.

1) in some places the target part is made of what would be a "fuzzy" in PO, but without any specific indication of its fuzzy status


What you see is the previous translation. This means that in the meantime the English text has been updated, and since in most cases the old translation contains terminology which can be reused for updating the string, we decided to keep the previous translation as a sort of *suggestion* instead of overwriting it with the English text.

2) in some places it seemingly contains exact matches


Sometimes the English text has been updated in a way that is not translation-relevant. For example, a typo in the English text has been corrected. Since the authors may not necessarily know whether a change is translation-relevant or not, they flag the updated English text as updated, and it gets extracted among the *changed* strings when we prepare the files to send to translation.

3) in some other places it contains the source English string


This happens when the English text is completely new, which means this is the first time the string gets translated.


In the cases where the fuzzy is present, the reference links are sometimes totally different, which means that besides the actual editing of the translation, it is also necessary to edit the links.


Yes, in this case the translation needs to be updated, including links, tags, variables, etc....


I wonder about the utility of such a mechanism, especially since there is no way to differentiate between the three patterns in the .sdf itself.


The utility is that in many cases the previous translation contains terminology that can be reused to update the text....


It seems to me it would have been faster to _not_ insert fuzzies at all and to provide a complete TMX of the existing OOo contents instead.


They are not fuzzies.....


Right now, if one wants to create a TMX out of the .sdf files (either with the Translate Toolkit or with the Heartsome translation suite; I suppose there are other ways too), it is impossible to get the source strings corresponding to the fuzzy targets, and thus the matching in a TMX-supporting CAT tool will not be of much use.


You cannot create a TMX out of the sdf files provided, because the translated strings contained in them are not final translations....


Is there still a way to get SUN to provide the l10n teams with a TMX of the existing contents, similar to what we can get through the SunGloss system?


We could provide you with an sdf file containing the final translations, if that helps....

Rafaella


(FYI, the NetBeans team is provided with TMX, and that greatly enhances the localization process.)

Jean-Christophe Helary


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
