Hi All
I think I have said it before, but I think we need to see this managed at the publication level. CKM currently stores translations as distinct assets. This has a number of advantages:

1) Translations can be added, reviewed and accredited asynchronously for the same archetype.
2) Translations can be updated independently of revisions.
3) Archetypes can be downloaded with only the languages required.

Cheers,
Sam

From: openehr-implementers-boun...@openehr.org [mailto:openehr-implementers-bounces at openehr.org] On Behalf Of Thomas Beale
Sent: Monday, 4 May 2009 9:02 PM
To: For openEHR implementation discussions
Cc: For openEHR technical discussions
Subject: Re: [Fwd: [JIRA] Created: (SPEC-302) Translations embedded in the ADL are not efficient and should instead use 'gettext' catalogs.]

Tim Cook wrote:

> On Thu, 2009-04-30 at 22:03 +1000, Thomas Beale wrote:
>
>> It is clearly true that with a number of translations the archetype will grow bigger, and initially (some years ago) I thought separate files might be better as well. But I really wonder if it makes any difference in the end - since, in generating the 'operational' (aka 'flat') form of an archetype that is for end use, the languages required (which might still be more than one) can be retained and the others filtered out. I don't think gettext would deal with this properly - the idea that an artefact can have more than one language active.
>
> I can only refer you to the "bazillions" of applications that use gettext. Browsers and GUI widgets everywhere are designed expecting gettext catalogs. Not using gettext means that every implementation has to develop its own filtering mechanisms instead of reusing proven existing technology. Or you could choose to develop an openEHR filtering specification, then develop browser interfaces and widget interfaces to match.

But my question was: if we want an archetype to retain 2 languages, e.g.
English and Spanish, out of the (say) dozen available translations, can gettext be made to do that?

>> The other good thing about the current format (which will eventually migrate to pure dADL + cADL) is that it is a direct object serialisation, and can be deserialised straight into in-memory objects (hash tables in the case of the translations).
>
> Hmmm, sorry, I don't get the point here. It seems to me you are saying that you pull all translations into memory, instead of letting the application decide which one it wants.

Well, that is the default; but depending on what 'application' we are talking about, this is quite likely what is wanted - e.g. if it is an archetype design tool that also manages translations. But I take your point - we probably should make it so that dADL can ignore some parts of an input file.

>> Anyway, I think that we need to carefully look at the requirements on this one, before leaping to a solution...
>
> Of course. That is why I suggested targeting the 2.0 version. There is a good chance that there will be knock-on effects (good or bad) on the RM (AuthoredResource, et al.) as well.
>
> I'd like to go back to a very basic question I have. What is the use of having the original language as (a specific) part of the archetype if it isn't meant to be the validation language? It seems to me that it is "the" expression of the original author for the construction of the archetype. Translations are a convenience for everyone else.

Not sure I understand the question, Tim - do you mean: is the original language used in validation? There are very few things that are linguistically dependent in the validation operation - only where regular expression constraints are used... can't think of any others offhand. The linguistic elements of the ontology section get used in the UI of course, and in documents, but that is for humans, not for computing.

- thomas
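[Editor's note] The "operational form" filtering Thomas describes - retain the languages required, filter out the rest - and the "deserialised straight into hash tables" point can both be sketched concretely. The following Python sketch uses a hypothetical in-memory shape for an archetype's language-dependent sections, loosely inspired by the ADL ontology (per-language term definitions); the names `term_definitions`, `at0000`, etc. are illustrative, not the actual openEHR RM.

```python
# Hypothetical in-memory form of an archetype's language-dependent data:
# a hash table keyed by language, then by term code. Illustrative only.
archetype = {
    "original_language": "en",
    "term_definitions": {
        "en": {"at0000": "blood pressure"},
        "es": {"at0000": "presión arterial"},
        "pt": {"at0000": "pressão arterial"},
    },
}

def filter_languages(archetype, keep):
    """Return a copy retaining only the requested languages.

    The original language is always kept, on the assumption that it is
    the authoring reference for the artefact.
    """
    keep = set(keep) | {archetype["original_language"]}
    out = dict(archetype)
    out["term_definitions"] = {
        lang: terms
        for lang, terms in archetype["term_definitions"].items()
        if lang in keep
    }
    return out

# Generate an 'operational' form that keeps English and Spanish only.
flat = filter_languages(archetype, ["es"])
print(sorted(flat["term_definitions"]))  # ['en', 'es']
```

The point of interest is that more than one language can remain active in the filtered artefact, which is exactly the property questioned for gettext above.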
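[Editor's note] On Thomas's question of whether gettext can keep two languages active at once: in practice, yes - nothing stops an application from holding one translation catalog per language and consulting several concurrently. A minimal Python sketch follows; the `make_mo` helper builds throwaway in-memory .mo catalogs purely so the example is self-contained (real deployments would load compiled catalogs from a locale directory with `gettext.translation()`), and the sample strings are invented.

```python
import gettext
import io
import struct

def make_mo(catalog):
    """Build a minimal GNU .mo file in memory from a {msgid: msgstr} dict.

    Illustrative only; normally .mo files are produced by msgfmt.
    """
    # Metadata entry (empty msgid) declares the charset for the parser.
    catalog = {"": "Content-Type: text/plain; charset=UTF-8\n", **catalog}
    keys = sorted(catalog)
    n = len(keys)
    entries, ids, strs = [], b"", b""
    for k in keys:
        kb, vb = k.encode("utf-8"), catalog[k].encode("utf-8")
        entries.append((len(kb), len(ids), len(vb), len(strs)))
        ids += kb + b"\x00"
        strs += vb + b"\x00"
    # Header: magic, version, count, originals offset, translations offset,
    # hash table size, hash table offset (hash table unused here).
    keystart = 28 + 16 * n
    valstart = keystart + len(ids)
    out = struct.pack("<7I", 0x950412DE, 0, n, 28, 28 + 8 * n, 0, 0)
    for klen, koff, vlen, voff in entries:          # originals table
        out += struct.pack("<2I", klen, keystart + koff)
    for klen, koff, vlen, voff in entries:          # translations table
        out += struct.pack("<2I", vlen, valstart + voff)
    return out + ids + strs

# Two catalogs active at once: one Translations object per language.
es = gettext.GNUTranslations(io.BytesIO(make_mo({"blood pressure": "presión arterial"})))
fr = gettext.GNUTranslations(io.BytesIO(make_mo({"blood pressure": "pression artérielle"})))

print(es.gettext("blood pressure"))  # presión arterial
print(fr.gettext("blood pressure"))  # pression artérielle
```

So the limitation, if any, is not in gettext's catalog model but in how an application chooses to install catalogs: the module-level `gettext.install()` convenience activates a single language, while holding explicit `Translations` objects (as above) allows several concurrently.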