Archetypes at OMG

2013-05-22 Thread Thomas Beale

Some of you may be interested to note that a new OMG RfP 'Archetype 
Modelling Language (AML)' will be under discussion at the June OMG 
meeting.

- thomas




How to model this XML type as Archetype?

2013-05-15 Thread Thomas Beale

Bjørn,

the main question is whether these terms exist in a terminology outside 
the archetype. If they do, a binding is appropriate, as Ian shows below. 
However, I don't think his example would parse as written; it should be:

    term_bindings = <
        ["AccidentSiteTerms"] = <
            items = <
                ["at0073"] = <[AccidentSiteTerms::V]>
                ["at0074"] = <[AccidentSiteTerms::T]>
            >
        >
    >

In a better version of the parser, it would allow:

    term_bindings = <
        ["AccidentSiteTerms"] = <
            items = <
                ["at0073"] = <[AccidentSiteTerms::V|V - Vei, gate, fortau, gang-, sykkelvei|]>
                ["at0074"] = <[AccidentSiteTerms::T|T - Other transport area|]>
            >
        >
    >

i.e. using the standard IHTSDO string form of a coded term, which looks like

code|natural language term|

Normally this should be accepted in any location where just the 'code' 
is accepted.

- thomas


On 15/05/2013 01:11, Ian McNicoll wrote:
> Hi Bjorn,
>
> Two options:
>
> 1. Just parse the term: since you know that you are using a controlled 
> terminology, the first letter will always be the key.
>
> 2. Alternatively, create a term binding, since each value set is 
> essentially a mini-terminology with a keyed 'code', i.e. 'V', 'T' etc. - 
> or do these come from a formal terminology, such as ICD-X?
>
> e.g.
> term_bindings = <
>     ["AccidentSiteTerms"] = <
>         items = <
>             ["at0073"] = <[AccidentSiteTerms::V::V - Vei, gate, fortau, gang-, sykkelvei]>
>             ["at0074"] = <[AccidentSiteTerms::T::T - Other transport area]>
>         >
>     >
> >
>
> Ian
>

-- 
Thomas Beale
Chief Technology Officer, Ocean Informatics <http://www.oceaninformatics.com/>
+44 7792 403 613
Specification Program, openEHR <http://www.openehr.org/>
Honorary Research Fellow, UCL <http://www.chime.ucl.ac.uk/>
Chartered IT Professional Fellow, BCS <http://www.bcs.org.uk/>
Health IT blog <http://wolandscat.net/category/health-informatics/>
View Thomas Beale's profile on LinkedIn <http://uk.linkedin.com/in/thomasbeale>



Archetype meta-data - moving forward

2013-05-06 Thread Thomas Beale
On 06/05/2013 10:47, Diego Boscá wrote:
> In fact, 'license' could be translated, but translating 'copyright'
> makes less sense
>

Clearly we are not in the business of creating translations of things 
like the CC licences ourselves, which are the licences used for 
archetypes (at least openEHR ones); we would need to rely on the 
translations created by the creativecommons.org community. This CC page 
talks about translating licences.

It's not obvious to me on a brief look, but I would expect any given 
canonical licence URL, like http://creativecommons.org/licenses/by-sa/3.0/, 
to have equivalents in other languages, e.g. 
http://creativecommons.org/licenses/by-sa/3.0/es for Spanish.

I also suspect that for a CC (or other) licence written in English, with 
'international' as the jurisdiction, English is actually the official 
language of the licence for all users, on the assumption that any court 
that might process a case based on one of these licences would be an 
international court with English as its working language (as the Hague 
ICC has). The only purpose of translations, I think, is to help users 
whose mother tongue is not English to understand the licence more easily.

So we either treat the licence field as non-translated, include just the 
canonical (EN) URL, and assume the user will go and find a translation if 
they need one - I think this will be easier - or we treat it as a 
translatable field, in which case we probably have to work out a correct 
URL for each translation, which might just be the 'en' one for languages 
in which the CC licence is not yet available. The latter seems an 
annoyance with no real gain.
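
For illustration only (the field name here is indicative, nothing more), 
the non-translated option reduces to a single canonical entry of roughly 
this kind:

    other_details = <
        -- sketch: one canonical (EN) licence reference, no per-language variants
        ["licence"] = <"CC-BY-SA 3.0 - http://creativecommons.org/licenses/by-sa/3.0/">
    >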

- thomas




Archetype meta-data - moving forward

2013-05-03 Thread Thomas Beale
On 03/05/2013 11:28, Diego Boscá wrote:
> By the way, we should use the momentum to also revamp the available
> metadata. A few ideas:
>
> - Move 'copyright' from language specific information to general
> metadata (It's not being really translated at the moment).
> - Move 'references' from other_details to general metadata (It's
> important enough IMHO).
> - Information about date of validation, validity time and who validated it.
> - RM version this archetype was based on.
> - etc.
>

Personally I would agree with all of the above. I have already added the 
rm_release to the ARCHETYPE class in the AOM (not yet pushed up). For the 
others, I suggest we open a wider discussion so we can do this exercise 
with a small amount of discipline, while still staying in crowd-sourcing 
mode (is that possible? ;-)

To that end, I added a child page, dedicated to meta-data, under the 
Knowledge Artefact Identification page. I added some tables where we can 
potentially review the current model and propose changes. If people think 
this isn't sufficiently detailed, feel free to rework it in another way.

- thomas




About openEHR BMM

2013-05-02 Thread Thomas Beale
On 02/05/2013 08:36, David Moner wrote:
> Hi,
>
>
>
> Exactly, that was one of the original ideas: at least add the RM 
> version information to the archetype header. As you say, that only 
> indicates the version used to create the archetype and not the 
> compatibility with other versions of the RM (forward or backward). 
> That could even be added as metadata: rm_compatibility = 
> <"openEHR-RM_1.0.0", "openEHR-RM_1.0.1">
> Since we can have all the RM versions loaded in the tools, it should be 
> quite easy to check validity against all of them automatically.

I would suggest we don't include this list in the archetypes themselves, 
because it can easily change as more testing is done - and that would 
mean re-versioning and releasing archetypes as this happens, even though 
nothing has changed in the archetype. So I would suggest instead that we 
conceive of some functions in a registry service that provide this 
information.

>
>
>
> For me those visualization characteristics are more about preferences 
> of each tool/user. So it's fine for me to separate them in a different 
> place, where they can be modified without changing the model itself, 
> whose definition should remain immutable.

Maybe 'tool preferences' is the way to go.

>
> That said, I must say that we are not big fans of BMM :-)
> While we agree that the current alternatives (i.e. XMI) are not usable 
> in practice nowadays, we find it extremely improbable that BMM will gain 
> wide acceptance outside the openEHR world. I doubt that we will ever see 
> the HL7 CDA model expressed in BMM, for example. So we decided to 
> support it as another option, but we still hope that the industry finds 
> a way to agree on a common usable format for defining models.
>

Well, as I said earlier, I built it over a weekend because none of the 
current options were palatable. I still don't see a clearly better 
solution than BMM for our current model interoperability needs - today. 
My feeling is that something usable may emerge out of Ecore; right now, I 
am still not sure it is semantically powerful enough.

- thomas




About openEHR BMM

2013-05-01 Thread Thomas Beale
On 01/05/2013 14:48, pablo pazos wrote:
> Hi Thomas, having a small spec would be great, thanks!
>
> BTW, does anyone use XML representation of UML diagrams to process 
> class models?

wait time <= 5 days

- thomas




About openEHR BMM

2013-05-01 Thread Thomas Beale
On 01/05/2013 10:28, Bert Verhees wrote:
>
> Let me explain my problems, what I wish for, what I think about doing 
> about it, and then what my problem is with the BMM solution.
> -
> I have a problem with both archetype editors; I have explained a few 
> times on this list why.
> Both change archetypes while loading them, e.g. one likes to add 
> node ids to data values, and the other does not like that.
> There are some more incompatibilities between the two; I forget the 
> details.
> Also, one is not able to create demographic archetypes, which is another 
> problem.
> -
> Neither is configurable.
> I would like an archetype editor which can be fed some RM definition, 
> configured to use it, and then is able to create archetypes following 
> that definition.
> -
> That does not exist, so I would like to build it myself (when I have 
> time). It is not very difficult; there is already code from LiU which 
> can be used for understanding, so half of the wheel is already invented.
> I think I need three months to do it. I want it to build against any RM, 
> depending on the feed file.
> Like LinkEHR, I would build it in Java; in an Eclipse framework that is easy.
> The archetype editor should export to ADL, but also to an XML schema 
> language, probably Relax NG.

All these criticisms are fair, and need to be addressed. I am hoping 
that we can get some combined effort from various vendors and others to 
work on a more coherent new generation of tools. The tool space is 
changing a lot, and it may be that the strategy is to target Eclipse 
with a group of plug-ins that together provide a good-quality, integrated 
modelling experience. I don't believe this is that hard to achieve - 
most of the difficult algorithms have been worked out in existing tools, 
so a fair bit of logic can be ported or re-used. Expect some 
announcements on this in the near future, and be prepared to contribute!

> -
> Until now LinkEHR used XML Schema, which is good enough for expressing 
> a master RM. I am satisfied with any other way too - XMI, for example.
>
> I know there is a lot of bad XMI software; this is because vendors try 
> to put all kinds of things into it which should not be in it, and in 
> this way they make it incompatible.
> But XMI itself is a well-defined standard from OMG. It is also a very 
> successful standard. I think it must be possible to validate and 
> use it in a standardized way.

Actually, this has not been true to date. I happen to be in communication 
with some of the relevant OMG people, and XMI was recognised as a 
'historically serious problem' by OMG at a recent board meeting, and is 
being treated as almost an emergency. This is because many tools did not 
respect the spec, made implementation choices where the spec was lacking, 
and possibly added things of their own as well. My impression is that 
from now on, i.e. 2013 going forward, things should improve reasonably 
rapidly. But prior to now, XMI has been a nearly unusable format for 
practical purposes.

> I never studied the software-vendors well, but I think there must be 
> good XMI-producing software. You mention BOUML, I know the product, it 
> has  a plugin-interface, at least i
> -
> My problem is, when you are going to use software which can only run 
> from Windows and you need to create parsers to understand the output 
> (like LinkEHR has published a grammar-file), it will be a stony road, 
> with hard to solve bugs and incompatibility situations. We have seen 
> that until now, only two archetype-editors on the market, and after 
> five years, still not compatible.

You mean EA I presume? I think the next step is to see if we can 
replicate the EA plug-in in other UML tools. BOUML for example runs on 
all 3 major platforms, and Rational Software Architect must as well, 
since it's Eclipse based. So this is just a piece of work for someone to 
do. I am sure Michael will publish his plug-in for use by others.

>
> As I understand you are planning to use a niche definition, which can 
> only be created by one vendor (Enterprise Architect) and let important 
> software (archetype-editors) rely on that.

It's what we have done so far, because at the point when we needed a 
solution there was simply nothing working that we could just use. As for 
a future plan - there isn't a defined plan yet; I think it is up to 
people here to help define it. The two things I think are important are 
a) that the semantics of the BMM schemas can be supported by other 
solutions and tools, and b) that a schema file is human readable and 
writable. We might be able to migrate to some Ecore syntax, and/or XMI; 
I personally have not had the time to go and look at this.

One thing the BMM format has achieved, which we could not have done in 
any other way, is to connect the following tools together:

  * at least one UML tool (so far: EA)
  * the ADL 1.5 Workbench
  * the LinkEHR tool
  * tools that Intermo

About openEHR BMM

2013-05-01 Thread Thomas Beale
On 30/04/2013 23:45, Bert Verhees wrote:
>
>>
>> Michael van der Zel at Results4Care put together a great little 
>> plug-in for Enterprise Architect 
>
>
> EA is a stupid product: it cannot be used in an environment based on 
> international standards, and even when used on Mac or Linux it depends 
> on Internet Explorer and Microsoft Database Access Extensions. They are 
> probably too lazy to develop their products in a vendor-independent way.
>
> If in this way EA gets the status of preferred third-party tooling for 
> modelling in the openEHR context, I think that is a very bad evolution.
>

Hi Bert,

we're not in the business of endorsing UML products, but the UML 
situation is always murky. In theory anyone should be able to share 
models saved in XMI format. Historically that never worked - each tool's 
XMI was broken in different ways, the XMI specification itself is 
unclear and vastly over-complicated (as is the underlying meta-model for 
UML).

So let's say the openEHR community would like some proper computable UML 
models... what do we do? The most recent attempt was done in a tool 
called BOUML, which was free and is now a pay-for tool (about EUR 50). 
Its XMI and code generation is superior, from what I can work out, and 
it has good support. So I took the work of Eric Browne, who built most 
of the RM in that tool, finished it (more or less), and you can see the 
XMI files online in the GitHub reference-models repository.

This XMI file was used as the original input to EA (which more people 
seem to use, e.g. in CIMI), and it nearly worked. There were some errors, 
and the BOUML vendor fixed those very quickly. EA's XMI import was the 
main problem, however; to their credit, Sparx have also made fixes to 
improve it over the last 12 months.

I think the Rational tool was also able to import this XMI. So it 
appears that we are getting closer to XMI becoming a trustworthy format, 
and if we continue to publish an XMI file from a reliable tool, and put 
pressure on vendors whose XMI doesn't work (along with everyone else in 
the world who is in a similar situation), the various tools should 
eventually converge on being able to talk the same XMI.

I think that's about the best we can do. Do you have other suggestions?

- thomas

BTW I will regenerate the online XMI with the latest BOUML tool.



About openEHR BMM

2013-04-30 Thread Thomas Beale
On 30/04/2013 18:30, Diego Boscá wrote:
> I think Thomas created it from scratch. There is a page on the wiki
> discussing it
> (http://www.openehr.org/wiki/display/dev/Machine-readable+model+representations+of+openEHR),
> but we mostly studied the BMM files included with the archetype
> workbench in order to understand it.

yep. That link explains why I did it. Simple summary: XMI is a horror 
and hardly works between tools that implement it (and there is no hope 
of hand-writing an XMI schema). And Ecore was broken for generic types. 
We might converge to some Ecore/EMF format at some point, but right now, 
BMM is a nice lightweight format, and works ok.

Michael van der Zel at Results4Care put together a great little plug-in 
for Enterprise Architect that traverses a UML model in memory and pumps 
out a BMM schema for it. So now we have a nice way of having a primary 
UML model expression and a generated tool-consumable format (BMM 
schemas), which will help tool-chain components to communicate - right 
now the ADL Workbench and LinkEHR can consume it.

The converter is pretty good right now, but David Moner's group has 
obviously found a few more bugs than I found, which is good - hopefully 
we can converge on a very tight version of the EA converter soon. Then 
the same thing can be done with openEHR, 13606, any other model in EA, 
which means we have a way of representing a RM in UML, and driving 
archetype tools from that.

I'm just putting together a GitHub repo now for it on which I'll post a 
spec, the class models I use (in UML) and pointers to every 
implementation we can find.

- thomas




About openEHR BMM

2013-04-30 Thread Thomas Beale
On 30/04/2013 16:33, David Moner wrote:
> Hello all,
>
> We have just implemented the support of Basic Meta Model files (BMM) 
> in LinkEHR Editor as a format to import new reference models into the 
> tool.
>
> First of all, I think that it is necessary to clarify some erroneous 
> ideas or misunderstandings about LinkEHR that have been recently 
> published. Until now, LinkEHR used XML Schema as an input format to 
> define reference models. It is analyzed to create the internal 
> information structures needed to edit archetypes based on that model. 
> Internally, LinkEHR follows a pure implementation of the AOM 1.4 model 
> so that the only limits of the tool are what can be expressed as an 
> archetype.
>
> The decision to support XML Schema as an input format is based on the 
> fact that many reference models are only or normatively expressed in 
> that way (for example HL7 CDA, HL7 CCD or CDISC ODM). This has nothing 
> to do with the discussion about the expressiveness of the XML Schema 
> language, but just a solution needed to support some daily used and 
> well established models such as CDA.
>
> That said, we decided to implement the support of BMM definitions as 
> an additional input format to XML Schema, in order to extend the 
> possibilities of the tool. That implementation took around three days 
> and the only problems came from the interpretation of the BMM format. 
> Some doubts arose and we want to share them for discussion.
>
> - Schema identification. This is just a curiosity. The BMM 
> identification includes the following information. It is curious that 
> here the RM release is required as part of the identification schema 
> (which is completely logical), but it is not used for the generation 
> of the archetype identifier or archetype header to make its 
> localization safer, as we have requested some time in the past 
> (http://lists.openehr.org/pipermail/openehr-technical_lists.openehr.org/2011-April/005943.html).
> rm_publisher = <"openehr">
> schema_name = <"rm">
> rm_release = <"1.0.2">

If we see the multi-axial identifier not as a fixed string but as a 
computed output from a bunch of identifying meta-data, as per the recent 
knowledge identification proposal update, then we should consider adding 
the rm_release to those items. It is then just a case of defining a 
function that includes it in one of (possibly many) ids. I didn't think 
to include it in this draft, but we can do that.

I still don't believe it's useful in the primary multi-axial ontological 
identifier, because there is no certain computational relationship 
between an archetype and a given release of the RM. Some archetypes will 
be valid for many releases, others for only one. But having it in the 
archetype meta-data is a good idea, because that nails down at least the 
release at which the archetype was created. Then it can be interrogated 
by some sort of compatibility testing tools.
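
As a sketch only (the field names are indicative and not settled), the 
identifying meta-data could then carry something like:

    -- sketch: identifying meta-data with the RM release recorded alongside it
    rm_publisher = <"openEHR">
    rm_closure   = <"EHR">
    rm_class     = <"OBSERVATION">
    concept_id   = <"blood_pressure">
    rm_release   = <"1.0.2">    -- RM release the archetype was authored against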

David, can I suggest you add a comment in the feedback table on that 
page, so I don't forget to do this?

You should probably also report a relevant summary of these points on 
the CIMI list and/or to Michael van der Zel, so he can fix his generator.

>
> - Order of the properties. It is not specified if there is an order of 
> appearance of all reserved words and sections of the BMM. Depending on 
> this, the implementation strategy of the parser varies. Is the order 
> relevant? We assumed that it is relevant for the header sections, but 
> not for the definition of the classes.

Good point. My default assumption in these kinds of things is to order 
the properties in the way we want them. In my software, I consume those 
structures straight into hash tables, which, in Eiffel, remember the 
chronological input order. I'll have to check whether my visual rendering 
respects this or not... anyway, it seems to me that a BMM specification 
should say that the order found in the input schema is intended to be 
significant.

>
> - Cardinality property of Single attributes. Testing the CIMI BMM we 
> have found several places where a P_BMM_SINGLE_PROPERTY had a 
> "cardinality" property defined. We interpreted that as an error, since 
> a monovalued attribute has no cardinality.

that's most likely because Michael van der Zel's generator is pulling 
this info straight off UML structures (or whatever EA's internal 
representation is) and UML is pretty dumb - it thinks everything has a 
cardinality. I had not actually noticed this, so it means my tools at 
least are just ignoring this (I use a parse => DOM-tree => object 
structure chain, and anything in the DOM-tree that doesn't exist on the 
object just gets ignored).
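
To illustrate the distinction (attribute names are from memory and should 
be checked against the published schemas - treat this purely as a sketch):

    -- single-valued attribute: no cardinality
    ["magnitude"] = (P_BMM_SINGLE_PROPERTY) <
        name = <"magnitude">
        type = <"Real">
        is_mandatory = <True>
    >

    -- container attribute: this is where cardinality belongs
    ["items"] = (P_BMM_CONTAINER_PROPERTY) <
        name = <"items">
        type_def = <
            container_type = <"List">
            type = <"ELEMENT">
        >
        cardinality = <|>=0|>
    >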

>
> - Is_abstract as string. Also at the CIMI model we found several 
> definitions as is_abstract = <"True">. We interpreted it as an error 
> since it should be a boolean value without double quotes.

Also correct. 
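
For clarity, the intended form is a Boolean literal, e.g. (class name 
here purely for illustration):

    ["DATA_VALUE"] = <
        name = <"DATA_VALUE">
        is_abstract = <True>    -- Boolean literal, not the string <"True">
    >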

New ADL/AOM proposals to solve some old problems

2013-04-29 Thread Thomas Beale
On 29/04/2013 09:01, Mikael Nyström wrote:
>
> Hi Tom,
>
> Is the intention that the new data type TERMINOLOGY_CODE can also 
> contain a post-coordinated code, so that it can, for example, contain an 
> expression in SNOMED CT compositional grammar? (See 
> www.snomed.org/tig?t=rfg_expression_scg for more details 
> about SNOMED CT compositional grammar.)
>

Yes, exactly. Technically I suppose it should be called 
TERMINOLOGY_CODE_PHRASE or TERMINOLOGY_CODE_STRING or something like 
that. It's the same idea as the openEHR CODE_PHRASE; the point here is 
to have an AOM type that is minimal, can represent a basic 
constrainable terminology code, and is easily mapped to actual RMs.
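
Purely as a sketch (the concept ids below are placeholders, not real 
SNOMED CT content), the same string slot could then carry either a 
pre-coordinated code or a whole post-coordinated expression:

    -- pre-coordinated code
    [SNOMED-CT::1234567]
    -- post-coordinated SNOMED CT compositional grammar expression
    [SNOMED-CT::1234567 |some procedure| : 2345678 |priority| = 3456789 |emergency|]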

- thomas




Updated ADL, AOM 1.5 and new ODIN specifications

2013-04-26 Thread Thomas Beale
On 26/04/2013 04:17, Shinji KOBAYASHI wrote:
> Hi Thomas Beale,
>
> First, I am very sorry about the missing link:
> http://www.rugson.org/pdf/PSRCPattaya2012.pdf
> In this presentation, serialisation formats were compared on their
> features and metrics, such as size and (de)serialisation performance.
>
> Second, to address binary data in the ODIN format, I think using the
> typing system might be good, e.g.:
>   my_binary_data = (Binary:Base128)<211234blurblur>
>

That's not a bad idea at all - I need to think about that.

- thomas




New ADL/AOM proposals to solve some old problems

2013-04-25 Thread Thomas Beale
On 25/04/2013 18:44, Diego Boscá wrote:
> 2013/4/25 Thomas Beale :
>> Consider these dichotomies:
>>
>> OWL (readable) v RDF (hideous)
>> JSON (simplistic, but readable) v XML (hard to read, tricky inheritance
>> model, tricky containment semantics, ...)
>> Ruby / Python (readable, according to the young generation at least ;-) v C++
>> (much harder than it should be)
>> Ecore syntax (human readable and computable) v XMI (no need to say anything
>> here).
>>
>> One thing we can learn from this is that where clear abstract syntaxes are
>> not found, there you find confusion.
>>
> "readability" seems like a too subjective measure, and depends mostly
> of the person/program writing it on the first place.

Nevertheless, the unreadable examples in the list above are all 
infamous for being difficult to learn and understand, for their 
complexity, and for being things everyone wants to (and eventually does) 
replace. It is not by accident that this happens.

>>
>> This is why I can agree with the second point completely: There you
>> are making ADL better, more powerful.
>> But I see a problem with the first point as it still requires an
>> external definition of the 'mappings' between how we understand codes
>> in each one of the standards (and which information we can constraint
>> about them).
>>
>>
>> well that's true, but it's already true for types like Date, Time, DateTime
>> and Duration.  Note that a Datetime with timezone has 7 pieces of
>> information in it, and a lot of implied validity rules. Is it a leaf type or
>> a complex type? We just use ISO8601 strings for all of these, and let other
>> tools work out the obvious mappings between various RMs with TS (HL7),
>> DATE/TIME types (openEHR), XSD gXXX types (FHIR), and so on.
>>
> Dates are mostly represented as strings with more or less
> restrictions. The classes to represent the codes are more complicated
> than that (codes, terminology, mapping(s), qualifiers, etc)

Only if you think it has to be a class model. But in fact, 99% of all 
codes on the planet are just single codes. A small number have some 
qualifiers / modifiers, and the standard way to handle those in both the 
ontology and terminology communities is with different kinds of syntax - 
either OWL-based or IHTSDO-based. Class models always fail in this area, 
because they can never predict what new things terminology code phrases 
will need to be able to represent.

I am not saying that you could not keep using the explicit CODED_TERM 
model-based approach as well, just that it gets in the way 99% of the time.

>>
>>
>> I'm not exactly sure what constraint you want to express here: can you be
>> more precise?
>>
> Have a template or specialized archetype that the codes in my target
> system will have exactly one mapping (from a standard terminology to a
> local one, for example)

Well, the normal way to do that is with terminology bindings. The 
example quoted below would be a pretty dangerous thing to do, especially 
in health, and in any case it would be hard to find any real-world 
example that could look like it, because LOINC and SNOMED CT don't really 
overlap or code for the same things at all. The better way to do it is 
definitely with bindings.

>
> Sure, you can make alternatives of coded_text, the domain type here is
> the codePhrase...
>
> I think this could be a perfect valid example (ignore random codes)
>
> defining_code existence matches {1..1} matches {
>  [SNOMED-CT::
>  123456,
>  11234561,
>  1002123456,
>  61234563,
>  98752;
>  233233]
>  [LOINC::
>  72693-5,
>  1234-5,
>  3254-8,
>  6548-1,
>  44563-7;
>  3254-8]
>  [local::
>  at1000,
>  at1001,
>  at1002,
> 

New ADL/AOM proposals to solve some old problems

2013-04-25 Thread Thomas Beale
On 25/04/2013 18:52, Diego Boscá wrote:
> You can generate operations to deal with domain types, but then AQL
> would be openEHR specific (you can call it OQL then). What I say is

Diego,

there is nothing openEHR-specific today in AQL, and allowing more 
complex primitive types like dates or codes or URIs doesn't change that. 
It will work just as well with any reference model from any industry. So 
I don't get your point here.

> that generating a path to specify a filter (and accessing it) is
> direct when the domain type has been expanded, and not so easy if you
> take it as it is. If you get the expanded path each time then the user
> won't be able to tell where the AQL path comes from (it'll have
> attributes he doesn't know about). I only see disadvantages working
> with domain types in AQL.
>

but in that case you should be arguing that we should remove the 
Date/Time types and URIs etc. It's true, we can just limit ourselves to 
Integer / Real / String / Character / Boolean / Binary, but... why? It's 
just making life needlessly difficult as far as I can see.

- thomas




Updated ADL, AOM 1.5 and new ODIN specifications

2013-04-25 Thread Thomas Beale

Shinji-san,

thanks for the feedback.

On 25/04/2013 15:39, Shinji KOBAYASHI wrote:
> Hi Thomas Beale,
>
> My comments:
> 1) Page 33, A2
>   JSON is not 'Java Simple Object Notation'; it stands for JavaScript
> Object Notation (http://www.json.org/).

oops, getting old, memory going

> 2) How to encode binary data?
>   In order to serialise binary data in a text format, an encoding
> system such as Base64 is needed. Which encoding system will you adopt
> for ODIN?

I wonder why not Base 128, since ODIN already assumes UTF-8 strings. The 
real question is: how do we detect that we have some binary data?

ODIN works on the idea that every leaf type is inferrable syntactically. 
In theory we could just write

    my_binary_data = <...>

with the raw encoded characters between the delimiters, some of which 
will be non-printing characters in the 0-127 range. That wouldn't be a 
problem in itself, but it could be a problem to distinguish from 
Integers, since some binary-encoded data might come out as

    my_binary_data = <952>

So I think some other marker is needed in ODIN. Maybe something simple like

    my_binary_data = <#952#>



> 3) FYI: these presentation slides show a comparison of seven
> serialisation techniques:
> XML, JSON, Java binary serialisation, Protocol Buffers, Apache Avro,
> Protostuff, and Rugson.

I guess you meant to include a link? I found this 
<http://www.rugson.org/techs/comparison/> at Rugson.org.

> I think one of the best features of ODIN compared with these serialisation
> techniques is that it has a strict, object-oriented type system.
>
>

I have to admit, I don't know how any data format without dynamic type 
markers can be used with real data...

- thomas



New ADL/AOM proposals to solve some old problems

2013-04-25 Thread Thomas Beale
On 25/04/2013 12:21, Diego Boscá wrote:
> PPPS: How you define an AQL filter over a domain type?
>
>

How do you define an AQL filter over a date time? Well, ok, it's not 
quite as simple as that. With a coded term type (in particular) you want 
operators like 'in set', 'in subsumption', and 'in subset'.

But why do we see this as being specific to health?

- thomas




New ADL/AOM proposals to solve some old problems

2013-04-25 Thread Thomas Beale
On 25/04/2013 11:47, Diego Boscá wrote:
> As you know, I'm not a big fan of domain types, so take my comments
> with a grain of salt ;)
> I understand that back in the day when archetypes were hand crafted
> domain types could serve a purpose. But in my opinion ADL should not
> be written by hand nowadays. Tools should be the ones that 'hide' the
> 'verboseness' and provide the user with a simple interface to simulate
> domain types if you want/need that. Also, the difference in file size
> is negligible (if archetypes pass from 16kb to 20kb I wouldn't worry
> that much...).
> If you ask me I would get rid of them completely and make ADL
> completely model agnostic.

I'm not worried about size, up to a point. But there are some truisms 
about formalisms - the main one is that if the context-free grammar of a 
formalism has only a complicated way to do something, then any 
structural representation will also be complicated. Additionally, if it 
is complicated to express a simple thing in the formalism, it is likely 
that few users or developers understand it clearly.

Consider these dichotomies:

  * OWL (readable) v RDF (hideous)
  * JSON (simplistic, but readable) v XML (hard to read, tricky
inheritance model, tricky containment semantics, ...)
  * Ruby / Python (readable, according to the young generation at least ;-)
v C++ (much harder than it should be)
  * Ecore syntax (human readable and computable) v XMI (no need to say
anything here).

One thing we can learn from this is that where clear abstract syntaxes 
are not found, there you find confusion.


>
> This is why I can agree with the second point completely: There you
> are making ADL better, more powerful.
> But I see a problem with the first point as it still requires an
> external definition of the 'mappings' between how we understand codes
> in each one of the standards (and which information we can constraint
> about them).

Well, that's true, but it's already true for types like Date, Time, 
DateTime and Duration. Note that a DateTime with timezone has 7 pieces 
of information in it, and a lot of implied validity rules. Is it a leaf 
type or a complex type? We just use ISO 8601 strings for all of these, 
and let other tools work out the obvious mappings between various RMs 
with TS (HL7), DATE/TIME types (openEHR), XSD gXXX types (FHIR), and so on.

Proposing the idea of a 'terminology code' made up of a terminology id 
and a code or code-phrase (as a string expressed in e.g. the SNOMED CT 
Compositional grammar) as a built-in type doesn't seem a great leap in 
the semantic age.

>   With this new syntax, can we constraint mappings between
> codes? ( How do I say that I don't want to allow the mappings in
> certain coded text?)

I'm not exactly sure what constraint you want to express here: can you 
be more precise?

>   and what about the code qualifiers? What if your
> RM defines another kind of attribute for codes interesting to be put
> into the archetypes but not supported by this code syntax?
> If both visions (codes as a type and codes as a full structure)
> coexist then we have the same problem as we have now (or worse).

well in openEHR we have always modelled code terms as syntax, not a 
complex model of qualifiers. See the CODE_PHRASE type.

>
>
> PS: BTW, by definition a leaf constraint type (the new proposed
> 'C_TERMINOLOGY_CODE' or whatever) does not have node id, I don't see
> how one would be able to define alternatives of codes from different
> terminologies or specialize that...
> PPS: ...which is the exact same problem that domain types have (as
> they also lack node id)

It depends on what you are trying to do. The 'normal' thing that 90% of 
archetypes need to do is this:
    ELEMENT[at0021] occurrences matches {0..1} matches {    -- Certainty
        value matches {
            DV_CODED_TEXT matches {
                defining_code matches {
                    [local::
                    at0022,    -- Suspected
                    at0023,    -- Probable
                    at0024]    -- Confirmed
                }
            }
        }
    }

If this is written without the 'custom syntax' then you have:

    ELEMENT[at0021] occurrences matches {0..1} matches {    -- Certainty
        value matches {
            DV_CODED_TEXT matches {
                defining_code matches {
                    CODE_PHRASE matches {
                        terminology matches {
                            TERMINOLOGY_ID matches {

New ADL/AOM proposals to solve some old problems

2013-04-25 Thread Thomas Beale
On 25/04/2013 10:20, Erik Sundvall wrote:
> Very interesting thoughts Tom!
>
> My initial impression of the proposal is very positive. If I 
> understand things correctly this will enable shorter and more 
> readable serializations not only in ADL but also in other formalisms.
>
> If we consider ADL to be a DSL mainly
> targeted at constraining health-related RMs, then simplifications
> towards that goal are welcome.

Well, actually I am thinking of ADL as a domain-independent formalism. 
Today we do have some health-IT-oriented extras, but those would 
disappear with the proposal I am making here, replaced by a) a built-in 
type (essentially a special kind of string) representing a terminology 
code, and b) the tuple constraint capability. Although the former may 
look health-IT specific, I would argue that it is applicable in any 
industry where any kind of coding is going on - and that surely has to 
be increasing.

>
> The only potential catch is implementation issues. Have you already 
> tried implementing a parser for this in some language? (If so, please 
> provide a link.) I guess the suggestion could make some implementation 
> parts easier and some a bit trickier.

I haven't done anything in the ADL parser to support this, but I think 
it will be easy to handle for leaf-level tuples. For tuples of complex 
objects it will probably be harder, but I think that is of less utility 
anyway. The main annoyance will be re-processing existing archetypes on 
the fly, but that's life.

Before I do any work on it, I think it would be useful to get more feedback.

- thomas





Updated ADL, AOM 1.5 and new ODIN specifications

2013-04-24 Thread Thomas Beale

I have updated the ADL and AOM 1.5 specifications to reflect recent 
proposals for artefact identification. The main changes are that in the 
AOM, the archetype id as we know it today is constructed from pieces of 
meta-data, of which the version identifier is one.

A more interesting change for most people may be that I have now removed 
the 'dADL' part of the ADL specification and given it a new name and its 
own specification. For those who don't know or remember, dADL is a pure, 
generic object serialisation syntax - yes, another thing like JSON, etc. 
Its new name is Object Data Instance Notation (ODIN) and the new spec 
can be found here 
<http://www.openehr.org/releases/trunk/architecture/syntaxes/ODIN.pdf>. 
You can see this specification in a new 'syntaxes' group at the bottom 
of the main table in the main specification baseline here 
<http://www.openehr.org/programs/specification/releases/currentbaseline>.

I have set up an ODIN project at the openEHR Github, here 
<https://github.com/openEHR/odin>, with the idea that we could collect 
the parsers and serialisers from various languages in this project, or 
else point to them from here.

Some may ask why we have ODIN (dADL), given that there are XML, JSON, 
YAML and other syntaxes. There are reasons: when dADL was first invented 
(about 2002), there was nothing except XML to use, and XML is not a 
particularly clean object serialisation syntax, nor realistically human 
readable. dADL was designed to be properly object oriented, human 
readable and writable, to have rich leaf data types, to support 
XPath-style paths, and to enable much smaller texts than XML.

Amazingly, dADL / ODIN still has stronger leaf data types, as well as 
dynamic typing (a key feature lacking in JSON) and object identifiers.
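
A small, purely illustrative fragment (field and type names invented for 
the example) shows what is meant:

    destinations = <
        ["1"] = (TRIP_DESTINATION) <                -- dynamic type marker
            name      = <"Montreal">
            arrival   = <2013-04-24T09:30:00>       -- ISO 8601 date/time leaf
            stay      = <P2DT6H>                    -- ISO 8601 duration leaf
            info_page = <http://example.com/trips/1>    -- URI leaf
            party     = <|2..4|>                    -- Integer interval leaf
        >
    >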

For anyone interested in putting together ODIN parsers/serialisers for 
the various languages, please make yourself known, and let's discuss how 
to do it. A survey of such syntaxes indicates that there is growing 
interest in non-XML / post-XML data syntaxes (e.g. this recent Dr Dobb's 
article 
<http://www.drdobbs.com/web-development/after-xml-json-then-what/240151851>), 
and I think ODIN could have its place in the wider world.

- thomas beale



Trying to understand the openEHR Information Model

2013-04-24 Thread Thomas Beale
On 24/04/2013 18:27, Bert Verhees wrote:
> On 04/24/2013 07:14 PM, Thomas Beale wrote:
>> if you want to distribute that, it would be a great example RM for 
>> the ADL workbench - do you have it in BMM format? 
>
> Yes, Thomas, of course I can show it, but I don't know what BMM is.
> It is a very simple definition, just for fun - I wrote it in an hour.
> Someone with a real understanding of wine would do it much better.
> It is in XML Schema 1.1, because that, I thought, was the best and 
> most flexible way to define a reference model.

BMM looks like this 
<https://github.com/openEHR/reference-models/tree/master/models/openEHR/Release-1.0.2/BMM> 
(and properly represents types, generic types and inheritance). There is 
now an Enterprise Architect plug-in, written by Michael van der Zel at 
Results4Care, that generates BMM from a UML model.
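
To give a flavour here (heavily abbreviated, attribute names from memory 
- the GitHub files above are the reference), a BMM schema is an ODIN 
document along these lines:

    rm_publisher = <"openehr">
    schema_name = <"rm">
    rm_release = <"1.0.2">

    class_definitions = <
        ["DV_QUANTITY"] = <
            name = <"DV_QUANTITY">
            ancestors = <"DV_AMOUNT">
            properties = <
                ["magnitude"] = (P_BMM_SINGLE_PROPERTY) <
                    name = <"magnitude">
                    type = <"Real">
                    is_mandatory = <True>
                >
                ["units"] = (P_BMM_SINGLE_PROPERTY) <
                    name = <"units">
                    type = <"String">
                    is_mandatory = <True>
                >
            >
        >
    >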

>
> I think the LinkEHR editor should also be able to use it.

LinkEHR is now using BMM as well.

>
> And archetypes - I can write a few, for example how to define a French 
> wine, which has different ways of defining quality than, for example, a 
> Chilean wine.
> You can even write archetypes for regions, or even for villages in 
> Bourgogne or châteaux in Bordeaux.
> And for now, I can transform them to XML Schema 1.1. But that will 
> improve.
>
> It is just fun.
>
> Bert

Fun is good.

- thomas



Trying to understand the openEHR Information Model

2013-04-24 Thread Thomas Beale
On 24/04/2013 16:23, Bert Verhees wrote:
> On 04/24/2013 04:52 PM, Thomas Beale wrote:
>> A, I got it.  Now I think I understand.  You aren't building a 
>> constraint based multi-level modelling system.  You are modelling 
>> archetypes in RelaxNG. Correct?
>
> Yes, that's it. I have had some difficulty explaining this - it must be 
> a rather unconventional way of thinking in my mind ;-)
>
> I "translate" ADL into RelaxNG for validation purposes, and some other 
> purposes, like type-attribute assignment for index optimisation, which 
> is good for XQuery, and I believe there is some interest from GUI 
> builders in the schemas.
>
> The story is that the kernel is not only multi-model but also 
> multi-reference-model, and that simultaneously, so it can store/query 
> EN 13606 or openEHR - it doesn't matter to my kernel - even in a single 
> statement.
>
> Just for fun I wrote a "winecellar RM" type - grape, country, region, 
> year, oak - and I can query that too, so besides the EHR records of my 
> patients (coming from openEHR environments or EN 13606) I also have 
> their wine cellar in view, and I can query all three in a single 
> statement:
> give me all patients who have influenza, have taken aspirin and have 
> Australian Cabernet Sauvignon in the cellar.

Bert,

If you want to distribute that, it would be a great example RM for the 
ADL Workbench - do you have it in BMM format, plus some wine cellar 
archetypes?

We could add it to the ADL Workbench distribution, in case you want to 
make people envious of your wine collection ;-)

- thomas




New ADL/AOM proposals to solve some old problems

2013-04-24 Thread Thomas Beale

In openEHR we use custom syntax in archetypes to express ordinal 
constraints, quantity constraints and coded text constraints - i.e. 
constraints on what are probably the most ubiquitous data types in health.

I have been mulling over feedback from previous debates here and in CIMI 
about the 'undesirability' of this syntax.

I have posted some new ideas on how to solve this here.

The executive summary is:

  * let's treat 'code' as a built-in type, like a Date or a Uri; this
then makes the AOM type that constrains it trivial;
  * ADL can be augmented in a generic way to enable tuples to be
constrained, which would better solve the Quantity constraint problem
(a rough sketch follows below);
  * the Ordinal constraint syntax would be replaced by a combination of
both of the above.
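
For the Quantity case, a rough sketch of what a tuple constraint might 
look like (the final syntax may well differ; units and ranges here are 
illustrative only):

    DV_QUANTITY matches {
        [magnitude, units] matches {
            [{|0.0..1000.0|}, {"mm[Hg]"}],
            [{|0.0..130.0|}, {"kPa"}]
        }
    }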

Feedback welcome.

- thomas




Trying to understand the openEHR Information Model

2013-04-24 Thread Thomas Beale
On 24/04/2013 15:52, Thomas Beale wrote:
> Subject: Re: Trying to understand the openEHR Information Model
> From: Tim Cook
> Date: 24/04/2013 15:29
> To: For openEHR technical discussions
>
>
> Hi Bert,
>
> On Tue, Apr 23, 2013 at 5:28 PM, Bert Verhees <bert.verhees at rosa.nl> wrote:
>
>
> The "define" is in the second part of the example, that is
> (called) the compact notation although there is also another
> notation for Relax NG which is more easier to understand for
> people used to XML-Schema.
>
> The other notation is like . (partially)
>
> 
> 
> 
> 
> etc ..
> 
> 
> etc ..
> 
> 
> 
> 
>
> I can recommend the book by Eric van der Vlist; it is also free to
> download or read online:
> http://books.xmlschemata.org/relaxng/
>
>
>
> Ah, I got it. Now I think I understand. You aren't building a 
> constraint-based multi-level modelling system. You are modelling 
> archetypes in RelaxNG. Correct?
>
> So, as I first thought, this is a flat, one-level interpretation of an 
> archetype, not an actual validation chain. My apologies for 
> originally misinterpreting what you are attempting.
> I am quite sure that RelaxNG or even XML Schema 1.0 will work just 
> fine for that solution.
>

Tim,

Bert isn't trying to replace the multi-level archetype modelling 
architecture - that already does what it does. He's talking about 
physically representing archetypes and templates in Relax NG for use in 
his production environment. That's just a question of what you can 
losslessly serialise to.

- thomas



Trying to understand the openEHR Information Model

2013-04-24 Thread Thomas Beale
Subject: Re: Trying to understand the openEHR Information Model
From: Tim Cook
Date: 24/04/2013 15:29
To: For openEHR technical discussions


Hi Bert,

On Tue, Apr 23, 2013 at 5:28 PM, Bert Verhees <bert.verhees at rosa.nl> wrote:


The "define" is in the second part of the example, that is (called)
the compact notation although there is also another notation for
Relax NG which is more easier to understand for people used to
XML-Schema.

The other notation is like . (partially)





etc ..


etc ..





I can recommend the book by Eric van der Vlist; it is also free to
download or read online:
http://books.xmlschemata.org/relaxng/



Ah, I got it. Now I think I understand. You aren't building a 
constraint-based multi-level modelling system. You are modelling 
archetypes in RelaxNG. Correct?

So, as I first thought, this is a flat, one-level interpretation of an 
archetype, not an actual validation chain. My apologies for originally 
misinterpreting what you are attempting.
I am quite sure that RelaxNG or even XML Schema 1.0 will work just fine 
for that solution.

--Tim



Trying to understand the openEHR Information Model

2013-04-23 Thread Thomas Beale
On 23/04/2013 19:57, Thomas Beale wrote:
>
> They are catered for 
> <http://www.w3schools.com/schema/schema_dtypes_date.asp>, but I have 
> to admit, in a pretty annoying way. But better than not being catered 
> for...
>
> The lack of support for hh:??:?? is actually the fault of the ISO8601 
> standard, and I suspect it's because the writers never actually 
> implemented a parser, and had the simple realisation that a partial 
> date or time (e.g. "1995", "12") is impossible to distinguish 
> syntactically from an integer in a mixed data 

I meant 'and never had the simple realisation...'

> stream - some other help is always needed.
>
> XML Schema solves it with the data types gMonthDay, gYear etc. Ugly, 
> but not really their fault.
>
> A slightly better designed 8601 standard would have saved a lot of 
> problems, and the ultimate fault in my view lies at the door of ISO: a 
> completely wrong model of doing standards.



Trying to understand the openEHR Information Model

2013-04-23 Thread Thomas Beale
On 23/04/2013 18:09, Timothy W. Cook wrote:
>
>
> On Mon, Apr 22, 2013 at 7:26 PM, Bert Verhees  > wrote:
>
>
>
> Another very important restriction of XML Schema, in my opinion, is
> that you cannot have two or more elements with the same name but
> different data types; the data types must be the same in every detail.
> XML Schema regards an Element with a Dv_Text as a different data type
> from an Element with a Dv_CodedText.
>
> Both elements will be called "items" in an XML schema representing
> an openEHR data structure, and so they are not allowed to differ in the
> details of their data types. This brought Tim Cook to using GUIDs in
> the element names, which is unworkable in my opinion and, above all,
> probably unnecessary, because in RelaxNG this restriction does not
> exist.
>
>
> There are many reasons and benefits to using Type 4 UUIDs. I cannot 
> imagine that RelaxNG has any magic to allow global elements to have the 
> same name and different types, or two elements at the same level to 
> have different types. Certainly no programming language allows that. 
> There are other approaches that can be used that do not use UUIDs. You 
> can nest all of your complexTypes and create VERY wide artifacts. You 
> can use intricate namespacing if you want.
>
> Other tricks are also possible, for example augmenting
> element names at validation time, but that too is cumbersome
> code - and all that just to avoid the problems of a stupid,
> ivory-tower W3C standard?
>
>
> Interesting here that you call it a stupid standard and then in a 
> later email you praise it for its industry acceptance.

I think Bert is comparing Relax NG to XML Schema; he's not just talking 
about generic 'XML'.

>
> But, the things you continue to call tricks are not tricks, they are 
> features of the standard that are implemented because one or 
> more people presented sufficient use cases.  Just because Priscilla 
> Walmsley doesn't bless their use doesn't mean that they are any less 
> valid.
>
>
> So this is indeed an important restriction, which makes the clean
> use of XML Schema impossible in OpenEhr-rm, or any other ADL based
> multi level modeling system. Dirty use, tricks, ignoring
> validation errors, etc of course remain possible.
>
>
> Yes, be specific. You probably can't model an ADL-based RM in Haskell 
> or Erlang either. But that doesn't mean they are not useful in 
> multi-level modelling with a functional design. If your goal is to stay 
> with ADL then you have to live with those requirements. I chose not to 
> in MLHIM, for all the other benefits that come with adopting XML 
> technologies.

Out of interest, Tim, did you look at Relax NG or Schematron?

>
>
> There are more restrictions, but less important ones. For example it is
> not possible to support the Dv_Time constraint/pattern hh:??:??, and the
> same goes for Dv_DateTime. There is also a problem in Dv_Date, but it
> can be worked around with the "alternative" rule, although in a
> different way than it is meant to be used.
>
>
> There are very clean and efficient ways to allow for partial dates in 
> XML Schema.

They are catered for, but I have to admit, in a pretty annoying way. But 
better than not being catered for...

The lack of support for hh:??:?? is actually the fault of the ISO8601 
standard, and I suspect it's because the writers never actually 
implemented a parser, and had the simple realisation that a partial date 
or time (e.g. "1995", "12") is impossible to distinguish syntactically 
from an integer in a mixed data stream - some other help is always needed.
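
The same ambiguity exists in any value-only syntax; in ODIN, for 
instance, a fragment like the following (field name invented) could 
equally be an Integer or a partial date:

    -- Integer 1995, or the partial ISO 8601 date meaning "the year 1995"?
    -- Without a type marker or other context, a parser cannot tell.
    year_of_onset = <1995>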

XML Schema solves it with the data types gMonthDay, gYear etc. Ugly, but 
not really their fault.

A slightly better designed 8601 standard would have saved a lot of 
problems, and the ultimate fault in my view lies at the door of ISO: a 
completely wrong model of doing standards.

- thomas




Trying to understand the openEHR Information Model

2013-04-23 Thread Thomas Beale
On 23/04/2013 10:37, Bert Verhees wrote:
>
> > have ADL, AOM, and object transforms
>
> What is missing is what XML offers: validation and query out of the 
> box, which has been developed and optimized for years by many 
> companies and communities, and is mostly good-quality software.
>

OK, but we just agreed that XSD doesn't do the kind of validation that is 
needed by archetypes, so I think what you are really proposing is XML 
based on Relax NG as a sufficiently powerful approach that would a) 
implement the required constraint semantics of archetypes and b) create 
data that can be queried by XPath/XQuery out of the box?

Is that your suggestion?

If so, my reaction would be: let's investigate as a community. I don't 
think anyone has sufficiently investigated Relax NG or Schematron for 
openEHR purposes, and in hindsight we probably should have. It would be 
very interesting indeed to see how much better they would work than XSD.

If you make any progress on these questions in your work, let's turn the 
results into specs / guidelines / whatever for general use in openEHR.

- thomas



Trying to understand the openEHR Information Model

2013-04-23 Thread Thomas Beale
On 22/04/2013 23:26, Bert Verhees wrote:
>
> Sent from my iPad
>
> On 22 Apr 2013, at 23:19, Thomas Beale wrote:
>
>> which rules is it breaking? As far as I know, openEHR XML documents validate 
>> normally against the schemas.
> Yes, I said it wrongly; later in the message I said it better, and I forgot to 
> remove this statement.
>
> So let me correct myself:
>
> You cannot represent all archetype constraints in XML Schema. You can of 
> course validate against the master schema, but that is not very interesting; 
> to validate, you need to validate against the constraints. That is the 
> important point of multi-level modelling.

That's true if you try to use XSD in its native form - I have been saying 
the same thing for years. But you can represent archetypes in XML in 
another way: as a straight object serialisation of an AOM structure. 
Have a look at the XML output of the current ADL Workbench. I didn't 
create an XSD for that, but it would certainly be possible.

The XML format used by the Archetype Editor is of the latter form.

>
> I discovered some important problems besides the restriction/extension 
> structure, which is quite disturbing. You are not allowed to restrict and 
> extend a derived element at the same time. Just for clarity, a restriction 
> when deriving in XML Schema is not the same as constraining in ADL.
> Read the Priscilla Walmsley book on this; she explains it very well.

yes, and she is correct, it's a mess. See my comments to Tim earlier ;-) 
But there is no danger of openEHR doing this, I think, since we know it 
won't work effectively. That's why all the

> There are ways around this, but it is not very elegant.
>
> Another very important restriction of XML Schema, in my opinion, is that 
> you cannot have two or more elements with the same name but different data 
> types; the data types must be the same in every detail. XML Schema regards 
> an Element with a Dv_Text as a different data type from an Element with a 
> Dv_CodedText.
>
> Both elements will be called "items" in an XML schema representing an openEHR 
> data structure, and so they are not allowed to differ in the details of their 
> data types. This brought Tim Cook to using GUIDs in the element names, which 
> is unworkable in my opinion and, above all, probably unnecessary, because in 
> RelaxNG this restriction does not exist.
>
> Other tricks are also possible, for example augmenting element names at 
> validation time, but that too is cumbersome code - and all that just to avoid 
> the problems of a stupid, ivory-tower W3C standard?
>
> So this is indeed an important restriction, which makes the clean use of XML 
> Schema impossible in the openEHR RM, or any other ADL-based multi-level 
> modelling system. Dirty use, tricks, ignoring validation errors, etc. of 
> course remain possible.
>
> There are more restrictions, but less important ones. For example it is not 
> possible to support the Dv_Time constraint/pattern hh:??:??, and the same 
> goes for Dv_DateTime. There is also a problem in Dv_Date, but it can be 
> worked around with the "alternative" rule, although in a different way than 
> it is meant to be used.
>
> Anyway, after a few weeks I will probably define the OpenEhr RM and all 
> possible constraints in RelaxNG.
>

I agree with most of this, but I don't understand the issue - we don't 
do any of the above anyway. That's why we have ADL, AOM, and object 
transforms of the AOM... am I missing something?
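
To put Bert's 'same name, different type' point in concrete terms: in openEHR 
canonical instance XML, sibling ELEMENT nodes all serialise as <items>, and 
only the xsi:type on the value distinguishes them - roughly like this (node 
ids and values invented, mandatory name elements omitted):

    <items xsi:type="ELEMENT" archetype_node_id="at0010">
        <value xsi:type="DV_TEXT">
            <value>some free text</value>
        </value>
    </items>
    <items xsi:type="ELEMENT" archetype_node_id="at0011">
        <value xsi:type="DV_CODED_TEXT">
            <value>some coded term</value>
            <defining_code>
                <terminology_id><value>local</value></terminology_id>
                <code_string>at0012</code_string>
            </defining_code>
        </value>
    </items>

A single generic RM schema validates both happily; the rule 'at0011 must carry 
a DV_CODED_TEXT' is archetype-level knowledge that plain XSD can't express, 
which is why that knowledge lives in the AOM rather than in the schema.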

- thomas




Trying to understand the openEHR Information Model

2013-04-22 Thread Thomas Beale
On 22/04/2013 21:44, Bert Verhees wrote:
> On 04/22/2013 02:12 PM, Thomas Beale wrote:
>> On 22/04/2013 10:01, Bert Verhees wrote:
>
> But I understand your point, we can discuss that without bashing XML:
> You are saying that people may want to use storage other than 
> XML databases, and then they can't use XQuery.
> You are right, but can they use AQL?
>
> There is only an incomplete definition of AQL in a Wiki, which has had no 
> substantial changes for a long time, thus hardly any progress.
> There is no guarantee that the Wiki is stable.

well it has been complete enough to be implemented and used in 
production systems for some years now. You are right, there are some 
unfinished bits, but they are not key elements - they don't prevent 
large scale systems using AQL.

>
> I think you know what kind of effort and risk is involved in writing a new 
> query engine for a new language concept, on any database concept of choice.
>
> Seref said it to Randolph a few days ago: there is hardly any work 
> done by third parties, only two implementations of AQL, and in the 
> same sentence he calls AQL almost the most important part of the 
> OpenEHR eco-system.
> Quote of Seref in this context:
>
>> In my humble opinion, AQL is the most neglected, yet, probably one of 
>> the most important components of an openEHR implementation. It is not 
>> part of the implementation, but it has been implemented by at least 
>> two vendors that I know of, with a third having something quite 
>> similar to it.
> One could, reading this, start to doubt whether OpenEHR can exist 
> without a query language.
> I think Seref is right. It cannot. And yet there is no stable 
> specification?

well, again, the specification actually has been stable for a long time. 
It has not been made official like the other specifications (that should 
happen this year), and it probably should have been earlier, but I guess 
this way we have a lot of industry knowledge about it now, so we know it 
works.

>
> Also consider this.
> How can two companies have implemented AQL if there is no stable 
> definition?
> How much money do they put at stake with uncertain result?
> These are rhetorical questions.

it hasn't been a problem for the implementing companies.

>
> It brings me to the conclusion that for third parties, there is only 
> one way to go, and that is XML and XQuery; there is no other way to 
> get an OpenEHR system ready at this time or in the coming few years.

I don't understand why you would say that, there are many already 
running. This page 
<http://www.openehr.org/who_is_using_openehr/healthcare_providers_and_authorities>
 
documents systems in production in clinical environments.

> The query language is one difficult part, the other difficult part is 
> validation. Both can be solved using standard industry-tools, I come 
> back to this at the end of this message.

An AQL implementation is actually a lot easier than you think, assuming 
that the main data are stored in blobs.

> And I am not talking about MLHIM. ;-)
>
> The OpenEHR eco-system for XML is ready and full of features.
>
> I don't say XML is the only way to write a kernel. But it has many 
> advantages, because of the wide industry support and the thousands of 
> man-years of development behind it.
> Choosing any other solution means having to write a query engine for 
> a query language which is still not declared stable, and having to 
> write a validation tool which, as far as I know, only exists for DADL.
>
> Implementing OpenEHR for a software-vendor, not using XML, is hardly 
> an option.

that's not at all the case. It's perfectly normal to implement the whole 
system in Java, C#, Python, Ruby, whatever, and use numerous kinds of 
native storage, object storage; it could be MUMPS, relational+blob 
storage, or XML as well. But there is nothing that I can think of in XML 
technology that makes it more attractive than anything else as a basis 
for implementing a core system (it's more or less unavoidable for 
interfacing). XML is one option, there are many others, and they work well.

>
>>
>> The general need we have in openEHR is for an abstract query language 
>> that can be used to express queries to any openEHR (or 13606 or other 
>> archetype-based system), regardless of whether its concrete 
>> persistence happens to be in XML.
>> If you are suggesting that we use Xquery/Xpath even for non-XML data 
>> representation cases, that's a different conversation. It won't work 
>> out of the box, because we use a more efficient path syntax (but 
>> which is easily convertible), and Xquery/Xpath make other assumptions 
>> due to being targetted to XML, e.g. they assum

Trying to understand the openEHR Information Model

2013-04-22 Thread Thomas Beale
On 22/04/2013 10:01, Bert Verhees wrote:
> On 04/22/2013 10:01 AM, Thomas Beale wrote:
>>
>> Hi Bert,
>>
>> Xquery wasn't stable in 2006 when we needed a query language. AQL was 
>> implemented by Ocean by 2007 and has been working since then, and 
>> something similar implemented by companies in Brazil. Later on, 
>> Marand implemented it, and I suspect someone else.
>
> I am sorry, I have no time to provide a well-done analysis, but I have 
> an opinion.
>
> XQuery was stabilised in 2007; XPath has been around somewhat longer, but as 
> I understand it, in version 2.0 it is a subset of XQuery 1.0. I am reading 
> the O'Reilly book by Priscilla Walmsley about XQuery; she explains it 
> very thoroughly (as we have come to expect from her).
>
> AQL as shown in the Wiki, (that is what I know of AQL), can very well 
> be served by syntax-transformation to XPath/XQuery.
> http://www.openehr.org/wiki/display/spec/Archetype+Query+Language+Description
>
> Should one do that? Syntax-transformations? There is a risk.
>
> In favor of XQuery, there are query engines available almost out of 
> the box, open source or closed, some of which have been in development for 10 
> years, are based on good indexing, and are still being actively developed.
> With all respect, but I think there has been very good work done, 
> worldwide, and one should profit on that if possible.

that's fine for XML data. But many implementations do not use XML as the 
storage format - and there are good reasons for that - XML Schema 
representations of object data require transformation, and have 
efficiency problems that have to be addressed in one way or another.

The general need we have in openEHR is for an abstract query language 
that can be used to express queries to any openEHR (or 13606 or other 
archetype-based system), regardless of whether its concrete persistence 
happens to be in XML.

If you are suggesting that we use Xquery/Xpath even for non-XML data 
representation cases, that's a different conversation. It won't work out 
of the box, because we use a more efficient path syntax (but which is 
easily convertible), and Xquery/Xpath make other assumptions due to 
being targeted at XML, e.g. they assume the XML attribute/element 
dichotomy, which doesn't exist in normal object data; they don't assume 
an object inheritance model, and so on.

Nevertheless, if it could be shown that AQL could be mapped to a clean 
subset of Xquery/Xpath as a standard formalism, that's likely to be 
useful. It would mean that those implementers who choose XML as their 
internal data representation would be able to use standard products out 
of the box, as you say. Others might be able to use some components, e.g. 
XQuery parsers, in order to build a query engine that talks to non-XML data.
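
To make the mapping question concrete, here is a sketch (archetype ids and 
at-codes invented for illustration): the same navigation written as an 
AQL-style archetype path, and one possible XPath rendering over the canonical 
XML serialisation, keyed on the archetype_node_id attribute:

    AQL-style archetype path:
    c/content[openEHR-EHR-SECTION.vital_signs.v1]/items[openEHR-EHR-OBSERVATION.body_temperature.v1]/data[at0002]/events[at0003]/data[at0001]/items[at0004]/value/magnitude

    one possible XPath rendering:
    $c/content[@archetype_node_id='openEHR-EHR-SECTION.vital_signs.v1']
      /items[@archetype_node_id='openEHR-EHR-OBSERVATION.body_temperature.v1']
      /data[@archetype_node_id='at0002']/events[@archetype_node_id='at0003']
      /data[@archetype_node_id='at0001']/items[@archetype_node_id='at0004']
      /value/magnitude

The structural navigation lines up fairly well; what has no direct XPath 
analogue is the archetype-id predicate shorthand and the terminology / 
subsumption operators mentioned further down.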

>
> XQuery can also be used directly to query OpenEHR datasets. I see no 
> reason against this perfectly workable solution. There is not really a 
> need for a separate query language.
> At this moment AQL is a niche and XQuery is a standard. I have read 
> somewhere that Cache from InterSystems also supports XQuery in an 
> additional module, but marketing language is often gibberish. One can 
> never be sure what really is possible.
>
> Apart from that, maybe there is a wish to complete the 
> ADL/AQL-eco-system, for those who chose not to store in XML and want 
> to write their own AQL-query-engine on the database-concept of their 
> choice.
> In that case, AQL should, in my opinion, be defined as close as 
> possible to XPath/XQuery. I think very very close is possible and even 
> obvious.
> This is, because the basic goal is the same, to offer a generic 
> query-language.

well it's a bit more than that - it's to define a query language that is 
a) based on the logical content models of the data and b) needs to know 
nothing about the concrete persistence representation of the data. The 
query language also has to support terminology-based query expressions 
and subsumption.

But if it can be aligned, let's do it. It just needs someone to do the work.

> But other arguments could be: to comfort developers, to profit from 
> what is already been done (in standard-definition and in tooling), and 
> to provide interoperability with that part of the world, which 
> understands XML better than ADL/AQL.
>
> But the next issue comes up.
>
> A shortcoming of the OpenEHR-documentation is the expression of the RM 
> in XML-Schema. Derived OpenEHR-datasets can never be validated legally 
> in XML-Schema 1.1 or 1.0.

Do you mean just that the Release 1.0.2 XSDs need to be better designed? 
We certainly know that, and welcome any proposals on that (of which 
there are already many).

> So defining the RM in a XML-Schema is quite useless, and bringing 
> people on a d

Trying to understand the openEHR Information Model

2013-04-22 Thread Thomas Beale

Hi Bert,

Xquery wasn't stable in 2006 when we needed a query language. AQL was 
implemented by Ocean by 2007 and has been working since then, and 
something similar implemented by companies in Brazil. Later on, Marand 
implemented it, and I suspect someone else.

I don't know anyone who has done a serious analysis to show if Xquery 
could do the job. At the very least archetype identifiers and pathing 
would need to be catered for, but the rest might be made to work. I 
would welcome any kind of analysis like this.

- thomas

On 20/04/2013 23:00, Bert Verhees wrote:
>> I don't think there is an AQL engine open source yet, but in any case it 
>> only makes sense when there is an open source openEHR EHR service, which 
>> there currently is not.
>
> I don't think it is possible to write an AQL engine right now, because it is 
> not defined well yet. One can only anticipate what one thinks it will be.
>
> So, my guess is it will be something between XPath and SQL, using paths in 
> place of field names. I think the specification is taking too 
> long to arrive. It seems to me quite obvious what it will be.
>
> But writing an engine for it from scratch will cost more than one year for 
> a very experienced developer, and even then...
> Companies like Oracle spend a substantial part of their developer investment 
> in good query engines.
>
> Storing data is peanuts; querying it is the hard part.
>
> But there is a way around it: use what others have done, that is, path-based 
> query engines. There are quite a few, including good open source ones.
>
> It is just a matter of storing your OpenEHR datasets path-based, and querying 
> them path-based. For example, using an XML database. Maybe there are other 
> possibilities too.
>
> And then you have a full-featured AQL engine, as I think it will look in 
> the future, when the specs are finally written. Maybe some 
> syntax translation is needed.
>
> What promises could the AQL specification hold that are not already 
> delivered by XPath/XQuery right now? I cannot think of anything.
>
> I think it would be wise for the OpenEHR community to look closely at what 
> has already been done by commercial companies and open source communities for years 
> now, instead of reinventing the wheel, unless of course there are good 
> reasons.
> It would make the introduction of many OpenEHR implementations much easier, 
> and that would be good for the worldwide success of the OpenEHR specifications.
>
> Ok, these are my two cents. I am very anxious to learn why the current 
> XPath/XQuery-specifications are not good enough.
>
> Have a nice Sunday.
>
> Bert.
>
> Sent from my iPad
>
> On 20 Apr 2013, at 18:29, Thomas Beale wrote:
>
>> I don't think there is an AQL engine open source yet, but in any case it 
>> only makes sense when there is an open source openEHR EHR service, which 
>> there currently is not.
>




New draft of artefact identification proposal

2013-04-21 Thread Thomas Beale

For those interested in a new specification for artefact identification, 
I have more or less rewritten the previous draft, and created a new one 
based heavily on debates on these lists, experience of clinical 
modellers, CIMI community input, and implementation experience. It's 
still a draft and in development, and anyone is welcome to comment, pick 
holes, etc.

It's posted here.


- thomas




Trying to understand the openEHR Information Model

2013-04-20 Thread Thomas Beale
On 19/04/2013 15:17, Randolph Neall wrote:
> Hi Seref,
>
> >In my humble opinion, AQL is the most neglected, yet, probably one of 
> the most important components of an openEHR implementation. It is not 
> part of the implementation, but it has been implemented by at least 
> two vendors that I know of, with a third having something quite 
> similar to it.
>
> Neglected? Two vendors? A third with something similar, implying a 
> branching of some sort? Then how has everyone else been accessing 
> their data? This implies the existence of an alternate and older query 
> engine. Is there a link describing this older one? Judging from 
> Thomas's wiki link, AQL appears part of a full-fledged query engine on 
> the order of any SQL query engine, very sophisticated. Is this query 
> engine open source? Did one of the two vendors develop it? How could 
> such a thing be neglected or even optional? Are there licensing 
> issues? Cost?
>

Vendors implement it in different ways, but to be honest, I think the 
main value isn't the straightforward part of the implementation, which 
isn't that complex (depending on how you do blobbing); it's the 
optimisations, and those depend heavily on the exact persistence layer 
choices.

I don't think there is an AQL engine open source yet, but in any case it 
only makes sense when there is an open source openEHR EHR service, which 
there currently is not.

- thomas




Trying to understand the openEHR Information Model

2013-04-20 Thread Thomas Beale
On 19/04/2013 16:06, Randolph Neall wrote:
> Seref, to add to my questions:
>
> > AQL is the most neglected, yet, probably one of the most important 
> components of an openEHR implementation.
>
> Does this imply that each implementation of openEHR is sufficiently 
> different from others as not to allow for easy sharing of such things 
> as search or storage technologies?
>

Randy,

not sure what you mean by 'sharing' here? Most back-ends are designed as 
an open platform, obeying (or starting to obey) standardised service 
interfaces and the AQL language. So being able to replace one 
implementation with another is the whole idea, and certainly possible 
with some vendors already.

- thomas




Trying to understand the openEHR Information Model

2013-04-18 Thread Thomas Beale
On 17/04/2013 22:04, Randolph Neall wrote:
> Thomas, somehow I'm not finding the AQL specification. It's probably 
> right under my nose on your specification/release page. Also, do you 
> have any references describing the AQL processor? Did you write 
> */that/* from scratch?? It would seem that the AQL processor would 
> indeed function as a formidable DBMS in its own right, at least with 
> regard to reads, capable of managing AND/OR logic trees and serving up 
> flat "tables" of joined data structures like any RDBMS.
>
> Randy
>

http://www.openehr.org/wiki/display/spec/AQL-+Archetype+Query+Language

- thomas




Trying to understand the openEHR Information Model

2013-04-17 Thread Thomas Beale
On 17/04/2013 18:47, Randolph Neall wrote:
> >The performance is perfectly adequate in all of these systems for the 
> kinds of queries used in point of care (e.g. typically sub 1-second), 
> and in some cases where ETL is implemented, the performance is also 
> acceptable. It's also true that quite a lot of effort and thinking has 
> gone into optimising AQL queries. There is always a query that can be 
> written that will take a long time to answer, but so far there is no 
> overlap between those type of queries and point of care latency 
> requirements i.e. such queries are always report-oriented, research 
> queries or some other kind of population query, where a (let's say) 5 
> second response is perfectly acceptable.
>
> That's excellent! Can you give any idea how long it takes to retrieve 
> into live memory and screen on a user's computer an entire EHR record 
> of "typical" size and complexity? Or does that not generally happen, 
> with records instead being fetched in smaller pieces?

Right - you wouldn't ever pull an entire EHR to the screen. I have seen 
openEHR applications pulling the main managed lists (say 6-8 
Compositions), latest lab results, plus a chronological list of 
consultations / events for the last year or so, plus key demographic 
data, all sub 0.5 sec. Then the user starts clicking on things, and more 
comes back.

More interesting screens contain a mixture of text and e.g. vital signs 
real-time graphs, which AQL copes with nicely - you can bring back just 
a 2-D array of numbers and timestamps for the graph, using AQL.
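
For example, a hedged sketch of such a query (the blood pressure archetype 
paths shown are indicative; check them against the actual archetype before 
use):

    SELECT o/data[at0001]/events[at0006]/time/value AS obs_time,
           o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
           o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
    FROM EHR e [ehr_id/value = $ehrId]
        CONTAINS COMPOSITION c
        CONTAINS OBSERVATION o [openEHR-EHR-OBSERVATION.blood_pressure.v1]
    ORDER BY o/data[at0001]/events[at0006]/time/value

The result set is just rows of (time, systolic, diastolic) - exactly the 2-D 
array a graphing widget needs, with none of the surrounding Composition 
structure.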

- thomas




Trying to understand the openEHR Information Model

2013-04-17 Thread Thomas Beale

I should probably point out that there are some dozens of openEHR 
operational deployments, all heavily using AQL for screen population, 
reporting and so on. The 
performance is perfectly adequate in all of these systems for the kinds 
of queries used in point of care (e.g. typically sub 1-second), and in 
some cases where ETL is implemented, the performance is also acceptable. 
It's also true that quite a lot of effort and thinking has gone into 
optimising AQL queries. There is always a query that can be written that 
will take a long time to answer, but so far there is no overlap between 
those types of queries and point-of-care latency requirements, i.e. such 
queries are always report-oriented, research queries or some other kind 
of population query, where a (let's say) 5 second response is perfectly 
acceptable.

There is probably about 3 years of experience of such systems now 
(there's more like 6 years experience of commercially deployed AQL) that 
show that the performance challenges of this kind of framework are 
satisfiable, and no longer a research question (they were once obviously!).

The second order types of structures I mentioned below rely less on AQL, 
and more on smart commit type rules / triggers logic, which effectively 
enables pre-built query results to be maintained in a live system.

We're somewhere on a road where we are already riding in motorised 
transport, but we don't really know if what we have today is a Fiat 
Punto or a Maserati. Hopefully it's the Fiat, because that leaves us a 
lot of fun and room to get to the Maserati (at which point we start 
looking at air travel;-).

- thomas

On 17/04/2013 15:58, Randolph Neall wrote:
> Hi Seref,
>
> >Hint: think about how you're going to get data out before thinking 
> how you're supposed to keep it. There are lots of possibilities, but 
> you need to anchor those with a single method of access. I suggest  a 
>  brief look at Archetype Query Language (AQL)
>
> That's the whole point, Seref--"how you're going to get the data out." 
> And certainly AQL is one way to do that. My concern has to do with 
> querying performance (deserialization as a prerequisite to record 
> inspection, etc.) and the infrastructure resources necessary to 
> support them. Thomas hints at possibly some big changes when he said, 
> "There is an emerging set of 'second order' object definitions, that 
> use the URI-based referencing approach in very sophisticate ways to 
> represent things like care plans, medication histories and so on. I 
> can't point to a spec right now, but they will start to appear." I 
> don't know how radical that will prove to be. I'd assume they'd still 
> occur within the AQL paradigm. But it does appear that openEHR itself 
> is evolving on this point and perhaps for good reason.
>
> Please don't interpret my remarks as any sort of disrespect for 
> openEHR; I hope it has been apparent that my respect for the entire 
> system has grown as I have learned more about it. Some really 
> brilliant people, perhaps including you, put this whole thing 
> together. And you all do the whole world a favor by make it all open 
> and by making yourselves available for the sort of questions I have 
> raised.
>
> Randy
>





Mimetype ADL

2013-04-17 Thread Thomas Beale
On 16/04/2013 07:14, Bert Verhees wrote:
> Hi,
>
> Is there a mimetype defined for ADL-files?

There's no dedicated one; a text type should do, but remember ADL is 
UTF-8. I haven't looked up the rules on that but if text/plain allows 
UTF-8, then I would use that.
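
In practice that would mean serving .adl files with something like the 
following (the filename is just an example; there is no registered 
ADL-specific media type that I know of):

    Content-Type: text/plain; charset=utf-8
    Content-Disposition: attachment; filename="openEHR-EHR-OBSERVATION.blood_pressure.v1.adl"

text/plain does take a charset parameter, so declaring UTF-8 this way is 
legitimate.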

- thomas




Trying to understand the openEHR Information Model

2013-04-16 Thread Thomas Beale
On 16/04/2013 18:55, Randolph Neall wrote:
> Hi Thomas,
>
> Again, you've advanced my grasp of openEHR.
>
> >the change set in openEHR is actually not a single Composition, it's 
> a set of Composition Versions, which we call a 'Contribution'. Each 
> such Version can be: a logically new Composition (i.e. a Version 1), a 
> changed Composition (version /= 1) or a logical deletion (managed by 
> the Version lifecycle state marker). So a visit to the GP could result 
> in the following Contribution to the EHR system:
>
>   * a new Composition containing the note from the consultation
>   * a modified Medications persistent Composition
>   * a modified Care Plan persistent Composition.
>
> Your comment here is in the context of persistent Compositions, and I 
> think what you're saying is that these are a special case: persistent 
> Compositions, unlike event Compositions, contain only */one/* kind of 
> persistent information, and no event information, thus allowing clean 
> substitutions when that persistent information is later updated. This 
> would avert the horrible scenario I suggested, involving updating 
> heterogeneous persistent Compositions. If I'm grasping you, this makes 
> perfect sense.

to be 100% clear: the change set versioning model works for all 
Composition types - a single change set (what we call a Contribution) 
can contain versions of both persistent and event Compositions.

Semantically, your understanding above is correct: persistent 
Compositions are always dedicated to a single kind of information, 
usually some kind of 'managed list' like 'current medications', 
'vaccinations' etc.

>
> >Systems do have to be careful to create references that point to 
> particular versions.
>
> Does that mean that tracing a web of connections with current 
> relevance requires systems to present invalidated Compositions to 
> users? Or are the links themselves revised to point to the replacement 
> Compositions?

Normally when a Composition is committed (within a Contribution) and it 
contains a LINK or DV_EHR_URI, that link points to the logical 'latest 
available' target. So the link is always valid. Such a link might point 
to e.g. a lab result Event Composition. The assumption is that the only 
changes to a lab result are corrections or in the case of microbiology 
and some other long period tests, updates - but essentially the latest 
available version = the result.

On the other hand, a link to a care plan might easily point to the care 
plan (usually a persistent Composition) as it was at the moment of 
committal. If the referencing Composition were retrieved, and that link 
dereferenced, an older version of the care plan will be retrieved.

> If the latter, how does one avoid having to recommit whole sets of 
> revised compositions involved in the affected thread of links? It 
> would seem that you can't just swap out one item in a tangled web, at 
> least without some very sophisticated compensatory activities.Or maybe 
> links are somehow named in such a way as always to point to the latest 
> version of something, which you seemed to suggest is possible 
> (version-proof links?).
>
> OpenEHR is a remarkable piece of technology. An EHR record is 
> externally a collection of independent and separate documents called 
> Compositions that can be invalidated and versioned and swapped out at 
> any time. Yet, logically and internally, it is magically a vast graph 
> of nodes and edges and vertices, with connections not just within 
> archetypes but also between archetypes. Logically, the nodes 
> (typically archetypes) are not deleted (usually) nor do they lose 
> their initial identity when their contents change or when links 
> between them are altered. One wonders, then, why not just use a graph 
> DB instead of a collection of documents to house the information? 
> Wouldn't that be a shorter path to the same end and reduce some of the 
> versioning complexity (you'd say that would increase versioning 
> complexity)? Perhaps there are some openEHR implementations that are 
> doing just that. No? Could an openEHR system use a graph DB and still 
> be considered openEHR?

absolutely. Using path-based blobbing probably isn't a million miles 
from such DBs. Personally I used a wonderful object database called 
Matisse (still around today), which essentially operates as a graph db 
with write-once semantics, and I would love to have a side-project to 
build an openEHR system on that.

Nevertheless, there are a couple of container levels that have 
significance in models like openEHR, 13606, CDA and so on: the 
Composition (can be seen as a Document) and the Entry (the clinical 
statement level). So it's not completely mad to do blobbing at these 
levels, or build in other assumptions around them.

>
> Do you have a picture or map, somewhere, of your metadata graph, or 
> must I examine individual achetypes to see all the links between them?
>
> >there is an emerging set of 'second order' object defi

Trying to understand the openEHR Information Model

2013-04-16 Thread Thomas Beale

These scenarios were one of the reasons we were very careful to properly 
model commit time (system time) separately from the times of the visit, 
observations, actions etc (world time). The commit of the info may come 
days late, but it is always easy to determine a) what other clinicians 
could see on the system at time T and b) in what order things happened 
in clinical reality. The caveat is that the system won't tell you the 
full story until everyone has committed their data.

This doesn't mean there are no tricky competitive write situations, but 
via the above, and the versioning semantics (which include system-based 
branching), there are reasonably obvious strategies for correctly 
resolving the confusion.

- thomas


On 15/04/2013 20:11, Karsten Hilbert wrote:
> On Mon, Apr 15, 2013 at 08:40:59PM +0200, Bert Verhees wrote:
>
>> On 04/15/2013 06:12 PM, Thomas Beale wrote:
>>>> patient sees the GP, then visits a practice
>>>> nurse, without the GP record being committed first.
>>> yes, that's certainly a possibility, if the practice solution isn't
>>> designed to deal with it, and the staff are not trained...
>> In the Netherlands there is, what we call, the "door-handle-patient".
>> At the moment he is leaving the room, and is busy opening the door,
>> he tells what he is really worried about.
> That's standard GP land.
>
>> The GP asks the patient to sit down for an extra minute and explains
>> why he thinks it is not cancer, or he makes another appointment
>> because he thinks the patient has a point..
>> So a GP at latest should commit after the door is closed and the
>> patient has definitely gone and just before the new patient enters.
> For one thing that moment (the patient being "gone for
> good") never comes in reality.
>
> However, there's no need to define such a moment in time.
> The GP writes into the EMR whatever is known at any point
> during the consultation. Yes, that will be subject to
> editing, deleting, amending, but that's normal !
>
> The nurse (that is, any other workplace of the GPs network)
> will see whatever has been committed. Whenever something is
> committed a change notification is pushed out by the storage
> engine and clients can update themselves if relevant (that's
> how GNUmed does it). This, of course, does not yet solve the
> conflict of the user editing something that's just being
> changed but at least there's no chance to not be aware of
> it.
>
>> At the moment a patient arrives again at the nurses or
>> assistants-desk, the dossier should be fully up to date, or it should
>> be recognizable that it is not up to date
> In reality "fully up to date" never happens. It is always
> the current state of affairs.
>
>> and then the nurse has to wait until the lock is released.
> Ah, no, it doesn't make a difference whether the nurse waits
> for a lock to be released or not - because even if the GP
> released the lock the nurse has no way of knowing whether
> the GP committed everything (instructions) needing
> committing or whether the GP forgot something. That can only
> be assured by out-of-band means, say, the patient knowing
> what the nurse needs to do for him (or GP and patient
> agreeing and sending an "action sheet" *before* the patient
> leaves the room -- and still that does not prove the GP does
> not remember something needing doing after the patient left
> the exam room).
>
> It is a problem not solvable by technical means alone.
>
> Karsten




Trying to understand the openEHR Information Model

2013-04-15 Thread Thomas Beale
On 15/04/2013 17:11, Randolph Neall wrote:
> You've all been very helpful and clear in responding to my questions.
>
> What I've learned is that the basic unit of storage--and retrieval--is 
> a single composition, nothing bigger, nothing smaller, and certainly 
> not the complete roster of compositions as I had thought (based on my 
> mistaken notion that one could not serialize, easily, only a section 
> of a complex instantiated object tree). That resolves a lot of my 
> concerns.
>
> However, this has taken the conversation into an interesting area, 
> namely, those types of compositions of that contain what you call 
> "persistent" information, such as drug lists, problem lists, family 
> history and so on, where subsequent compositions must modify the 
> states of earlier compositions and where, as a result, subsequent 
> compositions must embody and repeat much of what is contained in prior 
> compositions.

actually, versioning is supported (and routinely used) for all 
Compositions. Its meaning for non-persistent Compositions is that it is 
an error correction or update. The only real difference is that there 
will be many more version updates to persistent Compositions over the 
lifetime of an EHR than for any other kind of Composition in that EHR.

> The same issue, I would think, would also arise in your workflow 
> situations (observation / instruction / action), where, again, 
> subsequent compositions--often in hierarchical relation to yet prior 
> compositions--must modify the states of items in prior compositions.

the change set in openEHR is actually not a single Composition, it's a 
set of Composition Versions, which we call a 'Contribution'. Each such 
Version can be: a logically new Composition (i.e. a Version 1), a 
changed Composition (version /= 1) or a logical deletion (managed by the 
Version lifecycle state marker). So a visit to the GP could result in 
the following Contribution to the EHR system:

  * a new Composition containing the note from the consultation
  * a modified Medications persistent Composition
  * a modified Care Plan persistent Composition.

We don't tend to see the relationships between the Compositions and 
other Compositions outside of the Contribution (i.e. the current commit) 
as hierarchical containment, rather they are references. So there are 
LINK objects containing URIs that point to other Compositions. The data 
type DV_EHR_URI can also be used directly in data to point to other data 
objects in the same or another EHR.
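
Schematically - and this is just a sketch of the shape of such a reference, 
not text from the specifications - an EHR URI identifies the target EHR, the 
versioned Composition, and optionally a path and a pinned version:

    ehr://<ehr_id>/<versioned_composition_uid>/content[openEHR-EHR-SECTION.medications.v1]
    ehr://<ehr_id>/<composition_uid>::<creating_system_id>::2

The first form, with no version part, resolves to the 'latest available' 
version as described above; the second pins the reference to version 2.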

If for some reason some historical Composition had to be updated due to 
the current commit, it just gets a new Version as part of the Contribution.


> Once again, since everything--and its hierarchical context--is 
> immutable and cannot be modified in place, you have to reproduce that 
> entire context in each composition that modifies the state--or has a 
> dependency on--of something, however small, in a prior composition. 
> And to compound the issue even more, these subsequent compositions, 
> whose contents address prior compositions, might also contain "event" 
> as opposed to "persistent" information. So, as states of items in 
> prior compositions undergo state changes, it is not a simple matter of 
> apples-to-apples substitutions as you replace them with new versions, 
> because both prior and subsequent compositions could also contain 
> "event" information. So maybe the versioning process actually splits 
> compositions, declaring only pieces of them obsolete.

due to the referencing approach, this is not really a problem. But 
systems do have to be careful to create references that point to 
particular versions, or 'latest version' (whatever it might be).

>
> Obviously, you've all found ways to make this work, perhaps elegantly, 
> but, as some are suggesting, at very least this would enlarge the 
> amount and scope of information involved in a single commit, thus 
> inviting contention. I see some real complexity here.

I have to say, in the systems I know of, the contention issue is 
vanishingly small. It's not to say it will never occur, but it's not a 
general problem that I know of.

> I'll have to read more about how versioning works, using the 
> references you have provided me. I did look at the common_im.pdf Eric 
> referenced, and versioning, from my brief exposure to it in this PDF, 
> is obviously one of the */most/* complex aspects of the openEHR 
> specification, as well it would be.

luckily no clinical person ever sees this in archetype land, or else 
they would all go mad ;-) (After shooting the evil spec developers for 
having the temerity to think they should even see such gears and cogs).

>
> An openEHR record, as I'm coming to understand it, is basically an 
> indexed collection of very sophisticated "documents" analogous to 
> PDFs, "documents," which, like ordinary documents, are persisted as 
> single digital streams that can be hashed and signed, and that must 
> also be deserialized and pa

Trying to understand the openEHR Information Model

2013-04-15 Thread Thomas Beale
On 15/04/2013 15:43, Ian McNicoll wrote:
> Hi Thomas,
>
> I can certainly see a situation where e.g A medication order was
> issued and the medication administered within a short time period,

well, 'short' here probably means at least minutes... that's 'long' in 
computing terms.

> requiring dynamic persistent medication summary updates (with
> references/links to the original Entries in event Compositions) where
> a lazy commit could cause an issue.

lazy commits (i.e. due to caching) are a different (and real) issue. 
Proper cache management should avoid them.

>   A problem summary list collision
> is less likely but possible e.g. where an EHR is fully
> problem-oriented and a patient sees the GP, then visits a practice
> nurse, without the GP record being committed first.

yes, that's certainly a possibility, if the practice solution isn't 
designed to deal with it, and the staff are not trained...

- thomas




Trying to understand the openEHR Information Model

2013-04-15 Thread Thomas Beale
On 15/04/2013 14:37, Grahame Grieve wrote:
> "big risk" - it's a combination of how likely it is, and how bad it is 
> if they are.
>
> Generally, current location, current medication lists, summary lists 
> are things where contention can happen. Quite often, I've seen, a 
> cascade of things will happen on a patient simultaneously as multiple 
> people focus on the patient
>
> The other place where contention is a problem I've experienced has been 
> pathology reports that are not complete - in a busy lab doing 2000 
> reports/day, I observed editing contention 10-20x a day on average. 
> That's pretty low, but the consequences of a clash are bad.

Grahame - can you elucidate on this? Are you saying that you have seen 
multiple parallel committers trying to update the same lab report (same 
patient, order etc) at the same time? The only way I can imagine this is 
if multiple specialist lab systems contribute to a common overall report 
(i.e. some kind of order grouping). In this case, there is unavoidably 
logic to do with how the pieces get stitched together anyway, so I am 
not sure how contention errors could arise.

- thomas




Trying to understand the openEHR Information Model

2013-04-15 Thread Thomas Beale
On 15/04/2013 14:37, Grahame Grieve wrote:
> "big risk" - it's a combination of how likely it is, and how bad it is 
> if they are.
>
> Generally, current location, current medication lists, summary lists 
> are things where contention can happen. Quite often, I've seen, a 
> cascade of things will happen on a patient simultaneously as multiple 
> people focus on the patient
>
> The other place where contention is a problem I've experienced has been 
> pathology reports that are not complete - in a busy lab doing 2000 
> reports/day, I observed editing contention 10-20x a day on average. 
> That's pretty low, but the consequences of a clash are bad.
>

that's very interesting. I don't think we've seen anything like that - 
not that I doubt what you are saying here. It would be very interesting 
to know in what circumstances competitive updates to the Rx and Dx lists for 
a patient occur. Smart systems might track such things and turn on 
pessimistic (i.e. locking-based) versioning.

- thomas



Trying to understand the openEHR Information Model

2013-04-15 Thread Thomas Beale
On 15/04/2013 11:54, Thomas Beale wrote:
>
> the update logic is Composition-level, and you can't commit something 
> smaller than a Composition. The default logic is 'optimistic' meaning 
> that there is no locking per se; instead, each request for a 
> Composition includes the version (in meta-data not visible to the data 
> author), and an attempt to write back a new version of a Composition 
> will cause a check between the current top version and the 'current 
> version' recorded for the Composition when it was retrieved. IF they 
> are identical, the write will succeed. There is also branching 
> supported in the specification. Read the Common IM 
> <http://www.openehr.org/releases/1.0.2/architecture/rm/common_im.pdf> 
> for the details.

I had meant to say here: this makes sense for EHR and similar systems 
because there is very low / no write competition for the same piece of 
the same patient record, as a general rule.

- thomas



Trying to understand the openEHR Information Model

2013-04-15 Thread Thomas Beale
On 15/04/2013 04:07, Randolph Neall wrote:
> I just spent quite a few profitable hours today with ehr_im.pdf, which 
> appears to be the main resource for understanding the "Information 
> Model" or "Reference Model," available for download from the CKM web 
> site.
>
> Overall, it's a very well-written document that anyone trying design 
> or implement any sort of EHR system should read. I'm left with a few 
> questions about instantiation, isolation, persistence, querying and 
> the impact of changes on stored content and querying. I hate to take 
> valuable time for anyone to answer my questions, so maybe all I need 
> are some more references.
>
> I'll first explain what I think I understand of how it all works.
>
> From what I can see, the entire system consists of a hierarchy of 
> classes, some, like the EHR, Composition, Instruction, Observation, 
> Evaluation and Action are defined as part of the reference model while 
> others, the archetypes, which are not part of the reference model, all 
> inherit from one of these RM classes. There are other RM classes, like 
> Entry, Navigation, Folder, Data_Structure, etc.) that are also part of 
> the RM and are properties of the archetypes. EHR is the base class, 
> containing, by reference, all the others. Navigational information 
> inside the composition archetypes is apparently critical. The 
> Composition type is the basic container for all other archetypes that 
> might be used within a single "contribution." And templates specify 
> which archetypes will exist in the composition types and in what 
> arrangement. All of this seems quite clear.
>
> Several things would seem to follow from all this:
>
> To access even the smallest detail from the overall record, the 
> software would need to request the entire record from the server, 
> presumably in the form of a binary stream, deserialize it all, and 
> then instantiate everything from the EHR class on down. It is somewhat 
> analogous to loading a document of some sort, something you load into 
> memory in its entirety before you can read anything from it. Am I 
> mistaken here? Or is there a way to instantiate small pieces of it? 
> That, it seems, would depend on the level at which serialization 
> occurs, whether it is serialized in pieces or in one big blob (or XML 
> document) or serializes it in smaller units.

Hi Randy,

It's a standard operation to query for and obtain objects of all sizes, 
from whole Compositions (the largest contiguous objects in an openEHR 
system) to an Element or even just the Quantity inside the Element. The 
wiki seems to be down right now, but there is a specification of the 
return structures for querying describing this.

>
> If it is all in one piece, how do you manage isolation? Can only one 
> user "check out" the record at one time? Or does it work something 
> like source control systems like SVN, where different people can 
> commit to a common project, merge differences, etc? Once you obtain 
> the binary stream from the server, you from then on know nothing of 
> changes others might also be making.

the update logic is Composition-level, and you can't commit something 
smaller than a Composition. The default logic is 'optimistic' meaning 
that there is no locking per se; instead, each request for a Composition 
includes the version (in meta-data not visible to the data author), and 
an attempt to write back a new version of a Composition will cause a 
check between the current top version and the 'current version' recorded 
for the Composition when it was retrieved. IF they are identical, the 
write will succeed. There is also branching supported in the 
specification. Read the Common IM for the details.
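
The check itself is tiny; a minimal sketch in pseudo-Python, with invented 
names (this is the shape of the logic, not code from the specification or any 
implementation):

    class VersionConflict(Exception):
        """Someone else committed a newer version after this copy was read."""

    def commit_composition(store, versioned_object_uid, new_content, preceding_version_uid):
        # 'preceding_version_uid' is the version the author was shown at read time
        head = store.latest_version_uid(versioned_object_uid)
        if head != preceding_version_uid:
            # somebody committed in the meantime; the caller must merge or retry
            raise VersionConflict(head)
        return store.append_version(versioned_object_uid, new_content,
                                    preceded_by=preceding_version_uid)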

>
> It would also seem to follow that when you want to save your work (say 
> you added some composition) that you would serialize the entire 
> record--which may contains years of information--and send it to the 
> server as a fresh new document, completely replacing the old one, 
> which, presumably, would be moved to some "past version" archive. 
> Correct?

Not correct ;-) The EHR is a virtual information object, and has no 
containment relationship to the Compositions or other items it includes.

> If so, how do you cope with  your storage requirements roughly 
> doubling with every tiny addition to the record? I'm probably way off 
> here; you've probably got an elegant answer to this, namely, some sort 
> of segmented storage, with each composition persisted in its own 
> little blob??

yep, that's much closer.

>
> You have event classes and you have persistent classes, well described 
> in the pdf. A persistent class would be something like a current drug 
> list. Following on with my understanding, it would seem that any 
> change to this list via a new composition submission, would 
> effectively create an entirely new copy of the list, embracing any 
> changes, however s

Multi-level modelling - what does it mean?

2013-04-10 Thread Thomas Beale
On 10/04/2013 16:42, Randolph Neall wrote:
>
>
> The real question thus comes down to what level of thought the 
> nameable components of a model should express. If the entire model 
> could be understood as a tree, how complex should the named branches 
> of that model be, and how enduring should the names of those branches 
> be and what sort of change triggers a change of name? Should named 
> branches be allowed at all? Or should the model consist only of 
> re-usable leaves on unnamed branches?  Branches, even very complex 
> branches, would certainly exist in the models based on his CCDs, yes, 
> but they would probably not be given names, and if they are, those 
> names would not endure across even tiny changes or extensions.

this is actually a very deep question. I don't know that we even know 
the answer for 20 year old programming languages, let alone archetypes. 
But it is a core part of the thinking.

Several realisations that have been made about this topic with respect 
to archetypes:

  * archetypes are designed as 'maximal definitions' around a focal topic.
  o this doesn't mean they are data sets (they are not, except in
some edge cases like some lab results, where the archetype acts
like a template as well), but that it is entirely reasonable to
aggregate data point and data group definitions about the same
topic together, even though only different subsets of the total
set would even be deployed. Example: the BP archetype contains
systolic, diastolic, MAP and Pulse Pressure. These are 3
mutually exclusive ways of measuring BP (i.e. sys+dia OR MAP OR
PP), and yet they are in the same archetype...
  * archetypes are units of governance
  * a repository of archetypes is a library of re-usable domain data
definitions

these principles give clinical modellers at least some handle on what is 
appropriate in terms of branches, numbers and naming of data points and 
so on. I am sure that at some theoretical level, mistakes are being 
made. But in the end, the models are usable and re-usable, and that 
counts for a lot.


>
> I wonder whether at even the most granular level, immutability is 
> realistic. There is a fair bit of content that Tim models for just one 
> ICD10 code component. Each ICD10-based CCD is itself a little tree, 
> with one branch with some leaves on it. What if the WHO "extends" the 
> low-level content in some small way, adding a leaf, without changing 
> the code? That would push Tim into a new CCD (named with a new GUID) 
> for the very same code with a completely different set of GUIDs for 
> ALL the leaves. Models using the old CCD would need to adopt the new 
> one, and querying across the transition would be aborted. And that 
> would be consequential for something as basic as an ICD10 code. This 
> concern probably reflects some ignorance on my part over what sort of 
> change the WHO permits to the content of a given ICD10 code, and how 
> Tim would adapt to that.
>

I adapt with codeine. Due to the pain of thinking about it ;-)

- thomas




Multi-level modelling - what does it mean?

2013-04-10 Thread Thomas Beale
On 10/04/2013 15:46, Tim Cook wrote:
> On Wed, Apr 10, 2013 at 11:37 AM, Thomas Beale
>  wrote:
>> Tim,
>>
>> Looking at the extract below, this MLHIM model would be hard to use as a
>> basis for generating source code facades, WSDL, JSON UI form specifications,
>> and other things we regularly generate downstream from templates.
>>
> I have no clue why you would say that when there are TONS of tools and
> libraries, built and tested, for working with XML Schemas and XML data
> in building WSDL, XForms, XQueries and other XML family artifacts, as
> well as JSON.

actually, that's true, but I was thinking more about the semantic 
content of the model rather than what tools can do the transformation 
work. And I just realised now that I am looking at an RM-only definition, not 
a clinical model definition, so ignore what I said for now. I need to 
look more closely at the clinical MLHIMs.

>
> Your earlier statement on this subject was just so bizarre to me that
> I didn't reply to it.  A simple survey of what is available as well as
> common sense dictate that these things all work together.  The only
> thing I can think of is your "I must build it myself" approach.  Which
> in that case, it might be difficult for you to build them.  But why?
>
> I don't understand your concern.  This is XML, you don't have to build
> or adapt it.

It depends on what you are trying to build. We have already described 
our aims:

  * 3 levels of models: Information Model / archetypes (library of
re-usable data point / group definitions) / templates (data sets)
  * specialisation ability between models in all layers (it comes for
free in the IM layer, if expressed as an OO model)
  * model - model referencing capability
  * the general semantics of object models, exemplified by e.g. UML 2
(imperfect as it is), are needed - generic types, inheritance,
redefinition
  * semantics of both constraint and extension within the archetype and
template layers of models

And we already know that XML Schema doesn't implement:

  * generic types
  * inheritance, in a reasonable way
  * even container types are needlessly annoying

So to achieve the aims, XML schema is pretty far from being adequate 
(and I recognise XSD 1.1 gets a bit closer than 1.0, but it doesn't 
really fix the core semantics). In particular, it's hard to see how to 
achieve the distinct archetype and template levels. It is however good 
enough to describe single documents / messages which is why we routinely 
generate XSDs from final templates, and deploy them as message content 
definitions.
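
To make the generic-types point above concrete: the RM type 
DV_INTERVAL<DV_QUANTITY> has no direct XSD counterpart, so a generated schema 
typically falls back to something like the fragment below (a sketch of the 
usual workaround, not a quote from the published openEHR XSDs); the 'only 
DV_QUANTITY' part of the constraint is simply lost and has to be policed 
elsewhere:

    <xs:complexType name="DV_INTERVAL">
        <xs:complexContent>
            <xs:extension base="DATA_VALUE">
                <xs:sequence>
                    <!-- lower/upper widen to the abstract DV_ORDERED; the real
                         subtype only shows up via xsi:type in instance data -->
                    <xs:element name="lower" type="DV_ORDERED" minOccurs="0"/>
                    <xs:element name="upper" type="DV_ORDERED" minOccurs="0"/>
                </xs:sequence>
            </xs:extension>
        </xs:complexContent>
    </xs:complexType>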

So yes, we have to do some innovation here, and invent something new. We 
are not the only ones. Late last year, I spent a month with Dr Stan 
Huff's group at InterMountain Health, and they have invented - slowly 
and surely over 15+ years - nearly the same ecosystem as what openEHR 
has. Some of their tools are way better, some not as good. Their use of 
terminology is stronger, but flexibility of archetyping / templating 
slightly less. They have 6,000 'clinical element models (CEMs)' (each 
one is roughly the same as a single archetype primary data point, + 
context data points), so the equivalent of about 300+ archetypes @ 
average of 20 data points / archetype. The multiple levels, redefinition 
& specialisation, flattening, downstream XSD generation are all there, 
just as for the archetype / template world. Have a look at their models 
here 
<http://www.clinicalelement.com/#/20130301/Intermountain/StandardLabObs>. The 
only reason this isn't famous is because it's a proprietary unpublished 
model formalism (well historically anyway). It's really impressive work.

The second major project that is also doing very nice modelling work in 
true multiple layers at the VA is the MDHT project 
<https://www.projects.openhealthtools.org/sf/projects/mdht/> that Dave 
Carlson is leading. They are exploiting UML 2.x to the absolute limit to 
make it do the kinds of things they want to do - very similar to the 
openEHR and Intermountain list of requirements. They are extending this 
project in the direction of ADL, under the 'AML', Archetype Modelling 
Language project, which will add further ADL / AOM semantics to the 
modelling capability. This tooling right now does not support all of the 
semantics of ADL/AOM 1.5, but on the other hand has a much closer 
binding with UML and all related tools and formalisms. Now UML has its 
faults, but its formal sophistication is a lot higher than XML Schema's.

The more recent Clinical Information Modelling Initiative 
<http://informatics.mayo.edu/CIMI/index.php/Main_Page>has coalesced some 
of the major e-health organisations from around the world, and they are 
championing a general 'semantic modelling' approach for health. CIMI 

Multi-level modelling - what does it mean?

2013-04-10 Thread Thomas Beale

Tim,

Looking at the extract below, this MLHIM model would be hard to use as a 
basis for generating source code facades, WSDL, JSON UI form 
specifications, and other things we regularly generate downstream from 
templates.

- thomas

On 10/04/2013 14:01, Timothy W. Cook wrote:
>> I would like to have the element-names referring to the 
>> information-model. I want to call an ITEM_LIST-attribute "items" just 
>> "items", not "items_12". If the validation-schema does not allow that 
>> (XML-Schema has this problem), and there can't be worked around, than 
>> the schema is not good enough for me. 
> Did you actually LOOK at MLHIM CCDs? For example:
>
>  
>  
>  
>  
>   ref="mlhim2:links"/>
>   ref="mlhim2:feeder-audit"/>
>   type="xs:language" default="en-US"/>
>   type="xs:string" default="utf-8"/>
>   ref="mlhim2:el-c2c7e652-46f8-498a-99d0-c85005d98f6f"/>
>   ref="mlhim2:entry-provider"/>
>   ref="mlhim2:other-participations"/>
>   ref="mlhim2:protocol-id"/>
>   name="current-state" type="xs:string"/>
>   ref="mlhim2:workflow-id"/>
>   ref="mlhim2:attestation"/>
>   ref="mlhim2:el-a69717be-7b3f-4be7-9fec-80f8ec1891e8"/>
>  
>  
>  
>  
>   substitutionGroup="mlhim2:entry-data"
> type="mlhim2:ct-a69717be-7b3f-4be7-9fec-80f8ec1891e8"/>
>   substitutionGroup="mlhim2:entry-subject"
> type="mlhim2:ct-c2c7e652-46f8-498a-99d0-c85005d98f6f"/>
>
>



Multi-level modelling - what does it mean?

2013-04-10 Thread Thomas Beale
On 10/04/2013 13:33, Tim Cook wrote:
> [reposted for Tim; his original bounced]
>
> On Wed, Apr 10, 2013 at 5:14 AM, Thomas Beale
>   wrote:
>
>> it's similar, but misses the crucial distinction between archetypes and
>> templates. Without that there is no library of re-usable concepts to use in
>> your data-set definitions. As far as I can tell, this distinction just
>> doesn't exist in MLHIM. So it means that every 'model' has to make up its
>> own definition of standard items like vital signs, lab analytes and so on.
>>
> MLHIM allows reuse but does not allow redefinition.  Redefinition of a
> component after it has been used to generate instance data is a BAD
> THING.  You are simply looking for trouble when models can morph into
> something they were previously not.  Then we can discuss the
> complexity managing that process.  It just isn't necessary in MLHIM.

A couple of things to say here. 'Redefinition' as in openEHR and most 
model-based systems I know of doesn't mean you change something that has 
been deployed. It means being able to specialise an existing model in 
the design environment, in a similar way as in object-oriented 
programming. So the point is to be able to re-use and adapt existing 
definitions, not just 'use' things.

Not being able to do this means either:

  * you are stuck using someone else's definition, and you just live
with not having the bits you wanted
  * you have to make a copy, and rewrite to suit yourself. Now you have
a different model, technically unrelated to the original, and tools
have no idea that they might be able to query for some of the same
data points.

There is actually no such thing as 'redefining a deployed model'. Models 
can be evolved over time, and they get new version numbers. Breaking 
changes get new major versions, which are treated as distinct models in 
archetype land.

But new versions with non-breaking changes can be treated by querying, 
modelling tools, reporting etc as being compatible with earlier 
versions. Being able to query safely over longitudinal data whose models 
change over time is essential in health.
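
To make that concrete, here is a minimal Python sketch of the two rules 
just described - specialisation by extending the concept part of the 
archetype id, and only a major-version change counting as a new model. 
The helper names and the id-parsing shortcuts are mine for illustration; 
this is not openEHR tooling, just the idea:

    def parse_id(archetype_id):
        """Split e.g. 'openEHR-EHR-OBSERVATION.blood_pressure-neonatal.v2'
        into (rm class, concept lineage, major version)."""
        rm_class, concept, version = archetype_id.split(".", 2)
        major = int(version.lstrip("v").split(".")[0])
        return rm_class, concept.split("-"), major

    def is_specialisation_of(child_id, parent_id):
        """A child specialises a parent if its concept lineage extends the parent's."""
        c_rm, c_line, _ = parse_id(child_id)
        p_rm, p_line, _ = parse_id(parent_id)
        return (c_rm == p_rm
                and len(c_line) > len(p_line)
                and c_line[:len(p_line)] == p_line)

    def same_queryable_model(id_a, id_b):
        """Same class, same lineage, same *major* version => treat as one model."""
        return parse_id(id_a) == parse_id(id_b)

    parent = "openEHR-EHR-OBSERVATION.blood_pressure.v1"
    child = "openEHR-EHR-OBSERVATION.blood_pressure-neonatal.v1"
    print(is_specialisation_of(child, parent))      # True: re-use by specialisation
    print(same_queryable_model(parent,
          "openEHR-EHR-OBSERVATION.blood_pressure.v1.2.0"))  # True: non-breaking revision
    print(same_queryable_model(parent,
          "openEHR-EHR-OBSERVATION.blood_pressure.v2"))      # False: breaking change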

It's clear that these needs (specialisation of models, longitudinal 
querying over data) are seen as essential by large orgs, e.g. the CIMI 
members Mayo clinic, InterMountain Health, Veterans Health, Nehta and so 
on. The OHT Model-driven Health Tools (MDHT) project is founded upon 
concepts like model specialisation and re-use.

I don't think there is any way these needs can be ignored in a scalable, 
adaptable health information modelling ecosystem.

- thomas




Multi-level modelling - what does it mean?

2013-04-10 Thread Thomas Beale
[reposted for Tim; his original bounced]

On Wed, Apr 10, 2013 at 5:14 AM, Thomas Beale
  wrote:

> it's similar, but misses the crucial distinction between archetypes and
> templates. Without that there is no library of re-usable concepts to use in
> your data-set definitions. As far as I can tell, this distinction just
> doesn't exist in MLHIM. So it means that every 'model' has to make up its
> own definition of standard items like vital signs, lab analytes and so on.
>
MLHIM allows reuse but does not allow redefinition.  Redefinition of a
component after it has been used to generate instance data is a BAD
THING.  You are simply looking for trouble when models can morph into
something they were previously not.  Then we can discuss the
complexity managing that process.  It just isn't necessary in MLHIM.


>> You will notice that we encourage artifact re-use in MLHIM as well.
>> CCDs, PCTs, XForms and XQueries are all reusable.  We just do not
>> expect that there will ever be global consensus on any one artifact.
> But you did say that there is no specialisation of models possible. That
> removes a major mode of re-use. With archetypes, a development project can
> take 10 archetypes from a national CKM, or openEHR's, and formally
> specialise them, by adding further restrictions and/or extra data points, as
> well as translating them, if that's needed. Those specialised archetypes
> then go into templates they build locally. This system gives fine-grained
> re-use and re-definition, while guaranteeing that a query for any
> archetype-defined systolic BP based on a shared archetype, will work,
> anywhere in the world, regardless of data set, application, clinical context
> or language.
>
>
>> As far as reading the files.  The meta data is standards compliant RDF
>> in standards compliant Dublin Core, in a standards compliant XML
>> Schema.  What is tricky or difficult about that?
>>
>> Yes Bert, most people use tools besides a text editor to do real
>> development.  Maybe only yourself and Richard Stallman use Emacs for
>> everything?
> I have sympathies both ways. Example: trying to read RDF in raw form is
> useless. You can use a tool, but I'd rather have OWL abstract to look at,
> and that's just text.
>
> - thomas
>
>
> ___
> openEHR-technical mailing list
> openEHR-technical at lists.openehr.org
> http://lists.openehr.org/mailman/listinfo/openehr-technical_lists.openehr.org


-- 
Timothy Cook, MSc
+55 21 94711995
MLHIM http://www.mlhim.org
Like Us on FB: https://www.facebook.com/mlhim2
Circle us on G+: http://goo.gl/44EV5
Google Scholar: http://goo.gl/MMZ1o
LinkedIn Profile: http://www.linkedin.com/in/timothywaynecook



Multi-level modelling - what does it mean?

2013-04-10 Thread Thomas Beale
On 09/04/2013 22:18, Tim Cook wrote:
> There are a large number of misconceptions and incorrect assumptions
> in this thread.  I don't have time right now to address all of them
> but I will later this week.
>
> Quickly though, there are no "tricks" to what we do in MLHIM.
> Everything is 100% W3C standards compliant.
>
> Exploiting a bug in a tool (like Bert is doing in Xerces) so you can
> get what you want, is a "trick".  A very poor practice trick at that.
> One that is certain to come back to bite you and your customers.
>
> Tom's definition of Multi-Level Modelling is not different than what
> has been on the main page of the MLHIM website ( www.mlhim.org ) for a
> long time.  I am not sure how anyone can think that I do not
> understand the layers and reusability issues at stake.

it's similar, but misses the crucial distinction between archetypes and 
templates. Without that there is no library of re-usable concepts to use 
in your data-set definitions. As far as I can tell, this distinction 
just doesn't exist in MLHIM. So it means that every 'model' has to make 
up its own definition of standard items like vital signs, lab analytes 
and so on.

> You will notice that we encourage artifact re-use in MLHIM as well.
> CCDs, PCTs, XForms and XQueries are all reusable.  We just do not
> expect that there will ever be global consensus on any one artifact.

But you did say that there is no specialisation of models possible. That 
removes a major mode of re-use. With archetypes, a development project 
can take 10 archetypes from a national CKM, or openEHR's, and formally 
specialise them, by adding further restrictions and/or extra data 
points, as well as translating them, if that's needed. Those specialised 
archetypes then go into templates they build locally. This system gives 
fine-grained re-use and re-definition, while guaranteeing that a query 
for any archetype-defined systolic BP based on a shared archetype, will 
work, anywhere in the world, regardless of data set, application, 
clinical context or language.

> As far as reading the files.  The meta data is standards compliant RDF
> in standards compliant Dublin Core, in a standards compliant XML
> Schema.  What is tricky or difficult about that?
>
> Yes Bert, most people use tools besides a text editor to do real
> development.  Maybe only yourself and Richard Stallman use Emacs for
> everything?

I have sympathies both ways. Example: trying to read RDF in raw form is 
useless. You can use a tool, but I'd rather have OWL abstract to look 
at, and that's just text.

- thomas




openEHR wiki back online; need to watch performance

2013-04-09 Thread Thomas Beale

The openEHR website and wiki have been moved, and the wiki re-enabled. 
As of tonight it appears to be working fine, but earlier in the day, 
there were severe performance problems with the server, so proceed 
prudently for the next day or two, in case we need to offline the server 
and adjust things.

thanks for your patience.

- thomas beale



Multi-level modelling - what does it mean?

2013-04-08 Thread Thomas Beale

To put some numbers on things... in a 2012 snapshot of the openEHR.org 
CKM archetypes there are:

  * 267 compiling (i.e. technically valid) archetypes
  o including 94 specialised ones
  * In these archetypes there are:
  o 3208 'archetypable' nodes (i.e. LOCATABLE nodes)
  o of which 2163 are leaf-level nodes with DATA_VALUE objects.

If we concentrate on the leaf level nodes, we can think of 2163 
re-usable data points / groups for general medicine. That's not nothing.

The downside:

  * the quality is variable (due to insufficient modelling work)
  * the coverage of medicine is patchy. Some areas are heavily covered,
others with almost nothing.

Nevertheless... these archetypes are /commonly re-used/ in local 
deployment situations, including some of the ones mentioned here.
 
Re-used usually means that:

  * they were either used as is, or further specialised in order to add
or modify some data points / groups
  * used by locally built templates, to create data set definitions that
are actually used in systems.
  * they were also used by at least one major national programme
(Australia) as a basis for production health information definitions
for national use. Some of these archetypes will be re-incorporated
into openEHR.org.
  * 30 demographic archetypes were provided by a Brazilian research group
  * numerous archetypes have had translations added by various health
professionals and research groups.

With all the limitations implied in the above (and given the relative 
lack of endorsement by official bodies, who prefer largely 
hard-to-implement 'official' standards), I don't think this can be 
claimed to be a failure.

As I said before, although there are a lot of things that can be 
improved (e.g. reference model simplifications, ADL/AOM 1.5 etc), there 
has been no thought of getting rid of or substantially changing:

  * the basic 3 levels of reference model, archetypes (the re-usable
library of domain definitions) and templates (the usually locally
produced data set definitions)
  * the ability to specialise within these levels, i.e. use an
inheritance relationship
  * the ability to connect by association one model with other(s)

Indeed, the direction of development is to strengthen all of these. If 
you consider each level of inheritance (ignore the RM) as a 'level', 
this is what I would call 'multi-level' modelling. From the discussions 
so far, I think the MLHIM approach is essentially a method of defining 
XSD document definitions as constrained versions of an XSD-expressed 
base information model. As Tim explained, there is no specialisation, 
nor any distinction between the library (archetypes) and data-set 
(template) levels. MLHIM may be easier to implement in the short term. 
However, I think the capability for scaling (implementing numerous new 
data-sets but with diminishing effort due to a greater library level) 
and re-use will be relatively limited in the medium to long term. I also 
think the ability to generate different kinds of artefacts from the 
underlying definitions - e.g. UI data capture screen definitions, UI 
display definitions, PDF definitions, WSDL and so on, will be relatively 
limited.

It may be that the task we set in openEHR is too ambitious! Anyway, this 
is the world as I see it...

- thomas




The Truth About XML was: openEHR Subversion => Github move progress [on behalf of Tim Cook]

2013-04-08 Thread Thomas Beale

I am always somewhat surprised as well. Thanks by the way for your 
clarifying notes, that is exactly how I would summarise the discussions.

- thomas

On 07/04/2013 22:08, Randolph Neall wrote:
> Hi Thomas,
> I'm surprised that at this advanced stage of openEHR's maturity you'd 
> still have to defend concepts like these, which are self-evident. Your 
> architecture, or something closely resembling it, is actually the only 
> path to (1) computability, (2) shareability, and (3) coherent and 
> maintainable program code. Ultimately the real enemy is chaos, and 
> that's precisely what you get unless someone detects and names the 
> universal patterns amidst the diversity, and structures program code 
> to conform to such patterns. I'm not clear why this should be 
> controversial.
> This discussion is now dividing into two unrelated branches: (1) the 
> desirability of consensus around the content of data model, and (2) 
> whether the model itself, whether widely agreed to or not, 
> should embody a multi-level abstraction hierarchy permitting code and 
> logic reuse at its more abstract levels. Both branches, wrongly 
> argued, are a direct invitation to chaos. From what I understand of 
> it, openEHR is an attempt, in both regards, to avoid chaos. I can only 
> wish you success against the two challenges.




The Truth About XML was: openEHR Subversion => Github move progress [on behalf of Tim Cook]

2013-04-07 Thread Thomas Beale
On 07/04/2013 12:11, Grahame Grieve wrote:
> Hi Tom
>
> You ask:
>
> > Is there a better meta-architecture available?
>
> When actually the question at hand appears to be: is it even worth 
> having one?
>
> I don't think that this is a question with a technical answer. It's a 
> question of what you are trying to achieve. I've written about this 
> here: http://www.healthintersections.com.au/?p=820
>

There is always a meta-architecture. It's just a question of whether 
system builders are conscious of it. If they aren't, then by definition 
they are just doing /ad hoc/ development, with no comprehension of the 
semantics of what they build.

I prefer to have conscious design going on, and make some attempts at 
defining rules for system semantics. Then you know what you can expect 
the system to do or not.

To go back to the question of meta-architecture, let me ask the 
following questions...

1. is it worth trying to have a publicly agreed (by some community at 
least) information model? I.e. to at least be able to share a 
'Quantity', a data tree of some kind, a 'clinical statement' and so on?
 => in my view yes. Therefore, define and publish some information 
model. Aka 'reference model' in openEHR.

2. do we really want to redefine the 'serum sodium', 'heart rate' and 
'primary diagnosis' data points every time we define some clinical data set?
 => in my view no. Therefore, provide a way to define a library of 
re-usable domain data points and data groups (openEHR version of this: 
archetypes)

3. do we need a way to define data sets specific to use cases, e.g., the 
contents of messages, documents etc etc?
 => in my view, yes, it seems obvious. Therefore, provide a way to 
define such data sets, using the library of 'standard data 
points/groups', and also the reference model.

and

4. would we like a way of querying the data based on the library of 
re-usable data items? E.g. is it reasonable to expect to query for 
'blood sugar' across data-sets created by different applications & sources?
 => in my view yes. To fail on this is not to be able to use the 
data except in some /ad hoc/ brute force sense.
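
As a toy illustration of question 4 - not AQL and not any real openEHR 
API; the record layout and the archetype ids below are made up for the 
example - a query keyed on a shared archetype id and path finds the same 
data point across data sets created by different applications:

    records = [
        {"archetype_id": "openEHR-EHR-OBSERVATION.lab_test-blood_glucose.v1",
         "path": "/data/events/items/value", "value": 6.2, "units": "mmol/l",
         "source_app": "GP system"},
        {"archetype_id": "openEHR-EHR-OBSERVATION.lab_test-blood_glucose.v1",
         "path": "/data/events/items/value", "value": 7.9, "units": "mmol/l",
         "source_app": "hospital pathology feed"},
        {"archetype_id": "openEHR-EHR-OBSERVATION.blood_pressure.v1",
         "path": "/data/events/items/value", "value": 120, "units": "mm[Hg]",
         "source_app": "nursing app"},
    ]

    def query(archetype_id, path):
        """Return every value recorded against the given archetype node,
        regardless of which application or data set committed it."""
        return [r for r in records
                if r["archetype_id"] == archetype_id and r["path"] == path]

    for hit in query("openEHR-EHR-OBSERVATION.lab_test-blood_glucose.v1",
                     "/data/events/items/value"):
        print(hit["source_app"], hit["value"], hit["units"])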


You (I don't mean Grahame, I mean anyone ;-) may answer differently, but 
if you don't care about these questions, it means you have a 
fundamentally different view about how to deal with information in 
complex domains requiring information sharing, computation, and 
ultimately intelligent analysis (health is just one such domain). Either 
you think that the above is a 'nice idea' but unachievable, or else that 
it's irrelevant to real needs, or.. something else.

If you think the questions are relevant but have different answers to 
them, it means you believe in a different meta-architecture.

Note that these considerations are actually orthogonal to whether 
standards should be built by agreeing only on messages between systems, 
or how systems are built (the topic of Grahame's blog post).

- thomas






The Truth About XML was: openEHR Subversion => Github move progress [on behalf of Tim Cook]

2013-04-07 Thread Thomas Beale
On 07/04/2013 00:35, Bert Verhees wrote:
>> That's expedient, but it's also a guarantee of non-interoperability.
> As far as I can see, also from my experience, nor OpenEHR, nor MLHIM will be 
> the only datamodel system on the world. Cooperation with other systems will 
> always need a message-format. The same goes for other systems. Mapping will 
> always be (at least partly) done manually.
>
> The goal, what the customer wants, is not a solution, which dictates him to 
> throw away his system, but he wants connectivity in which his system can 
> participate.
>

Hi Bert,

that's obviously one thing customers want - data interoperability. But - 
what do they want to do with the data? Let's say that want to have a 
managed medication list, or run a query that identifies patients at risk 
of hypertension, or the nursing software wants to graph the heart rate. 
Then they need more - just being able to get the data isn't enough. You 
have to be able to compute with it. That means standardising the meaning 
somehow.

Now, each healthcare provider / vendor / solution producer could just 
define their own 'content models'. Like they do today. Or we could try 
and standardise on some of them.

The openEHR way seems to me the one that can work: because it 
standardises on the archetypes, which are a library of data points and 
data groups, it means that anyone can write their own data set 
specification (template) based on that. So you define what blood 
pressure looks like once (in the archetype) and it gets used in 1000 
places, in different ways. But - it's guaranteed to be queryable by 
queries based on the archetype.

That's the essence of the system - 3 modelling layers:

  * reference model - agree on the data
  * archetypes - agree on the clinical data points and data groups -
this only needs to be done more or less once; queries are based on
these models
  * templates - define localised / specific data sets using the archetypes

We're working on major improvements on the details in ADL 1.5, but I 
have to admit I don't think of trying to change the ground rules. These 
three logical levels are the minimum for data interoperability, content 
standardisation, and local freedom. With specialisation and association 
between models in the archetype and template layers, that's a lot of 
freedom to customise.
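
A rough sketch of those three layers in Python terms - the class, the 
dictionaries and the node names below are mine for illustration, not the 
reference model or any published archetype:

    from dataclasses import dataclass

    # 1. reference model: a generic, domain-free data type everyone agrees on
    @dataclass
    class Quantity:
        magnitude: float
        units: str

    # 2. archetype: the library defines 'blood pressure' once
    BLOOD_PRESSURE_ARCHETYPE = {
        "id": "openEHR-EHR-OBSERVATION.blood_pressure.v1",
        "systolic":  {"rm_type": "Quantity", "units": "mm[Hg]"},
        "diastolic": {"rm_type": "Quantity", "units": "mm[Hg]"},
    }

    # 3. templates: local data sets pick items from the library; they never
    #    redefine 'blood pressure' themselves
    PRE_OP_CHECK = {"items": [(BLOOD_PRESSURE_ARCHETYPE["id"], "systolic"),
                              (BLOOD_PRESSURE_ARCHETYPE["id"], "diastolic")]}
    HOME_BP_DIARY = {"items": [(BLOOD_PRESSURE_ARCHETYPE["id"], "systolic")]}

    # data captured under either template keys its values on the same
    # archetype node, so one query definition covers both data sets
    reading = {"archetype_node": (BLOOD_PRESSURE_ARCHETYPE["id"], "systolic"),
               "value": Quantity(122.0, "mm[Hg]")}
    print(reading)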

Is there a better meta-architecture available?

- thomas





The Truth About XML was: openEHR Subversion => Github move progress [on behalf of Tim Cook]

2013-04-07 Thread Thomas Beale
On 06/04/2013 23:50, Thomas Beale wrote:
>
> [This is Tim again, initially bounced]
>
>> And that is the issue, and what is at the root of this dispute. Tim does not
>> see the point of specialization or redefinition, which, in my opinion, is
>> why he can hold forth so strongly for XML.
>>
>> Randy Neall
> You are mostly correct.  It isn't that I don't think that re-use is a
> good idea.  The knowledge modellers and developers are telling us by
> their actions that do not want to participate in the top-down, maximal
> data model approach.  As I have said many times, for many years; it is
> a wonderfully engineered eco-system. Now we know, it just doesn't work
> in real practice on a global basis.

actually, I will be a bit more specific. Let's say we are talking about 
archetypes for some of the following topics (the following are some 
openEHR CLUSTER archetypes):

    [list of example CLUSTER archetype names was attached as an image, scrubbed by the list archiver]

None of these can be defined by 'developers'. They are clinical content, 
and only clinical professionals can develop proper versions of them. So 
what you are saying is that 'knowledge modellers' (presumably 
physicians) don't want to build such models by participating in a 
modelling exercise in which they communicate with other physicians 
working on the same models? It seems to me that the only alternative is 
that they build their own private models and ignore everyone else. 
That's expedient, but it's also a guarantee of non-interoperability.

Maybe you can explain your statements in more detail?

thanks

- thomas




The Truth About XML was: openEHR Subversion => Github move progress [on behalf of Tim Cook]

2013-04-07 Thread Thomas Beale
On 06/04/2013 23:50, Thomas Beale wrote:
>
> [This is Tim again, initially bounced]
>
>> And that is the issue, and what is at the root of this dispute. Tim does not
>> see the point of specialization or redefinition, which, in my opinion, is
>> why he can hold forth so strongly for XML.
>>
>> Randy Neall
> You are mostly correct.  It isn't that I don't think that re-use is a
> good idea.  The knowledge modellers and developers are telling us by
> their actions that do not want to participate in the top-down, maximal
> data model approach.  As I have said many times, for many years; it is
> a wonderfully engineered eco-system. Now we know, it just doesn't work
> in real practice on a global basis.

Tim,

obviously some of us are interested in this statement. You say 'it just 
doesn't work in real practice'. Our experience is different, and I am 
interested in your evidence / justification of this statement.

- thomas



The Truth About XML was: openEHR Subversion => Github move progress [on behalf of Tim Cook]

2013-04-06 Thread Thomas Beale

[This is Tim again, initially bounced]

> And that is the issue, and what is at the root of this dispute. Tim does not
> see the point of specialization or redefinition, which, in my opinion, is
> why he can hold forth so strongly for XML.
>
> Randy Neall

You are mostly correct.  It isn't that I don't think that re-use is a
good idea.  The knowledge modellers and developers are telling us by
their actions that do not want to participate in the top-down, maximal
data model approach.  As I have said many times, for many years; it is
a wonderfully engineered eco-system. Now we know, it just doesn't work
in real practice on a global basis.

So that had to change. Add in some other simplifications in the RM and
openEHR turns into MLHIM.  My goal is to encourage multi-level
modelling to solve the semantic interoperability issue. Whatever
acronym you want to tie to it.

I know that MLHIM isn't perfect, but it is designed with agility and
data durability in mind.

--Tim




The Truth About XML was: openEHR Subversion => Github move progress [on behalf of Tim Cook]

2013-04-05 Thread Thomas Beale
On 05/04/2013 13:03, Thomas Beale wrote:
>
> [original post by Tim bounced; reposting manually for him]
>
> On Thu, Apr 4, 2013 at 12:50 PM, Thomas Beale
>   wrote:
>
>> if you mean the competing inheritance models - I have yet to meet any XML
>> specialist who thinks they work. The maths are against it.
>>
> Interesting that you, the creator of a technology that makes many
> people very uncomfortable (multi-level modelling), thinks that
> conventional users of XML have something to say regarding XML as a
> multi-level implementation.  Confusing.

not sure what you want to say here!


>> Can you point to some MLHIM models that show specialisation, redefinition,
>> clarity of expression, that sort of thing? I tried to find some but ran into
>> raw XML source.
> There is no need for specialisation or redefinition in MLHIM.  Concept
> Constraint Definitions (CCDS) are immutable once published. In
> conjunction with their included Reference Model version they endure in
> order to remain as the model for that instance data.  Unlike you, I
> believe that the ability to read and validate XML data will be around
> for a long time to come.  There is simply too much of it for it to
> go away anytime soon.  When it does go away, there will ways to
> translate it to whatever comes next. Such as there is today.

I don't disagree with that obviously. All openEHR systems I am aware of 
process XML data routinely, including HL7v2 data, and CDAs.

But if you say there is no need for specialisation or redefinition it 
means there is no re-use to speak of - every model is its own thing. 
This is a major departure from the archetype approach, which is founded 
upon model reuse and adaptation.

>> so does openEHR, that's what namespaces are about. If two groups both define
>> a 'blood pressure' archetype today, there is an immediate problem. In the
>> future with namespaced ids, the problem becomes manageable, since both forms
>> can co-exist.
>>
> Thanks for confirming this problem, for today.  I hope that people
> realize the potential issues that they are creating by operating
> outside of the eco-system.  I also hope that whenever, 'the future',
> arrives that people will understand that the need to use this
> namespace capability. Are there estimates yet as to when the future
> will arrive?
>

Now, more or less. New versions of the documents are being published 
imminently, and the tooling is catching up to namespaces (also other 
things like annotations).

- thomas



The Truth About XML was: openEHR Subversion => Github move progress [on behalf of Tim Cook]

2013-04-05 Thread Thomas Beale

[original post by Tim bounced; reposting manually for him]

On Thu, Apr 4, 2013 at 12:50 PM, Thomas Beale
  wrote:

> if you mean the competing inheritance models - I have yet to meet any XML
> specialist who thinks they work. The maths are against it.
>
Interesting that you, the creator of a technology that makes many
people very uncomfortable (multi-level modelling), thinks that
conventional users of XML have something to say regarding XML as a
multi-level implementation.  Confusing.



> but your original statement was (I thought) that you are using XML for the
> information model as well.

Not specifically. We knew that we wanted to exercise all of the
capabilities of XML in actual implementation.  So, when building th
information model we remained conscious of that fact.  So, we knew
there were limitations.  Otherwise, the model would just be openEHR
without the EHR structures.  But, we wanted to be better prepared for
implementability without having to build all of the tools and
technologies that already exist.  We took a great idea and used
pragmatism on it.



> Can you point to some MLHIM models that show specialisation, redefinition,
> clarity of expression, that sort of thing? I tried to find some but ran into
> raw XML source.

There is no need for specialisation or redefinition in MLHIM.  Concept
Constraint Definitions (CCDS) are immutable once published. In
conjunction with their included Reference Model version they endure in
order to remain as the model for that instance data.  Unlike you, I
believe that the ability to read and validate XML data will be around
for a long time to come.  There is simply too much of it for it to
go away anytime soon.  When it does go away, there will ways to
translate it to whatever comes next. Such as there is today.

The conceptual model expressed as a mindmap (XMind template):
https://github.com/mlhim/tools/blob/master/xmind_templates/MLHIM_Model-2.4.2.xmt

A UML(ish) view:
https://drive.google.com/a/mlhim.org/?tab=mo#folders/0B9KiX8eH4fiKUFhTb2w2ZGJlWVU
I know that there is now a convention to express XML in UML models but
I have not had the time to study it properly.

There are examples; from instance data -> CCDs -> RM along with
documentation and XQL examples:
https://github.com/mlhim/tech-docs

There are more than 1500 CCDs published on the Healthcare Knowledge
Component Repository site:
http://www.hkcr.net/

Historical code and other information is on Launchpad along with
sub-projects at:http://launchpad.net/mlhim

HTH.  Thanks for asking.


> well that's just the point, they don't  - it's possible to define a model so
> that an XSD form, a programming form, a display screen form and many others
> are all derivable from that source model. We only want to define the model
> of 'microbiology result' once, after all. This single-source modelling is a
> key goal of the approach.

Right and being able to be 'transformed' into all of those expressions
is what the XML family of tools is very well known for.  So, I
misunderstood your original comment.


> there is no data in ADL, only models. Not sure what you are trying to say
> here

Really?  I have seen several examples of dADL with instance information in it.


> well, pretty much the whole world is using programming languages that are
> essentially object-oriented or object-enabled - even uber languages like
> Haskell do most OO tricks. You're using Python, that's an OO language.
>
That is true.  And each and every one of them have binding libraries
to XML Schema.

> It must (according to you) be easy to express e.g. this part of the openEHR 
> RM in XSD 1.1. I would be very interested to see how it deals  >  with the 
> generic types and inheritance, both handled by any normal programming 
> language.

I don't think you will find where I ever used the word easy. But yes
it is possible.  If you are interested enough to study then you can
discover how it can be done.  Prior to removing the unnecessary things
from the RM (for MLHIM 2.x), MLHIM was openEHR 1.0.1 compliant.  I am
not sure now if those artifacts exist.  You can check on Launchpad.



> XML wasn't designed for data representation, it was designed for structured
> document mark-up. That's why it's so horrible for data representation.
>
That is technically true; originally.  However, it is not
representative of what XML is today and is the reason why XML Schema
was designed and revised. In your opinion it is horrible but there is
a global industry that doesn't agree with you.


> well let me just point to a single feature of object languages (including
> ADL) - inheritance / specialisation. Are you saying that's of no use? How do
> you propose to adapt a model that you have to include local needs, without
> breaking the parent model semantics?

Witness my use of 

The Truth About XML was: openEHR Subversion => Github move progress

2013-04-04 Thread Thomas Beale
On 04/04/2013 12:09, Tim Cook wrote:
>>> well, since the primary openEHR projects are in Java, Ruby, C#, PHP, etc, I
>>> don't see where the disconnect between the projects and the talent pool is.
>>> I think if you look at the 'who is using it' pages, and also the openEHR
>>> Github projects, you won't find much that doesn't connect to the mainstream.
> The discussion about talent pool is about the data representation and
> constraint languages.
> XML and ADL. The development languages are common across the application 
> domain.
> I know that you believe that ADL is superior because it was designed
> specifically to support the openEHR information model. It is an impressive 
> piece of work, but
> this is where its value falls off.

actually, ADL was specifically designed to not support any information 
model, and it doesn't. It's just an abstract syntax, free of the 
vagaries of any other syntax.

> XML has widespread industry acceptance and plethora of development and 
> validation tools against a global standard.

sure. In terms of being able to /serialise/ archetypes to XML, that has 
been available for probably a decade, and is in wide use. Some users 
ignore ADL entirely. I don't think anyone has an issue with this.

>> > 9-month old XML Schema 1.1 spec>
> The industry standard XML Schema Language is 1.1. The first draft was
> published in April 2004
> making it nine years old,

well, but it's been stillborn for years, everyone knows that...

>> But XML schema as an information modelling language has been of no serious
>> use, primarily because its inheritance model is utterly broken. There are
>> two competing notions of specialisation - restriction and extension.
> Interesting.  I believe that the broader industry sees them as
> complimentary, not competing.

if you mean the competing inheritance models - I have yet to meet any 
XML specialist who thinks they work. The maths are against it.


>> Restriction is not a tool you can use in object-land because the semantics
>> are additive down the inheritance hierarchy, but you can of course try and
>> use it for constraint modelling.
> Restriction, as its name implies, is exactly intended and very useful for 
> constraint modelling.
> Constraint modelling by restriction is, as you know, the corner-stone of 
> multi-level modelling.
> Not OO modelling. Which is, of course, why openEHR has a reference model and 
> a constraint model.
> They are used for the two complimetary aspects of multi-level modelling.

but your original statement was (I thought) that you are using XML for 
the information model as well. That's where it breaks, because of the 
inability to represent basic concepts like inheritance in the way that 
is normally used in object modelling (and most database schema languages 
these days).


>> Although it is generally too weak for
>> anything serious, and most projects I have seen going this route eventually
>> give in and build tools to interpolate Schematron statements to do the job
>> properly. Now you have two languages, plus you are mixing object (additive)
>> and constraint (subtractive) modelling.
>>
> Those examples you are referring to are not using XML Schema 1.1.
> Or at least not in its specified capacity. There is no longer a need
> for RelaxNG or Schematron to be mixed-in.
> Your information on XML technologies seems to be quite a bit out of date.

I'm just reporting what I know to be the case in various current 
national e-health modelling initiatives, none of which I am directly 
involved in... all the serious ones use XSD 1.0 + Schematron.


>> Add to this the fact that the inheritance rules for XML attributes and
>> Elements are different, and you have a modelling disaster area.
>>
> I will confess that XML attributes are, IMHO, over used and inject
> semantics into a model
> that shouldn't be there.  For example HL7v3 and FHIR use them extensively.
>
>
>> James Clark, designer of Relax NG, sees inheritance in XML as a design flaw
>> (from http://www.thaiopensource.com/relaxng/design.html#section:15 ):
> Of course! But then you are referencing an undated document by the
> author of a competing/complimentary tool,
> that announces RelaxNG as new AND its most recent
> reference is 2001.
> So, my guess is that it is at least a decade old. Hardly a valid opinion 
> today.

I can't say whether it is valid with respect to XSD 1.1, but it remains 
valid with respect to 1.0. I don't see that XSD 1.1 has a healthier 
inheritance model, so it seems to me that anyone trying to do 
information modelling (not constraint modelling) is still going to get 
into trouble. I can't see anything that contradicts Clark's statements, 
even if they are not from last week.

But let's assume I don't know what I am talking about. It must 
(according to you) be easy to express e.g. this part of the openEHR RM 
in XSD 1.1. I

The Truth About XML was: openEHR Subversion => Github move progress

2013-03-29 Thread Thomas Beale
On 29/03/2013 16:19, Thomas Beale wrote:
>
> Hi Tim,
>
> I don't see any problem here. The extant open 'reference 
> implementation' of openEHR has been in Java for years now, and 
> secondarily in Ruby (openEHR.jp <http://openehr.jp/>) and C# 
> (codeplex.com <http://openehr.codeplex.com/>). The original Eiffel 
> prototype was from nearly 10 years ago and was simply how I prototyped 
> things from the GEHR project, while other OO languages matured.

I should make it clear that the above is not exhaustive or definitive - 
there is the openEHRgen framework using Groovy, at least one openEHR PHP 
product and much more diversity out there.

- thomas



The Truth About XML was: openEHR Subversion => Github move progress

2013-03-29 Thread Thomas Beale

On 29/03/2013 14:15, Tim Cook wrote:
> Hi Tom,
>
> I have amended the Subject Line since the thread has diverged a bit.
>
> [comments inline]
>
> On Thu, Mar 28, 2013 at 9:55 AM, Thomas Beale
>   wrote:
>> one of the problems with LinkEHR (which does have many good features) is
>> that it is driven off XSD. In principle, XSD is already a deformation of any
>> but the most trivial object model, due to its non-OO semantics. As time goes
>> on, it is clear that the XSD expression of data models like openEHR, 13606
>> etc will be more and more heavily optimised for XML data. This guarantees
>> such XSDs will be a further deformation of the original object model - the
>> view that programmers use.
> I agree with you that you cannot represent an object model, fully, in
> XML Schema language.
> However, you seem to promote the idea that object oriented modelling
> is the only information modelling approach[1].
> This is a critical failure. The are many ways to engineer software
> using many different modelling approaches.
> So abstract information modelling, as you have noted, does not
> necessarily fit all possible software modelling approaches and it is
> unrealistic to think that it does. In desiging the openEHR model you
> chose to use object oriented modelling. The openEHR reference
> implementation uses a rather obscure, though quite pure,
> implementation language, Eiffel. I think that history has shown that
> this has caused some issues in development in other object oriented
> languages.

Hi Tim,

I don't see any problem here. The extant open 'reference implementation' 
of openEHR has been in Java for years now, and secondarily in Ruby 
(openEHR.jp <http://openehr.jp/>) and C# (codeplex.com 
<http://openehr.codeplex.com/>). The original Eiffel prototype was from 
nearly 10 years ago and was simply how I prototyped things from the GEHR 
project, while other OO languages matured.

I am not sure that we have suffered any critical failure - can you point 
it out?


>> So now if you build archetypes based on the XSD,
>> you are not defining models of object data that software can use (apart from
>> the low layer that deals with XML data conversion). I am unclear how any
>> tool based on XSD can be used for modelling object data (and that's nearly
>> all domain data in the world today, due to the use of object-oriented
>> programming languages).
> I think that if you look, you will find that "nearly all of the domain
> data in the world" exists in SQL models, not object oriented models.
> So this is a rather biased statement designed to fit your message.
> Not a representation of reality.

ok, so I'll clarify what I meant a bit: most domain (i.e. industry 
vertical) applications are being written in object languages these days 
- Java, Python, C#, C++, Ruby, etc.  The software developer's view of 
the data is normally via the 'class' construct of those languages. You 
are right of course that the vast majority of the data physically 
resides in some RDBMS or other. However, the table view isn't the 
primary 'model' of the data for I would guess a majority of software 
systems development these days. There are of course major exceptions - 
systems written totally or mainly in SQL stored procedures or whatever, 
but new developments don't tend to go this route. In terms of sheer 
amount of data, these latter systems are probably still in the majority 
- since tax databases, military systems etc, legacy bank systems are 
written this way, but in terms of numbers of software projects, I am 
pretty sure the balance is heavily in the other direction.

> That said, the abstract concept of multi-level modelling, where there
> is the separation of a generic reference model from the domain concept
> models is very crucial. Another crucial factor is implementability; as
> promoted by the openEHR Foundation mantra, "implementation,
> implementation, implementation".
>
> The last and possibly most crucial issue relates to implementability,
> which is the availability of a talent pool and tooling. In order to
> attract more than a handful of users to a technology there needs to
> exist some level of talent as well as robust and commonly available
> tools.
>
> The two previous paragraphs are the reasons that the Multi-Level
> Healthcare Information Modelling (MLHIM) project exists.

well, since the primary openEHR projects are in Java, Ruby, C#, PHP, 
etc, I don't see where the disconnect between the projects and the 
talent pool is. I think if you look at the 'who is using it' pages 
<http://www.openehr.org/who_is_using_openehr/>, and also the openEHR 
Github projects <https://github.com/openEHR>, you won't find much that 
doesn't connect to the mainstream.

openEHR Subversion => Github move progress

2013-03-21 Thread Thomas Beale

The last of what I think are the active Subversion repositories on the 
old openEHR.org server has been converted to GitHub now (the Archetype 
Editor). Repositories which appear to be inactive but could be converted 
if anyone wants:

  * liu_knowledge_tools (Linkoping has a more recent version of this AFAIK)
  * the original 'knowledge' repository containing a lot of old NHS
archetypes
  * knowledge_tools_java - not sure about this one.
  * ref_impl_python

For those who have links, checkouts or other pointers to any openEHR SVN 
repositories, please now refer to the Github openEHR repositories 
<https://github.com/openEHR>.

Any questions like 'where did xxx go?', feel free to post them here.

- thomas beale



ADL Workbench command line tool

2013-03-14 Thread Thomas Beale
On 14/03/2013 19:18, Bert Verhees wrote:
> On 03/14/2013 03:53 PM, Thomas Beale wrote:
>> I had a feeling someone would want this ok, it's next up.
> Didn't you ever need it?

well normally we just use the workbench. Path extraction has been there 
for probably 8 years...

- thomas




ADL Workbench command line tool

2013-03-14 Thread Thomas Beale
On 14/03/2013 10:53, José Hilário Almeida wrote:
> Thank you for your work on this tool.
> A powerful path extraction interface would be 
> very useful, especially for returning leaf paths. That would be very 
> handy.
>

I had a feeling someone would want this. OK, it's next up.

- thomas




ADL Workbench command line tool

2013-03-13 Thread Thomas Beale

I have been developing a command line version of the ADL workbench in 
the background, using all the same compiler code of course. It has not 
yet been released, but here 
<http://www.openehr.org/downloads/ADLworkbench/command_line_tool> is 
some documentation.

I would be interested to know what the broader interest in such a tool 
would be and what people would like it to do. Examples of requirements 
might be:

  * generate operational templates (OPTs)
  * extract paths from archetypes
  * validate an entire repository of archetypes and generate a report.

It does some of these already and could be made to do many more such things.

all feedback welcome.

- thomas beale



Erik Sundvall's PhD Defence - Online Edition

2013-02-15 Thread Thomas Beale
On 15/02/2013 14:05, Mikael Nyström wrote:
>
> Dear all,
>
> The defense is available at http://youtu.be/0lpHFG3Dhts.
>
> Dipak Kalra was not present at the defense in Linköping in person due 
> to flight cancellations, so he did his part remote from Amsterdam.
>
> Erik passed the defense and there is now only paperwork left before he 
> receives his PhD degree. Congratulations Erik!
>

I'm still getting over seeing Erik in a tie;-)

Well done - excellent work.

I have gotten just a little way into it (it's a 3h video). If someone 
wanted to make an edited version, you would instantly win the Academy 
Award for best edited foreign e-health PhD thesis defence movie ;-)

- thomas




UML 1.0.1 online HTML fixed

2013-02-06 Thread Thomas Beale

The online HTML UML web tree has been fixed. It's the link marked below, 
for those who may not remember how to get to it now:


    [screenshot showing the link was attached as an image, scrubbed by the list archiver]



Questions about commit and AUDIT_DETAILS

2013-01-30 Thread Thomas Beale
On 30/01/2013 08:07, Erik Sundvall wrote:
> Hi!
>
>
> On Tue, Jan 29, 2013 at 9:48 PM, Thomas Beale 
>  <mailto:thomas.beale at oceaninformatics.com>> wrote:
>
> The point isn't for the server to know what is committed to
> itself, but for other systems to know where data that they are
> sent copies of, was originally committed.
>
>
> That was my understanding too. I think of the system id as an 
> identifying logical "domain" for versioning where there is a guarantee 
> that the same version_tree_id (e.g. 3 in 1298745::my.system::3) will 
> never be reused for another commit. In such a domain there should be 
> some mechanism to get the latest version and to assign new 
> non-conflicting version_tree_id's committs in the domain thus has to 
> be synchronized one way or another so that additional writes with same 
> ID get detected and stopped.

yes - this is crucial functionality in an openEHR EHR server/service.
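
A small sketch of that guarantee, using the '1298745::my.system::3' style 
of id from Erik's example (simplified to trunk versions only - no 
branching, no real concurrency control, and nothing to do with any 
reference implementation):

    class VersionedObject:
        def __init__(self, uid, system_id):
            self.uid = uid
            self.system_id = system_id
            self._latest = 0            # last version_tree_id issued in this domain

        def next_version_id(self):
            """Issue the next id; an issued number is never handed out again."""
            self._latest += 1
            return "{}::{}::{}".format(self.uid, self.system_id, self._latest)

        def commit(self, proposed):
            """Refuse a commit that reuses an already-issued trunk version."""
            uid, system_id, tree_id = proposed.split("::")
            if (uid, system_id) != (self.uid, self.system_id):
                raise ValueError("wrong versioning domain")
            if int(tree_id) <= self._latest:
                raise ValueError("version %s already used; latest is %d"
                                 % (tree_id, self._latest))
            self._latest = int(tree_id)
            return proposed

    vo = VersionedObject("1298745", "my.system")
    print(vo.next_version_id())             # 1298745::my.system::1
    print(vo.commit("1298745::my.system::2"))
    # vo.commit("1298745::my.system::2")    # would raise: already used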

>
> If those conditions are fulfilled it matters less if things are done 
> on client or server side, but I would guess that it in many cased will 
> be far easer to implement on the server side than to have a 
> distributed sync for clients.
>
> Maybe we need to contemplate capturing both the user device
> network id and the server id.
>
>
> In the LiU EEE implementation of the REST architecture described in my 
> thesis (http://www.imt.liu.se/~erisu/2013/phd/ 
> <http://www.imt.liu.se/%7Eerisu/2013/phd/>) we use the normal 
> http-server log to record user agent (device and browser/agent) and 
> originating IP. The URIs and HTTP redirections are designed in a way 
> that makes it easy to identify the HTTP-log entry associated with a 
> certain commit, so if you have a VERSION of an object and have access 
> to the HTTP-logs you can easily track this for system audit purposes. 
> Since the dates are included in the audit_details of every openEHR 
> VERSION it is also easy to figure out which log file to look in if you 
> happen to have an ordinary log rotation and archiving system.
>
> I am not sure that it would always be a too good idea to cram 
> user-agent, IP etc into the CONTRIBUTION or audit_details that are 
> persisted in the EHR and SOMETIMES transferred in EHR extracts. 1) 
> Those details may give away unwanted or unneccearily detailed info to 
> other organisations that you are sharing EHR extracts with. 2)
>

that would be my concern as well. On the other hand, if you want to 
track down a doctor who has outsourced his/her job to China 
<http://www.bbc.co.uk/news/technology-21043693> and EHR modifications 
are coming from an IP address way outside the intended source domain, 
we might need some way to do it ;-)

- thomas



Handling internalization for openEHR terminology xml. Do we need a schema update?

2013-01-29 Thread Thomas Beale
On 29/01/2013 22:18, Seref Arikan wrote:
> Greetings,
> Ian and I have been working on internalization of openEHR terminology 
> XML for a project. Being the lazy person that I am, I wrote an Xquery 
> snippet to reuse the existing work in the Archetype Editor's 
> terminology file, which is quite comprehensive in terms of the 
> languages it contains. However, this did not help perform the full 
> internalization.
>
> The problem is, the group elements' name attributes are the only 
> identifiers for group elements, and even if one translates the 
> concepts under groups, the group name is still in English. An example 
> with Turkish:
>
> [group/concept XML example scrubbed by the list archiver]
>
> the group name is "null flavours". Group element has no attribute such 
> as conceptID, that would allow me to fully internationalize the xml 
> file. The concept id for null flavours actually exits in the Archetype 
> Editor xml file, but if I use that code here, I'll probably be 
> inventing a non-standard hack.
>
> Does it make sense to add a conceptId attribute to group element? (and 
> could anybody let me know the location of the latest schema for 
> openEHR terminology XML?)
>
> Best regards
> Seref

https://github.com/openEHR/terminology

but see this page 
 as well 
(note that all references to the 'SVN' repo now mean the Git repo)
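
To show what a conceptId on the group element would buy, here is a small 
sketch with Python's standard xml.etree. The element layout, the attribute 
name and the translated strings are placeholders for illustration - they 
are not the published terminology schema, and 'conceptId' is exactly the 
hypothetical addition being discussed:

    import xml.etree.ElementTree as ET

    EN = """<terminology language="en">
      <group name="null flavours" conceptId="14">
        <concept id="271" rubric="no information"/>
      </group>
    </terminology>"""

    TR = """<terminology language="tr">
      <group name="[Turkish group name]" conceptId="14">
        <concept id="271" rubric="[Turkish rubric]"/>
      </group>
    </terminology>"""

    def groups_by_concept_id(xml_text):
        root = ET.fromstring(xml_text)
        return {g.get("conceptId"): g for g in root.findall("group")}

    # a language-independent key on <group> lets the files line up even
    # though the translated group names differ
    en, tr = groups_by_concept_id(EN), groups_by_concept_id(TR)
    for cid, group in en.items():
        print(cid, group.get("name"), "->", tr[cid].get("name"))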

- thomas




Questions about commit and AUDIT_DETAILS

2013-01-29 Thread Thomas Beale
On 23/01/2013 15:59, pablo pazos wrote:
> Hi Bert / Sam,
>
> Thanks for your answers.
>
> The idea is that the new COMPOSITION will be available to the EHR 
> SYSTEM when it arrives to the SERVER. I understand the difference 
> between finishing a COMPOSITION (e.g. signing and setting the end 
> time) and committing it to be available to the system (e.g. other 
> CLIENTs could access the new COMPOSITION).
>
> I agree with Bert that AUDIT_DETAILS.system_id should be "the system 
> on which the author is working/committing, normally not the server.", 
> but IMO this is the opposite to the current definition of that field.
>
> Moreover, if that field is set to the SERVER's ID it will be 
> redundant, because the SERVER knows that the COMPOSITION was committed 
> to itself, but what doesn't knows is the ID of the system where the 
> COMSPOTION was authored (e.g. the SERVER could identify the CLIENT by 
> it's IP, but 1. IP's change, 2. there could be a middleware so the IP 
> received by the SERVER could not be the IP of the CLIENT).
>
> What do you think?

The point isn't for the server to know what is committed to itself, but 
for other systems to know where data that they are sent copies of, was 
originally committed. If this information is not available, then that 
data, when sent to another place doesn't indicate where it was 
committed. If the audit trail includes some machine name of a client 
device, it's no help on its own. Maybe we need to contemplate capturing 
both the user device network id and the server id.

It depends on what we think these ids are needed for. The server id is 
easy - when information is shared, you want to know where it was 
originally committed (which might not be the same as the machine or 
service you got it from today) so that further requests could be made 
there. The utility of the client device id is probably only inside the 
original environment, but I am not sure how it would be used. I would be 
interested in Pablo & Bert's ideas...

- thomas




ADLWB 1.5.0 1826 Beta 8 exception on File > Open

2013-01-25 Thread Thomas Beale

Hi Pablo,

can you please raise an issue report here 
, with the usual details of 
platform, how you installed, etc.?

thanks

- thomas


On 25/01/2013 17:44, pablo pazos wrote:
> Hi all, I just opened the ADLWB, and if the first thing I do is File > 
> Open, I get:
> [error output scrubbed by the list archiver]



Questions about commit and AUDIT_DETAILS

2013-01-23 Thread Thomas Beale
On 23/01/2013 05:11, pablo pazos wrote:
> Hi all, this question is related to a previous thread: 
> http://lists.openehr.org/pipermail/openehr-technical_lists.openehr.org/2012-November/007392.html
>  
>
>
> I just want to check a couple of things to validate my implementation 
> of an openEHR Server.
>
> The definition of AUDIT_DETAILS.system_id is: "Identity of the system 
> where the change was committed. Ideally this is a machine- and 
> human-processable identifier, but it may not be.".
> Let's say I have a CLIENT where COMPOSITIONS are created, and a SERVER 
> where COMPOSITIONS are committed by the CLIENT.
> If I understand this correctly, AUDIT_DETAILS.system_id would be the 
> SERVER ID. If so, where can I specify the CLIENT's ID (the system that 
> committed the COMPOSITION). This information is needed to have the 
> complete log of the commit.
>
> In the other hand, where COMPOSITIONs are imported from the CLIENT, 
> the FEEDER_AUDIT_DETAILS.system_id is the "Identifier of the system 
> which handled the information item", so it is the CLIENT's ID.
>
> If this is right, why do we have different definitions for X.system_id 
> for different scenarios of sending information from a CLIENT to a 
> SERVER (e.g. the 1st case is the SERVER's ID, on the 2nd is the 
> CLIENT's ID).
>
>

Hi Pablo,

The original idea was that a logical EHR service id would be used. A 
'client id' is likely to be meaningless and untrackable. The id is only 
useful if it is relatively permanent, and future information requests 
can be made to that logical EHR system. It would also be the id of the 
system that other users who could see this information were using, and 
where medico-legal investigations take place.

In a more cloud-based world, it might not seem so clear, because 
numerous organisations might be committing to a physical service that 
supports multi-tenanting.

However, in either case, it should be something like a domain name of an 
EHR service that is understood to be the legal EHR repository facility 
of the organisation in which the clinician works.

There might be an argument for having another field for 'client device 
type' (e.g. phone, iPad etc).
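
A sketch of that separation in Python: 'system_id' mirrors the 
AUDIT_DETAILS attribute being discussed, while 'client_device' stands for 
the extra, purely hypothetical field mooted above - it is not part of the 
published reference model, and the extract filtering is only illustrative:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class CommitAudit:
        system_id: str                       # logical EHR service, e.g. "ehr.somehospital.org"
        committer: str                       # id of the committing clinician
        time_committed: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))
        client_device: Optional[str] = None  # e.g. "tablet"; kept out of shared extracts

    def audit_for_extract(audit):
        """What travels with an EHR extract: the permanent, resolvable service
        id, but not the locally meaningful client-device detail."""
        return {"system_id": audit.system_id,
                "committer": audit.committer,
                "time_committed": audit.time_committed.isoformat()}

    a = CommitAudit(system_id="ehr.somehospital.org", committer="dr.x",
                    client_device="tablet")
    print(audit_for_extract(a))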

- thomas




ADL Worbench for Linux

2013-01-18 Thread Thomas Beale
On 18/01/2013 15:05, Bert Verhees wrote:
> On 01/18/2013 02:23 PM, Thomas Beale wrote:
>> git clone of https://github.com/openEHR/adl-archetypes.git somewhere 
>> convenient
> Thanks for the link, very useful for me :-)
>
> Bert
>
> ___
> openEHR-technical mailing list
> openEHR-technical at lists.openehr.org
> http://lists.openehr.org/mailman/listinfo/openehr-technical_lists.openehr.org 
>
>

there are lots of useful links here...




ADL Worbench for Linux

2013-01-18 Thread Thomas Beale

Bert,

I just did a new install on an up-to-date Ubuntu installation (Dell 
laptop, very standard) and it ran out of the box, no problems. (The 
docking arrangement of the windows is a bit weird, but can be manually 
adjusted, and the tool remembers the last state over sessions).

The exception occurs on reading an image file, but the image files all look 
as if they are there. I can't remember what happens if you have no GTK+ 
installed, but I don't think that is the cause here; it might be worth checking anyway.

My suggestions in order would be (as a prior action, maybe do a git 
clone of https://github.com/openEHR/adl-archetypes.git somewhere 
convenient - that will give you some test & CKM archetypes):

  * rm -rf the install directory and rerun the tar xjvf command, just to
be sure that it completes properly; retry
  * if no luck, check whether there are any strange permissions on the
install directories or subdirectories that would prevent file reading.
  * if no luck, try on another Linux machine

We are working on bundling all the icons into the app, but we have not 
done it yet, hence all those little files.

If none of the above works, can you raise an issue on the PR tracker?

thanks

- thomas


On 18/01/2013 11:24, Bert Verhees wrote:
> On 01/18/2013 11:55 AM, Seref Arikan wrote:
>> I know it is cross platform :) That is why I wrote, "developed under 
>> Windows", which implies that the developer might have used Windows 
>> style relative paths for images.
>>
>> On Fri, Jan 18, 2013 at 10:14 AM, Peter Gummer 
>> > > wrote:
>>
>> On 18/01/2013, at 20:11, Seref Arikan wrote:
>>
>> > from the top of my head: reads like a path problem with the
>> images embedded into AW, due to fact that it is being developed
>> under Windows, and you're trying to run it under Linux.
>>
>>
>> Yes, an image path problem; but no, ADL Workbench is
>> cross-platform. It works under Linux just as well as Windows.
>> Regardless of the platform, the image files have to be in the
>> correct place.
>>
>> Bert, did you build this yourself or did you install it from
>> http://www.openehr.org/downloads/ADLworkbench/home ?
>>




ADL Workbench for Linux

2013-01-18 Thread Thomas Beale
On 18/01/2013 10:55, Seref Arikan wrote:
> I know it is cross platform :) That is why I wrote, "developed under 
> Windows", which implies that the developer might have used Windows 
> style relative paths for images.

nope, Peter is way smarter than that ;-)

See this kind of code - 
https://github.com/openEHR/adl-tools/blob/master/libraries/common_libs/src/utility/app_resources/shared_resources.e
 

It's all platform-independent. Now that doesn't mean we don't have a 
bug, but it would be unusual these days for one to be in this area, since this 
code has been stable for quite some time now... anyway, we'll find it soon enough once 
we can reproduce Bert's error.

- thomas



ADL Workbench for Linux

2013-01-18 Thread Thomas Beale
0> Routine failure.  Fail
> 
> ---
> GUI_APP_ROOT  show_splash_window @1
> <F57C1CA4> Routine failure.  Fail
> 
> ---
> GUI_APP_ROOT  make_and_launch @3
> <F57C1CA4> Routine failure.  Fail
> 
> ---
> GUI_APP_ROOT  root's creation
> <F57C1CA4> Routine failure.  Exit
> 
> ---
>
>




website improvements, including openEHR system deployments around the world

2013-01-16 Thread Thomas Beale

The home page <http://www.openehr.org/home> now has an improved layout 
and newsfeeds. More recent news items are being added soon.

The 'what is openEHR' page <http://www.openehr.org/what_is_openehr> is 
hopefully more informative.

The who is using openEHR page 
<http://www.openehr.org/who_is_using_openehr> now has 2 new sections, 
one for current and contracted deployments into production sites, and 
one for funded research using openEHR.

We are certain there are updates required to this information, including 
new deployments, so please let us know <mailto:webmaster at openehr.org>. 
It would be good to have updates on the academic programmes as well.

NOTE: there is no section for vendors working with openEHR 
yet; this will come in the near future. We will make an announcement on 
what kind of information to provide for this.

All website feedback welcome here <http://www.openehr.org/aboutthiswebsite>.

- thomas beale




defaultValue/assumedValue in CPrimitive.

2013-01-07 Thread Thomas Beale
On 07/01/2013 13:52, Bert Verhees wrote:
> Please can some one short explain what the difference is between 
> assumedValue and defaultValue in CPrimitive?
>
> Thanks
> Bert

Assumed value.
An assumed value is a value you set in the archetype. If no value (at all) 
occurs in the data for that item (e.g. patient position for BP 
measurement) but applications want a value, they can check the archetype 
(in fact it will be in the template) to see if there is an assumed 
value. If there is, it can be used.

I don't believe it has had much use in archetyping over the years.
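
For a coded value, for example, the assumed value is written in cADL as the 
code after the semicolon at the end of the code list, something like this 
(a sketch only; the at-codes here are invented):

    value matches {
        DV_CODED_TEXT matches {
            defining_code matches {
                [local::
                at1001,    -- sitting
                at1002,    -- standing
                at1003;    -- lying
                at1001]    -- assumed value = sitting
            }
        }
    }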

Default value.
The system or app should pre-populate the field with the default value. 
If no user value is supplied, it will remain the default. In this case, 
there is a value in the data.

- thomas




csingleattribute and existence

2013-01-07 Thread Thomas Beale

Bert,

one very useful thing you can do is to identify guidelines for use of 
the current specification. E.g. statements of the form

if existence is set on a single-valued attribute, and there is only one 
child object, no occurrences should be set, since they can always be 
inferred from the owning attribute's existence.

and so on. These kinds of statements I can add to the ADL 1.5 spec (which 
we should treat as the usable spec these days).
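
As an illustration of that first guideline (a hedged cADL sketch, not taken 
from any real archetype): with existence already constrained on the 
single-valued attribute, the lone child object carries no occurrences, since 
they can be inferred:

    value existence matches {0..1} matches {
        DV_QUANTITY matches {*}    -- occurrences omitted; inferred from existence
    }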

thanks

- thomas

On 07/01/2013 11:21, Bert Verhees wrote:
>
>>> But besides that, suppose you have a CSingleAttribute with REQUIRED 
>>> set and several CObjects as alternatives in it.
>>> All occurrences for the CObjects then need to be set to 0..1; every 
>>> other setting is erroneous.
>>> Occurrences 0..0 is useless, why define a CObject if it may never 
>>> occur.
>>> Occurrences 1..1 is useless, why define alternative CObjects if the 
>>> one chosen is defined.
>>>
>>> Maybe the occurrences of CObjects should not be looked at when child 
>>> of a CSingleAttribute
>>
>> occurrences can be 1..1 if it is the only possibility.
>
> My statement was that it is useless, it can be possible but has no 
> meaning. Skipping the alternatives is more clear.
> And if there are no alternatives, setting the 
> CSingleAttribute.existence to REQUIRED does the same.
>
>>
>> occurrences can be 0..1 on two alternatives, with an additional rule 
>> that says that either A or B must be there (thus satisfying the 1..1 
>> in the attribute itself)
> That is the only meaningful occurrence possible in the CObject. So if 
> there is only one meaningful, what is the point of making it 
> configurable?
>
>>
>>> -
>>> It is that I am looking further in the world than existing archetypes.
>>> We had the discussion about the attempted enforcing of a top-down structure 
>>> of archetypes and the consequences of this policy a few weeks ago.
>>
>> I'm not sure how this relates to the technical issue we are 
>> discussing here...?
>
> It is because you advised me to use the existing OpenEHR archetypes 
> and Java-implementation. I indicated why I don't do that exclusively.
>
>>
>>>
>>> I am also looking further than the existing Java libraries; I will soon 
>>> announce more about that.
>>
>> I am not claiming that the current specification approach is perfect. 
>> But the experience I know about elsewhere leads me to think it is 
>> pretty workable; we don't seem to have any problems in most tools or 
>> libraries on this issue.
>>
>> If there are aspects you are thinking about in some other kind of 
>> archetype, please share it, that would help.
>
> No it is not perfect, and yes it is workable. My suggestions were 
> partly that I was not sure to understand the construct well, and 
> partly to discuss improvements.
>
> When I have other issues, I will gladly discuss them.
>
> Bert
>
> ___
> openEHR-technical mailing list
> openEHR-technical at lists.openehr.org
> http://lists.openehr.org/mailman/listinfo/openehr-technical_lists.openehr.org 
>
>


-- 
Ocean Informatics   *Thomas Beale
Chief Technology Officer, Ocean Informatics 
<http://www.oceaninformatics.com/>*

Chair Architectural Review Board, /open/EHR Foundation 
<http://www.openehr.org/>
Honorary Research Fellow, University College London 
<http://www.chime.ucl.ac.uk/>
Chartered IT Professional Fellow, BCS, British Computer Society 
<http://www.bcs.org.uk/>
Health IT blog <http://www.wolandscat.net/>


*
*
-- next part --
An HTML attachment was scrubbed...
URL: 
<http://lists.openehr.org/pipermail/openehr-technical_lists.openehr.org/attachments/20130107/019d3149/attachment.html>
-- next part --
A non-text attachment was scrubbed...
Name: ocean_full_small.jpg
Type: image/jpeg
Size: 5828 bytes
Desc: not available
URL: 
<http://lists.openehr.org/pipermail/openehr-technical_lists.openehr.org/attachments/20130107/019d3149/attachment.jpg>


csingleattribute and existence

2013-01-07 Thread Thomas Beale
On 07/01/2013 09:32, Bert Verhees wrote:
> On 01/07/2013 02:40 AM, Thomas Beale wrote:
>>
>
> I think, Thomas, the logic is as follows: the CSingleAttribute can, as 
> in the specs, have one or more children (CObjects).
> Only one can be chosen, the others are alternatives.
>
> The CSingleAttribute can have existence 0..1 (OPTIONAL), 1..1 
> (REQUIRED) and 0..0 (NOTALLOWED).
> The last one I don't understand: why have an attribute if it is 
> defined to stay null?

this is used in ADL 1.5 templates to remove attributes not required in 
that particular data set (only if the RM permits it of course).

>
> But besides that, suppose you have a CSingleAttribute with REQUIRED 
> set and several CObjects as alternatives in it.
> All occurrences for the CObjects then need to be set to 0..1; every 
> other setting is erroneous.
> Occurrences 0..0 is useless, why define a CObject if it may never occur.
> Occurrences 1..1 is useless, why define alternative CObjects if the 
> one chosen is defined.
>
> Maybe the occurrences of CObjects should not be looked at when child 
> of a CSingleAttribute

occurrences can be 1..1 if it is the only possibility.

occurrences can be 0..1 on two alternatives, with an additional rule 
that says that either A or B must be there (thus satisfying the 1..1 in 
the attribute itself)

> -
> It is that I am looking further in the world than existing archetypes.
> We had the discussion about the attempted enforcing of a top-down structure of 
> archetypes and the consequences of this policy a few weeks ago.

I'm not sure how this relates to the technical issue we are discussing 
here...?

>
> I am also looking further than the existing Java libraries; I 
> will soon announce more about that.

I am not claiming that the current specification approach is perfect. 
But the experience I know about elsewhere leads me to think it is pretty 
workable; we don't seem to have any problems in most tools or libraries 
on this issue.

If there are aspects you are thinking about in some other kind of 
archetype, please share it, that would help.

thanks

- thomas




csingleattribute and existence

2013-01-07 Thread Thomas Beale
On 06/01/2013 20:29, Bert Verhees wrote:
> On 01/06/2013 08:44 PM, Thomas Beale wrote:
>>
>> Hi Bert,
>>
>> existence is a property of CAttribute (multiple or single). It 
>> indicates if the attribute value (i.e. some object) must exist or 
>> can be null.
>>
>
> How about this:
>
> Since its function in CSingleAttribute is also done by 
> CObject-attribute occurrences, it could be removed from the 
> CSingleAttribute. This would make tools that check this superfluous.

Hi Bert,

it can't really, because you can have CAttributes that have no CObject 
children. Setting the existence to {1} for example on such an attribute 
says that there have to be values, but says nothing further about them.

On the other hand, if there are child CObjects, these CObjects could 
each have occurrences set to {0..1} (e.g. if they are alternatives).

so it's not quite as simple as it seems. There probably is a 
simplification available in the future, but my suggestion for now, 
assuming you are using the Java library and existing openEHR archetypes, 
is to stick with the way the library works at the moment - I assume it 
does sensible things...

- thomas




csingleattribute and existence

2013-01-06 Thread Thomas Beale

Hi Bert,

existence is a property of CAttribute (multiple or single). It indicates 
if the attribute value (i.e. some object) must exist or can be null.

occurrences is a property of a CObject, and indicates how many instances 
of that object constraint can exist in the data.

It can be used on CObjects under CMultipleAttributes to indicate how 
many instances of each CObject (there can be multiple CObjects, e.g. 
systolic bp, diastolic bp etc, each of which could potentially have more 
than one instance in the data). Commonly, many objects under a 
CMultipleAttribute can only have one or zero occurrences, so occurrences 
is set to {0..1}.

Occurrences on an object under a CSingleAttribute can only indicate 0..0 
or 1..1 (based on an original value of 0..1). In theory, occurrences on 
an object under a CSingleAttribute could conflict with existence on the 
CSingleAttribute. Tools can easily check this (and they do).
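
A hedged cADL sketch of the two cases (types and node ids invented, not from 
a real archetype):

    -- single-valued attribute: existence on the attribute,
    -- occurrences {0..1} on each alternative child object
    value existence matches {0..1} matches {
        DV_CODED_TEXT occurrences matches {0..1} matches {*}
        DV_TEXT occurrences matches {0..1} matches {*}
    }

    -- multiply-valued attribute: occurrences on each child object
    items cardinality matches {0..*; unordered} matches {
        ELEMENT[at0004] occurrences matches {0..1} matches {*}    -- e.g. systolic bp
        ELEMENT[at0005] occurrences matches {0..1} matches {*}    -- e.g. diastolic bp
    }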

hope this helps.

- thomas

On 06/01/2013 15:51, Bert Verhees wrote:
> Excuse me for the following question, maybe I am just overlooking the 
> answer all the time.
>
> What is the use of having both existence and occurrences in the case of a 
> CSingleAttribute?
>
> And what if both have conflicting information?
>
> For example, existence gives REQUIRED and occurrences gives minOccurs=0
>
> Thanks for a short answer




openEHR website tip of the day - CKM comments

2012-12-28 Thread Thomas Beale

If you follow the link on the CKM news feed, i.e.



You go straight to the CKM comments in question:








New website - tip of the day

2012-12-21 Thread Thomas Beale

Don't forget the live news coming from #openEHR and #CKM





openEHR icons - available in reference-models Git repo

2012-12-20 Thread Thomas Beale

I have uploaded the icons I used in the ADL Workbench into the 
reference-models Git repo . 
They can be found for the various reference models at locations like 
this 
.

The icons are a mixture of icons originally used in CKM, and some 
(rougher) developed by me for AWB. This resource might be a useful 
starting point for people wanting to try and make a nicer set, or simply 
reuse them as they are (e.g. you can obtain a .zip of a directory 
containing icons from any Git repo very easily).

- thomas




translating the openEHR website

2012-12-19 Thread Thomas Beale

Hi Wang,

we won't forget you (we need translators!) - we have now quite a few offers and 
diverse ideas on how to technically enable the translation. We need to 
experiment with these and see what works. So my suggestion is - hold off 
until we make a bit more progress on that. Ideally we would publish some 
guidelines on how to do a translation pretty soon. We need to make it as 
easy as possible.

- thomas

On 19/12/2012 10:57, edwin_uestc wrote:
> pls don't forget me. Maybe I can do something.




translating the openEHR website

2012-12-18 Thread Thomas Beale
On 18/12/2012 12:49, Athanasios Anastasiou wrote:
> Hello Thomas and everyone
>
> Just a quick question/suggestion:
>
> Are we really talking about fundamentally different websites or just 
> translations?

Here I am talking about a translation of (parts of) the central website 
(as Gunnar said, some bits probably should just stay in English).

We expect there to be separate websites in specific locales, either on a 
country basis, or like Pablo said, openEHR.org.es that covers a Spanish 
language community. Those sites are managed by people in those locales, 
and reflect local interests & needs. Koray has been working on some 
general concepts to get 'order' in this world.

Eventually I would suggest that we think about adopting a similar colour 
scheme to the central website, to make all these sites look 
'openEHR-ish'. Website developers of any local sites should please feel 
free to copy anything you see in the Git repo of the central site.

>
> In other words, are we just talking about changing the "labels" 
> according to openehr.org/[language-code] or could it be that a few of 
> the pages of the "/es" (for example) website would have different 
> content (perhaps adapted to local conditions)?.

Well there is technically no reason not to do that - since if we put 
each translation under its own directory, other content can go into 
those directories.

But I do think we should not try to make the central site do everything 
- there is a lot of local content for each country that would be very 
local indeed. Note - we can however keep adding more rules to Apache to 
do redirections so that local content has nicer URLs.

I could be proven wrong however!

>
> If the websites are addressing a [language-users] community (as it was 
> mentioned before) and not a specific geographic area, maybe it would 
> be worth taking the time to add (or borrow) some minimal 
> internationalization features on the current website.

Adriana only just started looking at this, and has no special expertise 
in this area. There doesn't seem to be any textbook on how to do this, 
and info on the web is sparse. If you know the magic process for 
internationalising a website, I'll get her in touch with you and you can 
help her out.

>
> Therefore, instead of translating all resources, we just translate a 
> big key/value dictionary (in text format).
>
> What do you think?

I don't know how that works - since most content pages (i.e. the most 
useful stuff to translate) are static HTML. My approach (possibly too 
dumb) would have been to run the pages through Google Translate and then 
fix all the wrong bits ;-)

>
>
> All the best
> Athanasios Anastasiou
>
> P.S. The site already uses php anyway, so why not make it a bit more 
> "active"?

any suggestions welcome.

- thomas




translating the openEHR website [From Gunnar Klein]

2012-12-18 Thread Thomas Beale

[Gunnar - your posts are bouncing - think your subscription is under an 
old .se address - do you want to check 
<http://www-test.openehr.org/community/mailinglists>? (see how easy it 
is to find everything now ;-)]

Subject:
Re: translating the openEHR website [From Gunnar Klein]
From:
Gunnar Klein 
Date:
18/12/2012 10:20

To:
openehr-technical at lists.openehr.org


Dear Thomas,

I volunteer to make a Swedish version. If other Swedish language natives 
want to join me please write to me.

It would probably be a good idea if you write some general instructions 
for the editors of the localization web pages.

Kind regards

Gunnar
> On 18.12.2012 10:52, Shinji KOBAYASHI wrote:
> Hi Thomas,
>
> I forked GitHub web-site project. Can I make /jp sub-directory to work
> under top?
> Could you please point it out where should be?
> A Japanese translation would demonstrate the translation capability well; I 
> will try it.
>
> Regards,
> Shinji
>
> 2012/12/18 Thomas Beale :
>> accountability



translating the openEHR website [From Gunnar Klein]

2012-12-18 Thread Thomas Beale
On 18/12/2012 11:36, pablo pazos wrote:
> Hi Thomas,
>
> About openEHR.org.es, let's say it's more like an interest group than 
> an official branch of the openEHR.org site translated to Spanish.
>
> That's what we have right now, but in the future we can find a way to 
> have specific content generated by us and official openEHR content 
> translated to Spanish (and meet the requirements (?) to be an official 
> openEHR community based on a common language instead of a country/region).
>
> BTW, openEHR.org.es is for Spanish speakers, not a Spain-based community.
>

I understand the idea, but what would openEHR Spain do if it wants its 
own Spanish local website, to do with Spanish locations, legislation, 
companies etc? It would mean that openEHR.org.es was taken. I don't see 
any problem right now, but it might be worth just thinking about how 
domains will be organised in the future...

- thomas




translating the openEHR website [From Gunnar Klein]

2012-12-18 Thread Thomas Beale
On 18/12/2012 12:46, Bert Verhees wrote:
> On 12/18/2012 10:14 AM, Thomas Beale wrote:
>> @Bert: thanks for the offer. 


Shinji can be the first one to take the pain, hopefully we'll have it 
worked out for you in a week's time. Ok, more than a week's time. Some 
warm wine drinking may slow things down...

- thomas



translating the openEHR website [From Gunnar Klein]

2012-12-18 Thread Thomas Beale
On 18/12/2012 09:52, Shinji KOBAYASHI wrote:
> Hi Thomas,
>
> I forked GitHub web-site project. Can I make /jp sub-directory to work
> under top?
> Could you please point it out where should be?
> A Japanese translation would demonstrate the translation capability well; I will try 
> it.
>

Shinji,

it might be a bit early to do too much work on it, but why not get the 
workflow right. In Git, you should see the following structure:



We will create a 'lang' directory at the top level. *You should 
therefore create a 'lang/jp' directory*. Don't worry about the 'lang' 
appearing in URLs, we can deal with that in the Apache rewrite rules.

I think if you just try to translate some of the content on the home 
page, and some of the stable-looking pages one level down - don't go too 
much further because there are still major changes going on in some 
directories. I'll get Adriana to create a list of what appears to be 
stable and what is not.

If you do a bit of work, and push it back to your fork, we'll then get 
it pushed into the main repo (I still have to work out exactly how we do 
this in Github ;-). We'll then upload it, create an Apache rewrite rule 
that does:

/lang/([a-z]+)/(.*) -> /$1/$2

which will have the effect of making the physical directory 
www.openehr.org/lang/jp/something be served as 
www.openehr.org/jp/something, which I think is a bit more normal.

Let's just try this in Japanese, then I suggest the next step is for us 
to provide a list of paths we think are stable enough to translate - 
then some other languages can get started.

- thomas






translating the openEHR website [From Gunnar Klein]

2012-12-18 Thread Thomas Beale
On 18/12/2012 02:26, Shinji KOBAYASHI wrote:
> Hi Thomas and Gunnar,
>
> Having a translated portal would appeal to a wider range of people, 
> especially beginners.
> On the other hand, the openEHR.jp site has another role as the domestic 
> artefacts repository. We can have two sites, each with its own 
> responsibility.
>
> 1)http://www.openehr.org/jp/
>  Translated version of official openEHR.org site.
> 2) http://www.openehr.jp/
>  Repository of Japanese artefacts, such as translated documents, 
> presentation/education materials,
> seminar information.
>
> My answer to the questions.
> 1) The workflow on GitHub seems reasonable to me, but we need to try 
> it to prove that it works.
> 2) Your suggested URL openehr.org/jp is good for us, the Japanese 
> community, but I think redirecting openehr.org/jp to openehr.jp is not 
> useful, as described before. Localisation has two dimensions, just as you 
> mentioned: language and geographical location. I do not have a good idea 
> for the Spanish community, but I think it is a common problem for any 
> international language community, even English. There are many 
> English-speaking countries, but localisation is still necessary; Koray is 
> working on this just now.

@Shinji: Ok so let's assume we set up each language on the central site 
as openehr.org/jp etc, and you will be able to link to it however you like at 
your end.

@Gunnar: I take your points, but not sure what to do about them - i.e. I 
am not sure what to practically do about the need for a mix of local and 
central content, other than for local websites / wikis etc to be created 
as we are doing. I think the main thing we can do now is to keep the 
central site small, which was a conscious objective from the start. The 
local needs in different countries will clearly be different, so I think 
we just have to see how the local web presence in each place develops.

@Bert: thanks for the offer.

All - we are still working on some content, so the central website is 
not 'finished' .. but it will never be, there will always be something 
more to do. So we could start as an experiment just one translation job 
to see how the workflow works. The main thing we would need to agree on 
is probably how we document the changes we make on the central site in 
Git, so that translators can detect what changes have happened that they 
need to reflect.

I think we might be ready to try this experiment in the next week or so 
(we are still adjusting some mechanical aspects of the site). It sounds 
like we make the experiment either Japanese or Dutch - who wants to be 
the guinea pig? (I.e. who has time ;-)

- thomas




New website - tip of the day

2012-12-17 Thread Thomas Beale

Fixes:

  * we now have tooltips on links at the bottom (the ones that are not
necessarily obvious from their name, e.g. "GitHub")
  * Some links have been renamed to have 'CKM' in the title, to make it
more obvious where they point
  * CKM link at the top right, next to the 'wiki' link - we can think of
wiki and CKM as the major extensions of the website

Tips:

  * looking for the specification releases? Here
 they
are...




translating the openEHR website [From Gunnar Klein]

2012-12-17 Thread Thomas Beale

Subject:
Re: translating the openEHR website - Also a localised content?
From:
"Gunnar Klein, NTNU" 
Date:
17/12/2012 16:47

To:



Dear Tom and other techies,

A wonderful idea, translated content, and the general workflow described 
sounds feasible to me. However, I think it would make sense not to require 
the various non-English language sites to follow the master openEHR site 
exactly. Firstly, it would make sense to launch some content in several 
languages before everything is translated, and in several cases I think all 
the content will never be translated; some of the technical material will be 
better read in the original English in some countries. However, the 
"LOCALISED" openEHR web pages may also contain material that relates to 
national work, in particular, of course, material directly related to openEHR 
implementations. Documents may be uploaded in various languages with content 
that it will not always make sense to translate.

Regarding the excellent Japanese initiative, I suggest they should be 
offered the chance to move the content to the main site, with openEHR.jp as 
a pointing entry. Such sites may be established in other countries as well, 
but I think they should generally not have their own content but be pointers 
to openEHR.org. Especially where the same language is used in several 
countries and continents, this may become a complicated proliferation, which 
in one sense is welcome. An offer to one person or a small group of 2-3 
persons per geographical area to work directly with the openEHR international 
site makes sense, to maintain some control over the foundation content.

Best regards

Gunnar

On 17/12/2012 15:29, Thomas Beale wrote:
>
> we are trying to work out the best approach to translations of the 
> openEHR website. The mechanism for the website itself is probably 
> straightforward:
>
>   * for each language xx, we create a copy of the current website
> under a directory /xx/, and push this to the Github repo that
> contains the website
>   o or perhaps separate repos, one per language?
>   * the people who want to do the translation work clone the repo,
> replace the EN text with their language and upload the changes
>   * we push the changes to the main website
>
> Most URLs in the website are relative, so this should work. Clearly 
> changes on the main website need to be reflected over time on the 
> other websites, but we can rely on proper commit comments in the Git 
> repo to take care of that.
>
> *First question *- does this seem a reasonable workflow to  adopt?
>
> The *second question *that I can see is: what is the starting URL & 
> location? Taking Japan as an example:
>
> Shinji's group already has openEHR.jp. Currently it is their own 
> website. However, with a translated form of the international website, 
> would it make sense for openEHR.jp to point to www.openEHR.org/jp? If 
> so, then the translated international website would need a prominent 
> link back to the current openEHR.jp. OR... if they prefer to land on 
> the current openEHR.jp, what URL should get a user to 
> www.openEHR.org/jp - presumably just that.
>
> These questions apply to all languages, but not all locations or 
> languages equate to a country. For example, if we made 
> www.openEHR.org/es, I am sure we only want one of those, even though 
> there can technically be some small differences between the Spain / 
> Central & South America variants. But there is no openEHR.es and 
> openEHR.org.es (which appears to be taken) would correspond to Spain only.
>
> In the end, I think the best we may be able to do is to provide a 
> www.openEHR.org/xx for each language translation, and it will be up to 
> local openEHR.orgs to add links or Apache rewrite rules to connect to 
> these locations. So multiple Spanish-speaking countries could all 
> point to this ES translation of the central site.
>
> All ideas welcome.
>
> - thomas
>



translating the openEHR website

2012-12-17 Thread Thomas Beale

we are trying to work out the best approach to translations of the 
openEHR website. The mechanism for the website itself is probably 
straightforward:

  * for each language xx, we create a copy of the current website under
a directory /xx/, and push this to the Github repo that contains the
website
  o or perhaps separate repos, one per language?
  * the people who want to do the translation work clone the repo,
replace the EN text with their language and upload the changes
  * we push the changes to the main website

Most URLs in the website are relative, so this should work. Clearly 
changes on the main website need to be reflected over time on the other 
websites, but we can rely on proper commit comments in the Git repo to 
take care of that.

*First question *- does this seem a reasonable workflow to adopt?

The *second question *that I can see is: what is the starting URL & 
location? Taking Japan as an example:

Shinji's group already has openEHR.jp. Currently it is their own 
website. However, with a translated form of the international website, 
would it make sense for openEHR.jp to point to www.openEHR.org/jp? If 
so, then the translated international website would need a prominent 
link back to the current openEHR.jp. OR... if they prefer to land on the 
current openEHR.jp, what URL should get a user to www.openEHR.org/jp - 
presumably just that.

These questions apply to all languages, but not all locations or 
languages equate to a country. For example, if we made 
www.openEHR.org/es, I am sure we only want one of those, even though 
there can technically be some small differences between the Spain / 
Central & South America variants. But there is no openEHR.es and 
openEHR.org.es (which appears to be taken) would correspond to Spain only.

In the end, I think the best we may be able to do is to provide a 
www.openEHR.org/xx for each language translation, and it will be up to 
local openEHR.orgs to add links or Apache rewrite rules to connect to 
these locations. So multiple Spanish-speaking countries could all point 
to this ES translation of the central site.

All ideas welcome.

- thomas




New website - tip of the day

2012-12-13 Thread Thomas Beale

For newcomers to the new website  (test 
version), it may be useful to get some tips on the features.

Here are some useful things from the lower menu, full of shortcuts for 
openEHR-ites who know what they want:

  * the *Release 1.0.1 UML *site is now hosted on GitHub, and we are
pulling pages through the canonical URL
http://www-test.openehr.org/releases/1.0.1/reference-models/openEHR/UML/HTML


  o it's not yet perfect, and we probably should reverse proxy it,
but I have not yet worked out Apache rules that work with Github
to do that
  o Release 1.0.2 UML exists, but we have not yet created nice pages
for it (you can see it on the wiki).
  * *CKM *is on the 'Model Repository' link
  * The *openEHR GitHub* repositories are on the GitHub link - you will
see quite a number have been moved there now
  * The *openEHR Youtube channel* has some useful training material.

Remember, if you want to post some feedback about the website, go to the 
'about this website' page .





<    3   4   5   6   7   8   9   10   11   12   >