How about creating an openEHR test base?
___ openEHR-technical mailing list openEHR-technical at lists.openehr.org http://lists.openehr.org/mailman/listinfo/openehr-technical_lists.openehr.org
How about creating an openEHR test base?
processing, which is currently ADL, dADL, and will extend to XML-AOM (I just haven't gotten around to this yet). I have not thought too much about test cases for JSON or YAML, but I have done the output serialisations for them. Having done the first implementation of JSON, I think it is too weak a formalism to be seriously useful, because it lacks too many basic semantics - particularly dynamic type markers. Its cousin YAML is over-complicated (and in its whitespace form, nearly impossible to get right!), but it does have proper OO semantics and I think it can be used as a lossless serialisation.

Others may have more evolved ideas on how these particular formalisms should be used in openEHR, so I am very happy to be educated by the experts. My main aim is to make sure that the transformations ADL <-> JSON and ADL <-> YAML are correct. You can experiment with JSON, YAML and XML outputs of any ADL 1.4 or 1.5 archetypes right now, using the ADL Workbench, which has a bulk export mode into these formalisms.

We already discussed last week with Rong & Sebastian moving the openEHR terminology there, and how to manage it more effectively, so the scope of this knowledge repository is going to continue to grow anyway. Any community input on how to expand and manage this repository is welcome from my point of view. (I realise the above might only be a subset of your original scope, Pablo, so there are probably some things that still need to be done elsewhere.)

- thomas
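Thomas's point about JSON lacking dynamic type markers can be illustrated with the usual workaround: embedding an explicit type discriminator in the serialised object. The sketch below is illustrative only - the `_type` key and the toy `DvQuantity` class are assumptions made for the example, not part of any openEHR specification:

```python
import json

# Toy stand-in for an RM class; real RM types are much richer.
class DvQuantity:
    def __init__(self, magnitude, units):
        self.magnitude = magnitude
        self.units = units

def dv_quantity_to_json(q):
    # JSON itself carries no class information, so we embed an
    # explicit "_type" discriminator by hand; YAML can attach a
    # native tag (e.g. "!DV_QUANTITY") for the same purpose.
    return json.dumps({"_type": "DV_QUANTITY",
                       "magnitude": q.magnitude,
                       "units": q.units})

def dv_quantity_from_json(text):
    data = json.loads(text)
    assert data.pop("_type") == "DV_QUANTITY"
    return DvQuantity(**data)
```

YAML's native type tags are essentially a built-in version of this convention, which is why it can plausibly serve as a lossless serialisation where plain JSON cannot.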
openEHR on GitHub (was Re: How about creating an openEHR test base?)
Hi Thomas Beale,

Our Ruby implementation repository already moved to GitHub last year for our convenience. I was wondering if we could move it under github://openehr/ruby-impl-openehr rather than skoba/ruby-impl-openehr; that would be more comprehensive and better for publicity.

Regards, Shinji Kobayashi

2012/5/7 Thomas Beale thomas.beale at oceaninformatics.com:

yes, we will obviously migrate over to Github in the coming months. I have a slight concern about how to avoid chaos, and I do think we need to think carefully about how we organise Git projects/subprojects in general. The openEHR terminology is not large (at all), but it looks like it will become more than one file, according to a discussion the other day (I will write this up and post it before doing anything). I was thinking it needs to be part of a broader openEHR knowledge repository, although I have listed it as a distinct 'component' of the specification program - maybe it should have its own repository anyway. Translations of it will multiply the number of files substantially as time goes on, so that is another reason perhaps for a separate repository. I think test archetypes and templates probably should be separate from test example data, so that is two repositories right there. That would give us:

- openEHR terminology
- test archetypes and templates
- test example data

We need to add existing active software projects:

- Java reference implementation project
- ADL Workbench (Ocean)
- Archetype Editor
- Opereffa

Not sure about the following:

- LiU modelling tools

Ruby I think is on its own repository; the Python implementation I believe is no longer openEHR, but some kind of custom fork in its own repositories. openEHR on .Net is on CodePlex. Any others?

- thomas

On 07/05/2012 10:55, Erik Sundvall wrote: Hi Tom! Could we use the openEHR github project (that you registered) for hosting a subproject with the openEHR Terminology?
I believe it can make ongoing branching/patching more visible and easier to merge/administrate. There is no hurry to move existing test archetypes there, but for new efforts (terminology, RM-instance examples etc.) we might as well start there (perhaps as a separate subproject).
How about creating an openEHR test base?
Interesting point again. There are various bits of functionality implemented in different projects, but the projects have different open source licences. I'm not Rong of course, but his code uses MPL, and since I used his code when I started Opereffa, Opereffa is MPL too (though it'll be Apache very soon). So you'd need to check how licensing issues should be handled if you use Rong's code, assuming your work is not under MPL. I think you've touched another important point, Pablo.

Kind regards
Seref

On Mon, May 7, 2012 at 10:37 PM, pablo pazos pazospablo at hotmail.com wrote:

Hi Rong, that's great news, but we have our own RM implementation because it handles ORM too. But I think I can adapt your xml-binding component to use our RM impl, what do you think?

-- Kind regards, Ing. Pablo Pazos Gutiérrez
LinkedIn: http://uy.linkedin.com/in/pablopazosgutierrez
Blog: http://informatica-medica.blogspot.com/
Twitter: http://twitter.com/ppazos

Date: Mon, 7 May 2012 21:08:57 +0200
Subject: Re: How about creating an openEHR test base?
From: rong.acode at gmail.com
To: openehr-technical at lists.openehr.org

On 7 May 2012 16:39, pablo pazos pazospablo at hotmail.com wrote:

Hi Seref, I have a tool that generates composition instances from archetypes and data; what I don't have is a way to generate a valid XML form from those compositions.

Hi Pablo, the xml-binding component in the Java reference implementation does just that. It binds RM object instances to generated XML objects that can be serialized according to the published XSDs.

/Rong
Questions about ADL/AOM 1.5, archetype flattening and operational templates
A more realistic example: http://img96.imageshack.us/img96/8431/8566bdf17b8b46ad85acbb3.png

definition
    COMPOSITION[at] occurrences matches {1..1} matches {  -- HIV report
        content existence matches {0..1} cardinality matches {1..2; ordered; unique} matches {
            allow_archetype OBSERVATION[at0001] occurrences matches {0..*} matches {  -- Initial Test
                include
                    archetype_id/value matches {/openEHR-EHR-OBSERVATION\.HIV_Test\.v1/}
            }
            allow_archetype OBSERVATION[at0002] occurrences matches {0..*} matches {  -- Confirmation Test
                include
                    archetype_id/value matches {/openEHR-EHR-OBSERVATION\.HIV_Test\.v1/}
            }
        }
    }

This report includes an initial test and a confirmation test, both HIV Tests (which in fact have their own SNOMED codes). Initial and confirmation tests can be checked using different techniques. Again, if you resolve the slot you lose the information that one is an initial test and the other is a confirmation test.

2012/5/3 Thomas Beale thomas.beale at oceaninformatics.com

The example below I would say is taking things to extremes. Normally, if you are going to create separate archetypes, they have distinct semantics. Here you are trying to use one archetype for three purposes, but to nevertheless retain the semantic distinctions inside the parent archetype, rather than specifying them in the child archetypes. So one has to ask the question: why bother with separate archetypes here? If you really want to have this ELEMENT archetype for the purpose of reuse, then you can constrain ELEMENT.name to be the coded term you want in each case, i.e. 'systolic BP' etc. I have to admit I don't see much use in having such an ELEMENT archetype, because it is not really saying anything much. Defining the same thing inline seems clearer and easier. Do you have any more realistic examples?

- thomas

On 03/05/2012 09:18, Diego Boscá wrote: Ok, let me make an example so I can explain myself better.
I'm not saying this is the way we should model this case, but just to show that the use case is there. If we take the blood pressure archetype and decide to represent systolic, diastolic, and mean arterial pressure as slots to another archetype (in this case pressure_measurement), you get something like this: http://img717.imageshack.us/img717/6919/a4e77856c56c4c5499c5d1b.png

This is the ADL code:

definition
    ENTRY[at] occurrences matches {1..1} matches {  -- Blood Pressure
        items existence matches {0..1} cardinality matches {0..*; unordered} matches {
            CLUSTER[at0001] occurrences matches {0..*} matches {  -- Blood Pressure Measurement
                parts existence matches {0..1} cardinality matches {0..*; unordered; unique} matches {
                    allow_archetype ELEMENT[at0003] occurrences matches {0..*} matches {  -- Systolic
                        include
                            archetype_id/value matches {/CEN-EN13606-ELEMENT\.pressure_measurement\.v1/}
                    }
                    allow_archetype ELEMENT[at0006] occurrences matches {0..*} matches {  -- Diastolic
                        include
                            archetype_id/value matches {/CEN-EN13606-ELEMENT\.pressure_measurement\.v1/}
                    }
                    allow_archetype ELEMENT[at0009] occurrences matches {0..*} matches {  -- Mean Arterial Pressure
                        include
                            archetype_id/value matches {/CEN-EN13606-ELEMENT\.pressure_measurement\.v1/}
                    }
                }
                structure_type existence matches {1..1} matches {
                    CS occurrences matches {1..1} matches {
                        codeValue existence matches {0..1} matches {"STRC01"}
                        codingSchemeName existence matches {0..1} matches {"CEN/TC251/EN13606-3:STRUCTURE_TYPE"}
                    }
                }
            }
        }
    }

ontology
    terminologies_available = <"SNOMED-CT", ...>
    term_definitions = <
        ["es"] = <
            items = <
                ["at"] = <
                    text = <"Blood Pressure">
                    description = <"Blood Pressure">
                >
                ["at0001"] = <
                    text = <"Blood Pressure Measurement">
                    description = <"a measure of a BP">
                >
                ["at0003"] = <
                    text = <"Systolic">
                    description = <"Peak systemic arterial blood pressure - measured in systolic or contraction phase of the heart cycle.">
                >
                ["at0006"] = <
                    text = <"Diastolic">
                    description = <"Minimum systemic arterial blood pressure - measured in the diastolic or relaxation phase of the heart cycle.">
                >
openEHR on GitHub (was Re: How about creating an openEHR test base?)
On 08/05/2012 03:59, Shinji KOBAYASHI wrote: Hi Thomas Beale, our Ruby implementation repository already moved to GitHub last year for our convenience. I was wondering if we could move our repository under github://openehr/ruby-impl-openhr. It would be more comprehensive than under skoba/ruby-impl-openehr, and better for publicity.

you certainly can. I have to travel for a few days, but once I am back I will get on to organising with you and the other teams how to structure the openEHR GitHub area.

- thomas
Questions about ADL/AOM 1.5, archetype flattening and operational templates
{  -- PQ
    units existence matches {1..1} matches {
        CS occurrences matches {1..1} matches {
            codeValue existence matches {0..1} matches {"mm[Hg]"}
            codingSchemeName existence matches {0..1} matches {"UCUM"}
        }
    }
    value existence matches {1..1} matches {|0.0..1000.0|}
}

And as you can see, you have lost the texts, descriptions, and codes. This kind of problem can in fact show up, e.g. an AIDS report should require two different AIDS tests, one for the first test and another for the confirmation test. A different example could be having a main diagnosis (as an obligatory slot with its own code) and secondary diagnoses (a 0..* slot with their own code), both referring to a hypothetical diagnosis archetype.

2012/5/2 Thomas Beale thomas.beale at oceaninformatics.com:

On 02/05/2012 16:58, Diego Boscá wrote: so you have to define two different archetype ids even if the archetypes are the same? and again, slot text, description and codes are lost with this kind of approach

if the archetypes are the same, you just use that archetype once, and allow multiple occurrences. There is never a need to duplicate an identical constraint object in an archetype. I am not sure what you mean by the 'slot text, description and code being lost'. Everything is right there in its archetype. A template contains all the codes. It doesn't include copies of the descriptions because it doesn't need them - flattened objects are operational entities ('compiled' entities), not source entities. It's the same when you compile Java source code - the comments disappear in the output.
- thomas

--
Thomas Beale
Chief Technology Officer, Ocean Informatics
Chair Architectural Review Board, openEHR Foundation
Honorary Research Fellow, University College London
Chartered IT Professional Fellow, BCS, British Computer Society
Health IT blog

--
Dr Ian McNicoll
office +44 (0)1536 414 994 fax +44 (0)1536 516317 mobile +44 (0)775 209 7859 skype ianmcnicoll
ian.mcnicoll at oceaninformatics.com
Clinical Modelling Consultant, Ocean Informatics, UK
Director openEHR Foundation www.openehr.org/knowledge
Honorary Senior Research Associate, CHIME, UCL
SCIMP Working Group, NHS Scotland
BCS Primary Health Care www.phcsg.org
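Thomas's 'compiled entities' analogy above can be sketched in code: flattening resolves a slot by pulling in the target archetype's operational content and drops source-only metadata such as descriptions, much as a compiler drops comments. The dictionary shapes below are invented purely for illustration and bear no relation to the real AOM structures:

```python
# Toy archetype registry; real archetypes are AOM object trees.
ARCHETYPES = {
    "openEHR-EHR-OBSERVATION.HIV_Test.v1": {
        "node_id": "at0000",
        "constraints": {"result": "coded"},
        "description": "HIV test result",   # source-only metadata
    },
}

def flatten(node):
    """Resolve a slot into an operational ('compiled') form, dropping
    source-only metadata such as descriptions."""
    if "slot" in node:
        target = dict(ARCHETYPES[node["slot"]])
        target.pop("description", None)      # not needed at runtime
        node = {**node, "resolved": target}
        node.pop("slot")
    return {k: v for k, v in node.items() if k != "description"}
```

The point of contention in the thread is visible here: the operational output keeps all codes and constraints, but anything that lived only in the source description does not survive flattening.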
How about creating an openEHR test base?
Dear Erik and all,

(This email might appear a bit long, but it actually makes just two points: a) Data Synthesizer Tool, b) Availability of Realistic Subject Data.)

A) Data Synthesizer Tool

I absolutely agree on the data synthesizer tool. It is something I would like to do as a test case for parsing an archetype's definition node and generating a representative object, because in this case each and every node defined in the spec would have to be handled. It's not that much of a time-consuming task if you already have the RM builder. The AM provides everything that is needed (for example: http://postimage.org/image/mcytss26f/ - bounds for primitive types, cardinality/multiplicity for other data structures), so instead of just creating an object from the RM and attaching it in a hierarchy (just by calling its constructor, maybe), some values would have to be generated and attached to its fields as well. Once the RM object is constructed it can be serialized to anything (XML included) - and there goes a first test base. From this perspective, it is absolutely essential that the XSDs are valid (to ensure a valid structure) and also (Seref's got a very good point) that the archetypes are valid, to ensure valid content.

B) Availability of Realistic Subject Data

As far as clinically realistic datasets are concerned, I would like to suggest the following. The Alzheimer's Disease Neuroimaging Initiative (ADNI) in the US is a long-term project that collects, longitudinally, various clinical parameters from subjects at various stages of the disease (http://adni.loni.ucla.edu/). At the moment, the dataset contains about 800 subjects. Each subject has 4-5 sessions associated with it (at 6-month intervals, usually), and for each session a number of parameters are collected, such as MMSE scores, ADAS-Cog scores, received medication, lab tests and others, as well as imaging biomarkers (MRI mostly). A basic demographics section is also available for each subject. (To put it in the context of a visualisation, the story that these data reveal is the progression of AD in a subject / population of subjects, which is very interesting.)

The data are made available as CSV files (about 12 MB just for the numerical data). An application must be made to ADNI to obtain the data. As redistribution of the data is prohibited (http://adni.loni.ucla.edu/wp-content/uploads/how_to_apply/ADNI_DSP_Policy.pdf), we would be working towards a tool that accepts a set of ADNI CSV files and transforms them into a local openEHR-enabled repository. The task here would be to create some archetypes/templates that reflect the structure of the data shared by ADNI, and then scan the CSVs and populate the openEHR-enabled repository. The CSV files are not in the best of condition (the structure has changed from version to version, certain fields (such as dates) might be in a number of different formats, the terminology is not exactly standardised, etc.). For us (ctmnd.org) to work on these files we have created an SQL database and a set of scripts that sanitize and import the CSVs.

I would be interested in turning this database into an openEHR-enabled repository (whether a set of XML files or a proper openEHR database) because it can be used for a number of things (especially for testing AQL). If you think that this can be of help, let me know how we can progress with it. Obviously the tool can be made available to everybody, who can then apply to download the ADNI data locally. I am not so sure about the data themselves (even if they become totally anonymised) - I will have to check - but in any case, going from "I have nothing" to "I have a database of multi-modal data from 800 subjects that is more realistic than test data" has got to be worth the trouble of converting the CSVs.

Looking forward to hearing from you,
Athanasios Anastasiou
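The synthesizer idea in point A above - generate a value for each node within the bounds and cardinalities the AM provides - can be sketched as follows. The constraint representation here is invented for illustration; a real implementation would walk the AOM definition tree instead:

```python
import random

def synthesize(constraint):
    """Generate a representative value satisfying a (toy) constraint.

    The dict shapes ("kind", "range", "cardinality", "item") are
    assumptions for this sketch, not real AOM structures.
    """
    kind = constraint["kind"]
    if kind == "integer":
        lo, hi = constraint["range"]          # primitive type bounds
        return random.randint(lo, hi)
    if kind == "real":
        lo, hi = constraint["range"]
        return random.uniform(lo, hi)
    if kind == "list":
        lo, hi = constraint["cardinality"]    # container cardinality
        n = random.randint(lo, hi)
        return [synthesize(constraint["item"]) for _ in range(n)]
    raise ValueError(f"unhandled constraint kind: {kind}")
```

Once the synthesized values are attached to RM objects built by the RM builder, the resulting instance can be serialized to XML (or anything else) to seed the test base, as described above.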
How about creating an openEHR test base?
Hello Seref,

Many thanks for the UCI reference - I was personally not aware of it and it's a great resource. As it seems there are plenty of dummy but realistic (!) dataset opportunities out there for creating a test base, it is indeed a matter of time. I am sorry not to have more experience with actually building archetypes; I can see the value in this and I'd definitely give it a try. Perhaps we can create drafts, though, and even if these are not entirely correct they could be edited by others(?)

All the best,
Athanasios Anastasiou

On 08/05/2012 12:16, Seref Arikan wrote:

Hi Athanasios, the problem is always about time. If someone is willing to model an existing clinical data set, then for those who do not know about it, the UCI machine learning repository has some interesting clinical data sets. They're freely available for research, and I think it would be fairly easy to use them for the type of test base we're discussing. Just google "UCI machine learning repository" and you should see what I'm talking about. If the openEHR community has members who can put time into creating models for any of these (or other) data sets, and then turning them into valid RM serializations, I for one will not say no to that :)

Kind regards
Seref
How about creating an openEHR test base?
Once again, we have tooling to convert CSV files to openEHR using a template data schema, but someone has to do the hard work of creating the archetypes, templates and transforms to make it all happen. This continues to be the blocker for this kind of initiative. Let us know if anyone has the bandwidth.

Heath
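Both Heath and Athanasios describe a CSV-to-repository pipeline whose first hurdle is sanitising the files - notably the varying date formats Athanasios mentions. A minimal sketch of that normalisation step follows; the column names and the list of date formats are assumptions for the example, not the actual ADNI schema:

```python
import csv
import io
from datetime import datetime

# Assumed date formats; extend as the real files require.
DATE_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%d-%b-%y")

def normalize_date(raw):
    """Try each known format and return an ISO-8601 date string."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {raw!r}")

def sanitize(csv_text, date_columns):
    """Parse CSV text and normalise the named date columns in place."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        for col in date_columns:
            row[col] = normalize_date(row[col])
    return rows
```

The sanitized rows would then be mapped onto the archetypes/templates reflecting the ADNI structure before populating the openEHR-enabled repository.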
How about creating an openEHR test base?
On 7 May 2012 23:37, pablo pazos pazospablo at hotmail.com wrote:

Hi Rong, that's great news, but we have our own RM implementation because it handles ORM too. But I think I can adapt your xml-binding component to use our RM impl, what do you think?

Pablo, the xml-binding component leverages the annotated constructors in the RM classes for instantiating RM objects. It uses reflection extensively. Take a look at the XMLBinding class for some inspiration. I am sure you can adapt it for your own classes.

/Rong
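Rong describes the Java xml-binding as driven by reflection over annotated constructors. As a rough Python analogy (the toy classes, the registry, and the `_type` discriminator are all invented for this sketch), one can instantiate objects by matching dict keys against constructor parameters:

```python
import inspect

# Toy RM-like classes; the real Java RM classes carry constructor
# annotations that play the role of the parameter names here.
class DvText:
    def __init__(self, value: str):
        self.value = value

class Element:
    def __init__(self, name: DvText, value: DvText):
        self.name = name
        self.value = value

REGISTRY = {"DV_TEXT": DvText, "ELEMENT": Element}

def bind(node):
    """Instantiate a class from a {"_type": ..., field: ...} dict by
    matching dict keys to constructor parameters, recursively."""
    if not isinstance(node, dict):
        return node
    cls = REGISTRY[node["_type"]]
    params = inspect.signature(cls.__init__).parameters
    kwargs = {k: bind(v) for k, v in node.items()
              if k != "_type" and k in params}
    return cls(**kwargs)
```

Adapting the real component to a different RM implementation amounts to swapping the class registry and the constructor-matching convention, which is presumably what Pablo would need to do.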
How about creating an openEHR test base?
Hi Heath,

I don't want to open up the scope too much at this stage; I know this is a process that will take some time. Maybe some of us can focus on artifact repositories and others on service repositories. I really like the idea of having different repositories sharing the same artifacts - this could be a good technical proof of concept of a distributed CKM (not a new topic, but maybe a forgotten one: http://lists.openehr.org/pipermail/openehr-clinical_lists.openehr.org/2011-September/002201.html). If some of you want to open access to your services, I can write clients for the EHRGen project to consume artifacts and evaluate how it all works together.

Kind regards, Pablo

Date: Tue, 8 May 2012 08:19:11 +0930
Subject: Re: How about creating an openEHR test base?
From: heath.fran...@oceaninformatics.com
To: openehr-technical at lists.openehr.org

Hi Erik, I think that using an EHR service to store RM instances would be better than storing them in SVN or Git. Ultimately, if the service was able to work from a Git repository we would have the best of both worlds. I had considered offering the Ocean EHR server, but I assumed the usual issues relating to the commercial backend would have made it unsuitable, so I didn't bother. Would your service be an alternative, especially since it is RESTful? Perhaps there is a need for multiple service implementations to be available working from the same instance repository; I am sure each has its own strengths, weaknesses and interface approaches. For example, the Ocean EHR service picked up a data validation error reported on the list that another didn't. We can also use this to start comparing service models.

Heath

On 07/05/2012 4:32 PM, Erik Sundvall erik.sundvall at liu.se wrote:

Hi! I agree that we need some RM instances etc. initially. We have versioned compositions in the demo server for our LiU EEE system. We don't know if they are 100% according to spec since they have not been extensively tested.
I'll upload some of them to the wiki page after a deadline I have this week (remind me if they are not there next Monday ;-) I can give a limited number of people access to them now via the REST interfaces (HTTP via a browser works fine). Mail me off-list if you are in a hurry.

Would EHR data reflecting a number of realistic patient stories be interesting to collaborate on as a second step? I am in desperate need of such EHR data in order to create and test EHR visualisations. Real patient data is a pain to get access to, and if we do get it we can never share it. Could we share the effort of creating a number of such EHR instances (and perhaps write a shared academic paper about it)? If so, let's first check/discuss some of the options for data entry, and once that is fixed we can involve more clinicians to create and improve/review the stories. A shared set could be reused in several projects and make them more comparable too.

Best regards,
Erik Sundvall
erik.sundvall at liu.se http://www.imt.liu.se/~erisu/ Tel: +46-13-286733
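The kind of client Pablo offers to write for consuming shared artifacts over REST can be sketched in a few lines with the JDK's `java.net.http.HttpClient`. Everything below is illustrative: the `/artifacts` path, the local test server, and the fake ADL payload are assumptions standing in for whatever URL scheme a real shared repository (such as Erik's LiU EEE demo server) would expose.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ArtifactClient {
    // Fetch the body of a repository URL (e.g. an archetype or RM instance).
    static String fetch(String url) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a shared test-base service: a local server returning
        // one fake artifact. A real client would point at the repo's URL.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        byte[] body = "archetype (adl_version=1.4)".getBytes(StandardCharsets.UTF_8);
        server.createContext("/artifacts", ex -> {
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.start();
        String url = "http://localhost:" + server.getAddress().getPort() + "/artifacts";
        System.out.println(fetch(url));  // prints the served artifact text
        server.stop(0);
    }
}
```

Since the transport is plain HTTP, "HTTP via a browser works fine" as Erik says, and the same endpoints can back multiple service implementations working from one instance repository.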
How about creating an openEHR test base?
Hi Heath,

The issues I mentioned were from seeing emails on the lists from other colleagues reporting problems; until now I hadn't worked with the openEHR XSDs myself. I remember someone mentioned a problem of correspondence between the XSDs and the openEHR specs. Maybe each member can mention what problems they had (Erik? Athanasios?). Just for fun I've searched for XSD on the lists: https://www.google.com/search?sourceid=chromeie=UTF-8q=xsd+site%3Alists.openehr.org%2Fpipermail%2Fopenehr-implementers_lists.openehr.org%2F#hl=essclient=psy-abq=xsd+site:lists.openehr.org%2Fpipermail%2Fopenehr-implementers_lists.openehr.orgoq=xsd+site:lists.openehr.org%2Fpipermail%2Fopenehr-implementers_lists.openehr.orgaq=faqi=aql=gs_l=serp.3...42653.42653.0.42798.1.1.0.0.0.0.0.0..0.0...0.0.C216hd-inngpbx=1bav=on.2,or.r_gc.r_pw.r_cp.r_qf.,cf.osbfp=ca1c69677034f246biw=1280bih=687 https://www.google.com/search?sourceid=chromeie=UTF-8q=xsd+site%3Alists.openehr.org%2Fpipermail%2Fopenehr-technical_lists.openehr.org%2F#hl=essclient=psy-abq=xsd+site:lists.openehr.org%2Fpipermail%2Fopenehr-technical_lists.openehr.orgoq=xsd+site:lists.openehr.org%2Fpipermail%2Fopenehr-technical_lists.openehr.orgaq=faqi=aql=gs_l=serp.3...2087.2087.0.2601.1.1.0.0.0.0.242.242.2-1.1.0...0.0.3-xa3a0gTaYpbx=1bav=on.2,or.r_gc.r_pw.r_cp.r_qf.,cf.osbfp=ca1c69677034f246biw=1280bih=687

Please do contribute! You can add your name and attach the files here: http://www.openehr.org/wiki/display/dev/Development+test+base so there's no mess-up with current releases. Please mention what changes you have made to the XSDs here: http://www.openehr.org/releases/1.0.2/its/XML-schema/index.html If you have some XML instances for those schemas, that would be great too!

--
Kind regards,
Ing. Pablo Pazos Gutiérrez
LinkedIn: http://uy.linkedin.com/in/pablopazosgutierrez
Blog: http://informatica-medica.blogspot.com/
Twitter: http://twitter.com/ppazos

Date: Tue, 8 May 2012 08:32:20 +0930
Subject: RE: How about creating an openEHR test base?
From: heath.fran...@oceaninformatics.com
To: openehr-technical at lists.openehr.org

Hi Pablo,

What issues do you have with the XSDs? We have been producing valid instances for years, and I have tools that can validate them in seconds. I am sitting on hundreds of test instances; the problem is that I am not sitting around with nothing to do. If you have a student willing to do some .NET code with little support, you can go to openehr.codeplex.com to get what you need to create and validate openEHR instances against OPTs and the RM. BTW, I have a local XSD that further constrains the published schema and picks up several additional RM invariants. I am happy to contribute this, but I don't want to confuse the status of the official schema. I also have a demographic schema which I believe is not part of the current openEHR release.

Heath
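The validation Heath describes (checking instances against the published XSDs) can be reproduced with the JDK's built-in `javax.xml.validation` API, so no special tooling is strictly required to get started. This is a generic sketch, not Heath's actual tooling: the toy one-element schema below stands in for the published openEHR schemas, which you would load from the 1.0.2 XML-schema release instead.

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import java.io.StringReader;

public class ValidateInstance {
    // Validate an XML document string against an XSD string.
    // Returns null if the instance is valid, else the error message.
    static String validate(String xsd, String xml) {
        try {
            SchemaFactory f =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = f.newSchema(new StreamSource(new StringReader(xsd)));
            schema.newValidator().validate(new StreamSource(new StringReader(xml)));
            return null;
        } catch (Exception e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Toy schema standing in for e.g. the published Composition schema.
        String xsd = "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">"
            + "<xs:element name=\"composition\" type=\"xs:string\"/></xs:schema>";
        System.out.println(validate(xsd, "<composition>vitals</composition>") == null
            ? "valid" : "invalid");   // prints valid
        System.out.println(validate(xsd, "<other/>") == null
            ? "valid" : "invalid");   // prints invalid
    }
}
```

Note that schema validation alone catches structural errors; as Heath points out, RM invariants go beyond what the published XSDs express, which is why a locally tightened schema (or OPT-based validation) picks up additional problems.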