Meaningful Use and Beyond - O'Reilly press - errata

2012-02-18 Thread Bert Verhees
On 18-02-2012 1:26, fred trotter wrote:
 Very confusing, and I have yet to see something compelling that can be 
 done in OpenEHR that cannot be done with HL7 RIM.
It is not that I want to interfere between you and Thomas. I am happy to 
leave the interoperability discussion with you.

I just want to exercise my mind with the following statement.
OpenEHR is a system specification: you can simulate an HL7 v3 RIM machine 
on an OpenEHR kernel.
You can also simulate an EN13606 machine on an OpenEHR kernel.
And in the Netherlands we invented OranjeHIS (a long time ago, though we 
are still working to get it implemented), which is a data model for GP systems.
You can use an OpenEHR kernel to simulate that data model too.

Not that you should do that, but it should be possible; perhaps you would 
run into some detail problems and need some software logic to overcome 
them.
Maybe you would even have to adjust the Reference Model.

Regards
Bert Verhees





openEHR - Persistence of Data

2012-02-18 Thread Bert Verhees
On 17-02-2012 20:49, pablo pazos wrote:
 the openEHR RM is an object-oriented model; a programmer should 
 implement the model in the ORM tool, and the schema should be generated 
 from those classes; in fact it is the schema that accommodates to the 
 classes.

Starting with OpenEHR often means: a table for every class, maybe with 
your own specially optimized access layer, possibly better than Hibernate 
because it is customized.
But still, storing a Locatable with its item trees, clusters and data 
values takes about 40 insert statements (automatically generated), or more.
Retrieving it takes about the same number of select statements.

Then you start optimizing and create, for example, a wide table Datavalue 
which can contain a complete data value, so you no longer need to split 
the data values up.
That reduces the number of SQL statements/indexes to maybe 25 instead of 40.
You might also try to create a very wide table Locatable, hoping for some 
more savings: for example, the name attribute, which is a DvText, and the 
uid, which is (mostly) a HierObjectId, can be flattened into it.

It helps a bit: you have again reduced the number of tables/SQL 
statements/indexes, say to 17 for a decent Locatable.

I don't think that is a good way to solve this problem.

There are other solutions.

XML is one, but I am thinking about another way.

Does anyone know how an object database works?
You offer it an object it has never seen, and it stores it.
And it can be queried on attribute values.

I don't know exactly how it works, but it could be like this:
I think it works on attribute paths.
If so, it resembles OpenEHR, which works on archetype paths, which are 
representations of attribute paths.

Every leaf value has a path pointing to it.
Every Locatable has a UID.
Together, these make every single leaf value unique.
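The UID-plus-path idea can be sketched in a few lines of Python (a 
hypothetical illustration only, not openEHR code; a plain nested dict 
stands in for an RM instance, and all names are made up):

```python
def flatten(obj, prefix=""):
    """Yield a (path, value) pair for every leaf in a nested structure."""
    if isinstance(obj, dict):
        for key, child in obj.items():
            yield from flatten(child, f"{prefix}/{key}")
    elif isinstance(obj, list):
        for i, child in enumerate(obj):
            yield from flatten(child, f"{prefix}[{i}]")
    else:
        yield (prefix, obj)

# a toy Locatable-like structure; field names are illustrative only
locatable = {"name": {"value": "blood pressure"},
             "data": {"items": [{"value": 120}, {"value": 80}]}}

# uid + path together identify every single leaf value uniquely
rows = [("uid-001", path, value) for path, value in flatten(locatable)]
```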

Imagine: when the data is split over tables, your query engine needs more 
than one SQL statement, or has to join several statements; it needs to 
open indexes, read key fields and jump around in indexes, and so on.
Every table/class involved has a cost. The more complicated a Locatable 
object is, the more expensive it is to store.

It is also possible to flatten the whole business into one table. One 
simple query then retrieves a complete Locatable, using only one index.
Implementing AQL is also not very hard, because the necessary information 
is available in indexed form.

That must be very, very fast; I don't think faster is possible. Only one 
SQL statement to retrieve all values of a Locatable.
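A minimal sketch of the thin-table idea, using SQLite from the Python 
standard library (table and column names are my own assumptions here, not 
an openEHR schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE leaf (uid TEXT, path TEXT, value TEXT)")
con.execute("CREATE INDEX leaf_uid ON leaf (uid)")  # the one index used

# every record is one leaf value of a Locatable
con.executemany("INSERT INTO leaf VALUES (?, ?, ?)", [
    ("uid-001", "/name/value", "blood pressure"),
    ("uid-001", "/data/items[0]/value", "120"),
    ("uid-001", "/data/items[1]/value", "80"),
])

# one SQL statement retrieves the complete Locatable
rows = con.execute(
    "SELECT path, value FROM leaf WHERE uid = ?", ("uid-001",)).fetchall()

# AQL-style queries become path predicates on the same table
hits = con.execute(
    "SELECT uid FROM leaf WHERE path = ? AND CAST(value AS INTEGER) > 100",
    ("/data/items[0]/value",)).fetchall()
```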

And there is no problem with the large number of records in that table, 
and there is no scaling problem.
Every record is, in fact, one leaf value.
Rumor has it that Postgres is the fastest (I heard Oracle has trouble 
admitting this) up to about 100 million records; after that one should 
start thinking about NoSQL solutions.
100 million records correspond to about 25,000 average patients, at 4,000 
data values per average patient.
The older versions of Locatables go, of course, to a separate table. 
Some NoSQL databases have a versioning system of their own.

But there are some disadvantages: you must create RM instances from path 
notations, you must keep pointers to parents, and the software logic is 
more complicated than dumping everything to Hibernate or XML. But solving 
this is not exactly rocket science, and because the kernel is stable, the 
code lasts a long time. Even when the RM specs change, it can support 
several versions of the RM model in one table simultaneously, without 
migrating.
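Creating instances back from path notations is the fiddly part; a toy 
sketch of the inverse step (plain dicts again, with list indices kept as 
literal keys for brevity; illustrative only, not openEHR code):

```python
def unflatten(rows):
    """Rebuild a nested dict from (path, value) rows; inverse of flattening."""
    root = {}
    for path, value in rows:
        parts = path.strip("/").split("/")
        node = root
        for key in parts[:-1]:       # walk or create intermediate nodes
            node = node.setdefault(key, {})
        node[parts[-1]] = value      # set the leaf value
    return root

obj = unflatten([("/name/value", "blood pressure"),
                 ("/data/items[0]/value", 120)])
```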

Think about it.

I did think about it; the kernel speed improved by a factor of 50.

Some months ago I saw a technician who had written an SQL engine for an 
HL7 RIM database. The engine created SQL statements to retrieve values. I 
cannot remember the exact numbers, but I believe about 200 tables were 
involved, and each (automatically generated) SQL statement was at least 
100 lines long.
That is easy for programmers; development goes quickly and without 
errors. But is it good?

I think speed and simplicity are necessary for success:
speed because of the simple table design and short SQL statements, 
simplicity because archetypes, not software, define the job.

regards
Bert Verhees



openEHR - Persistence of Data

2012-02-18 Thread Bert Verhees
On 18-02-2012 2:13, Bert Verhees wrote:
 It is also possible to flatten the whole business to one table. Only one
 simple query retrieves a complete locatable.
Flatten is not the right term; it suggests a very wide table.
I meant a very thin table, with only three main fields: the UID, the path 
and the value.
Plus a few fields for meta-information; that is all it needs.

The concept is known as key/value pairs.

Bert Verhees



Meaningful Use and Beyond - O'Reilly press - errata

2012-02-18 Thread Thomas Beale
 (and even 
back-ends).

That's the 'sell'. It has taken some time, but all the proof is there 
now; all that is needed is some further building out of tools and 
creation of the final specifications.

hope this clarifies

- thomas



Meaningful Use and Beyond - O'Reilly press - errata

2012-02-18 Thread fred trotter
 (please, no flame wars, below I am just trying to explain _my_ point of
 view to Fred;-)


There is no need to worry about a flame war. I am certainly dubious, but I
take what you guys are doing and saying very seriously.
It seems like you are taking a totally different approach to semantic
interoperability than I generally favor.

My view is that semantic interoperability is simply a problem we do not
have yet. It is the problem that we get after we have interoperability of
any kind. This is why I focus on things like the Direct Project (
http://directproject.org) which solve only the connectivity issues. In my
view once data is being exchanged on a massive scale, the political
tensions that the absence of true meaning creates will quickly lead to
the resolution of these types of problems.

The OpenEHR notion, on the other hand, is to create a core substrate within
the EHR design itself which facilitates interoperability automatically. (is
that right? I am trying to digest what you are saying here). Trying to
solve the same problem on the front side as it were.

Given that there is no way to tell which approach is right, there is no
reason why I should be biased against OpenEHR, which is taking an approach
that others generally are not.

If that is the right core value proposition (and for God's sake tell me now
if I am getting this wrong) then I can re-write the OpenEHR accordingly.

Regards,
-FT

-- 
Fred Trotter
http://www.fredtrotter.com


Meaningful Use and Beyond - O'Reilly press - errata

2012-02-18 Thread pablo pazos

Hi Fred,




The OpenEHR notion, on the other hand, is to create a core substrate within the 
EHR design itself which facilitates interoperability automatically. (is that 
right? I am trying to digest what you are saying here). Trying to solve the 
same problem on the front side as it were.



I think that's more accurate, but "substrate" is a little ambiguous here. 
I would rather say that openEHR proposes a generic, standardized 
architecture based on the dual model (separating software from custom 
domain concepts). That architecture enables and simplifies 
interoperability later, because the information to be interchanged 
between systems is formally defined (by archetypes: 
http://www.openehr.org/knowledge/). So any communication protocol and 
data format can be used for interoperability, and systems can interchange 
not only data, but the information definition too.
The key here is that within an openEHR-based system, other standards like 
HL7, DICOM, SNOMED, MeSH, UMLS, ICD-10, ... can be implemented too, each 
one for its own task.

Hope that helps.
-- 
Kind regards,
Ing. Pablo Pazos Gutiérrez
LinkedIn: http://uy.linkedin.com/in/pablopazosgutierrez
Blog: http://informatica-medica.blogspot.com/
Twitter: http://twitter.com/ppazos
  


Meaningful Use and Beyond - O'Reilly press - errata

2012-02-18 Thread Bert Verhees
On 18-02-2012 22:24, pablo pazos wrote:
 The key here is that within an openEHR-based system, other standards 
 like HL7, DICOM, SNOMED, MeSH, UMLS, ICD-10, ... can be implemented 
 too, each one for its own task.

Supplementary to what Pablo wrote, I have a real life example.

In the Netherlands, HL7v3 messaging was to become mandatory for every 
health-related system, from the kitchen in a health institution to a GP 
system or a medical specialist's system.
The idea was (put very simply) to create a message-oriented network to 
which all these systems would connect.
Every health-related system was expected to run HL7v3 messaging on top of 
it, or it would be excluded from this network and, as a result, possibly 
also excluded from business in healthcare.
So the pressure was high, and most systems succeeded in producing and 
reading HL7 messages. Most systems, of a great variety in architecture, 
platform, etc., can now implement HL7v3 messaging; an OpenEHR system, 
with all its flexibility, can too.
---
In the end the HL7v3 network failed, for privacy reasons (simply stated). 
Maybe it gets a second chance, but that will take some years, until the 
next change of the senate, and it will not be easy.

Then a strange thing happened in the Netherlands.
Now that the HL7v3 network had failed, for reasons that have nothing to 
do with HL7v3 itself, many systems hurried back to the messaging 
standards they used before.
That is mainly Edifact messages and HL7 v2.x, defined 15 years ago or 
more: the old workhorses.
(An OpenEHR system can also produce these messages, as any system can.)

Why this switching back? Is it for technical reasons?
From a semantic point of view, HL7v3 is much better than the legacy 
messaging systems.
So why not use it, if the law doesn't force the choice and the 
implementation was ready in most systems?
Why switch back to these old legacy systems, often implemented with errors?

I don't know for sure.

I think one reason is that the new network never came to life, and the 
organisations had to fall back on their old systems, which could only run 
on the legacy message standards.
---
What we can see, from a market perspective in the Netherlands, is that 
HL7v3 messaging is not getting implemented. The old workhorses do the job 
more or less satisfactorily.
Dutch technicians value the American saying: if it ain't broke, don't 
fix it.

And how about OpenEHR? There are several projects where it is being 
implemented; some large companies are involved, and some universities too.
The main reason? The flexibility it offers for building systems and the 
ease of connecting to messaging standards and to non-standardized (or de 
facto standard) messaging protocols.

Bert





Meaningful Use and Beyond - O'Reilly press - errata

2012-02-18 Thread Thomas Beale

Fred,

that's pretty much it. We can disagree over whether we should solve the 
semantic-interoperability problem now (us; harder, longer) or later (you; 
get more going faster), but that's not a real debate: in some places our 
view makes more sense, in others yours is the practical, sensible 
approach. Our main aim is to enable intelligent computing on health data; 
doing that means semantic interoperability has to be solved. Otherwise, 
there is no BI, CDS or medical research based on the data.

My only worry about not taking account of semantic/meaning issues now is 
that it will cost more later than if they were addressed now. I still 
think there is synergy to be explored in the coming 12 months to 2 years 
between the openEHR community and the open-source health apps community 
(if I can call it that).

- thomas

