Providing health and care is partly a science and to a large extent an art,
which means that humans are needed.

Artificial Intelligence is a nice, hyped scientific topic and nothing more.

That is not to say that AI cannot play a role and be of use.
But it needs to be properly designed and engineered, not hacked together.
It is certain that AI applications in healthcare must be treated as Medical
Devices.

For it to function properly, we need to be able to document healthcare topics
including their full context and epistemology.
Present openEHR/13606 and terminology developments form a good foundation,
but they are not sufficient.
What is lacking are well-researched and well-designed shared patterns that
capture the full context and epistemology.
CIMI is trying to do that.
CIMI, now part of HL7, is possibly diverging and coming under the influence of
practical thinking as it adjusts its goals to encompass FHIR.

GF


Gerard   Freriks
+31 620347088
  gf...@luna.nl

Kattensingel  20
2801 CA Gouda
the Netherlands

> On 25 Jun 2018, at 12:21, Stefan Sauermann <sauerm...@technikum-wien.at> 
> wrote:
> 
> An 82% correct recognition rate is a disaster in healthcare.
> 74% is even worse.
> 
> My evidence-based feeling is that we will still need to sort it out manually 
> for some years to come.
> 
> Hope this helps,
> Stefan
> 
> Stefan Sauermann
> 
> Program Director
> Biomedical Engineering Sciences (Master) ->
> Medical Engineering & eHealth (Master) in September 2018!
> 
> University of Applied Sciences Technikum Wien
> Hoechstaedtplatz 6, 1200 Vienna, Austria
> P: +43 1 333 40 77 - 988
> M: +43 664 6192555
> E: stefan.sauerm...@technikum-wien.at
> I: www.technikum-wien.at/mme
> I: www.technikum-wien.at/bhse
> I: healthy-interoperability.at
> fb: www.facebook.com/uastwMME
> portfolio: https://mahara-mr.technikum-wien.at/user/sauermann
> 
> On 23 Jun 2018, at 18:11, Bert Verhees wrote:
>> Today my wife showed me Plantnet.
>> 
>> https://plantnet.org/en/
>> 
>> It recognizes over 6000 plants when you show a flower or a leaf to your 
>> phone. It has learned, through machine learning, from 700,000 pictures, and 
>> its knowledge grows stronger every day, because it keeps on learning. And it 
>> is not only the looks of a flower: if it also takes location (biotope) and 
>> date into consideration, the certainty of the recognition gets stronger.
>> 
>> Now you can imagine that it must be hard to recognize a plant from a 
>> picture, without seeing its dimensions, and shown from many possible angles, 
>> in sunlight, cloud or twilight.
>> 
>> I was impressed by how good it already is: very advanced computer knowledge, 
>> for free, in the hands of millions.
>> 
>> There is also an app, which I did not try, that recognizes birds from audio. 
>> You walk somewhere, hear a bird, and want to know what kind of bird it is.
>> 
>> The Berlin Natural History Museum runs a contest of 29 teams using 23 
>> different methods, with more than 82% correct identifications for isolated 
>> bird recordings, and more than 74% correct identifications for recordings 
>> mixing several bird songs.
>> 
>> 
>> I often notice a trend of thinking that machine learning cannot be of much 
>> help: see how miserably Google Translate translates. But then we forget to 
>> see how much progress is being made in other areas.
>> 
>> Why am I writing this? Just to let you think about it.
>> 
>> I wonder: is openEHR usable for recognizing patterns in diseases through 
>> machine learning? Isn't there, behind every diagnosis, a small cloud of 
>> archetypes which forms a pattern? The features for recognition/learning 
>> should not be found only in archetype IDs, although those can help a lot; it 
>> should also look at datatypes, their semantics and their relations.
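>> 
>> Just to illustrate what I mean (a rough sketch only; the flat path/value 
>> form, the archetype IDs and the toy diagnoses below are my own assumptions, 
>> not an openEHR API), something like scikit-learn could already learn from 
>> compositions treated as bags of archetype-node features:
>> 
>>     from sklearn.feature_extraction import DictVectorizer
>>     from sklearn.linear_model import LogisticRegression
>> 
>>     # each composition flattened to {archetype-qualified path: value}
>>     compositions = [
>>         {"openEHR-EHR-OBSERVATION.blood_pressure.v2/systolic": 180,
>>          "openEHR-EHR-OBSERVATION.blood_pressure.v2/diastolic": 110,
>>          "openEHR-EHR-OBSERVATION.body_weight.v2/weight": 95},
>>         {"openEHR-EHR-OBSERVATION.blood_pressure.v2/systolic": 118,
>>          "openEHR-EHR-OBSERVATION.blood_pressure.v2/diastolic": 76,
>>          "openEHR-EHR-OBSERVATION.body_weight.v2/weight": 70},
>>     ]
>>     labels = ["hypertension", "healthy"]         # toy diagnoses
>> 
>>     vec = DictVectorizer(sparse=False)           # paths become feature columns
>>     X = vec.fit_transform(compositions)
>>     model = LogisticRegression().fit(X, labels)  # learns the "cloud of archetypes"
>> 
>>     new_case = {"openEHR-EHR-OBSERVATION.blood_pressure.v2/systolic": 175,
>>                 "openEHR-EHR-OBSERVATION.blood_pressure.v2/diastolic": 105}
>>     print(model.predict(vec.transform([new_case])))
>> 
>> The point is only that the archetype-qualified path itself carries the 
>> semantics, so the same feature means the same thing across systems.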
>> 
>> Isn't openEHR better for recognizing patterns than any classic storage 
>> structure, because the data structures in openEHR are semantic models, 
>> instead of some weird Codd structure which only has technical reasons to 
>> exist?
>> 
>> (Classic data stored in classic SQL schemas could be brought over to 
>> archetyped structures, to make the base for machine learning larger.)
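>> 
>> As a rough sketch of that idea (the table, columns and archetype paths are 
>> invented for illustration), lifting rows out of a relational schema into the 
>> same flat path/value form could look like:
>> 
>>     import sqlite3
>> 
>>     conn = sqlite3.connect(":memory:")
>>     conn.execute("CREATE TABLE vitals (patient_id INT, systolic INT, diastolic INT)")
>>     conn.execute("INSERT INTO vitals VALUES (1, 150, 95)")
>> 
>>     # hand-written mapping from legacy columns to archetype-qualified paths
>>     COLUMN_TO_PATH = {
>>         "systolic":  "openEHR-EHR-OBSERVATION.blood_pressure.v2/systolic",
>>         "diastolic": "openEHR-EHR-OBSERVATION.blood_pressure.v2/diastolic",
>>     }
>> 
>>     for patient_id, systolic, diastolic in conn.execute(
>>             "SELECT patient_id, systolic, diastolic FROM vitals"):
>>         flat = {COLUMN_TO_PATH["systolic"]: systolic,
>>                 COLUMN_TO_PATH["diastolic"]: diastolic}
>>         print(patient_id, flat)   # ready to join the training data above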
>> 
>> I think that, when this is developed, we should be able to get at least two 
>> advantages.
>> 
>> 1) We would not need CKM anymore: computers can understand archetypes, so we 
>> would not need to restrict ourselves to a limited number. We could also use 
>> archetypes we do not know, and maybe will never know. We might not even need 
>> archetypes anymore, except as reminder/instruction; the computer could create 
>> the archetypes on the fly when seeing the kind of data, the relations and the 
>> diagnosis (a toy sketch of this follows after point 2).
>> 
>> 2) We could use the patterns to recognize healthcare situations, and maybe 
>> treat/handle/cure on the basis of instructions coming from machine learning.
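>> 
>> The "archetypes on the fly" idea from point 1, as a toy sketch (my own 
>> speculation, not how archetype authoring actually works): scan flattened 
>> compositions and derive, per path, the observed data type and value range, 
>> i.e. a rudimentary machine-made constraint model:
>> 
>>     from collections import defaultdict
>> 
>>     def infer_constraints(compositions):
>>         """Derive a per-path type and value range from observed instances."""
>>         seen = defaultdict(list)
>>         for comp in compositions:
>>             for path, value in comp.items():
>>                 seen[path].append(value)
>>         return {path: {"type": type(values[0]).__name__,
>>                        "min": min(values),
>>                        "max": max(values)}
>>                 for path, values in seen.items()}
>> 
>>     samples = [
>>         {"openEHR-EHR-OBSERVATION.blood_pressure.v2/systolic": 150},
>>         {"openEHR-EHR-OBSERVATION.blood_pressure.v2/systolic": 121},
>>     ]
>>     print(infer_constraints(samples))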
>> 
>> Some thoughts from walking with my wife through the wonderful dunes, with 
>> their special vegetation. Maybe I should write a blog about it.
>> 
>> Have a nice day.
>> 
>> Bert
>> 
>> 
>> 
> 
> 
