Last point: I seem to be interested in a current encounter (the now) and 
diagnosis, the article seems to be interested in an arguably just as useful 
tool, the longitudinal problem list (the ever), though very different I would 
think in approach. 




Thoughts? 

Jg







—
Sent from Mailbox for iPhone

On Thu, Oct 31, 2013 at 7:22 PM, John Green <john.travis.gr...@gmail.com>
wrote:

> Sean - quick note: after looking at the above two resources, a couple of 
> points.  The first resource confirms what I expected, that the vocabulary 
> exists in ctakes. The second confirms what I suspected: that novel approaches 
> to ordering and identification of top members of a problem list are needed. 
> Namely, that the vocabulary may be there, but thats only a tenth of the 
> battle. Your second great resource you sent me acknowledges this - that 
> prioritization, eg enumeration from most important to least, as well as 
> clumping, are the true battle.
> A point of clarification on my end: it would be interesting to see what could 
> be added on top of existing ctakes in order to facilate a solution to the 
> second problem - clumping and prioritizing. (For instance, from the second 
> article, an acute process may have nothing todo with the past medical history 
> and if an algorithm were concerned with all members as equals, it would miss 
> the issue at hand). 
> Just as a thought: working back from the known natural history of diseases 
> would possibly be a route to a solution.
> This is probably well known stuff, so please forgive my ignorance if its all 
> been done/thought of before.
> Again, the two links were very helpful, thank you.
> Jg
> —
> Sent from Mailbox for iPhone
> On Thu, Oct 31, 2013 at 2:04 PM, Finan, Sean
> <sean.fi...@childrens.harvard.edu> wrote:
>> I don't know if what I write below truly applies to the discussion, but here 
>> it is.
>>>much of a problem list definition may already be contained to varying degrees
>>> in existing cTakes databases.
>> The UMLS does provide a problem list, but I haven't looked at it.
>> http://www.nlm.nih.gov/research/umls/Snomed/core_subset.html
>> This might be a paper of interest to you:
>> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2655994/
>> It discusses the use of nlp to create something like a problem list.
>> Sean
>> ________________________________________
>> From: John Green [john.travis.gr...@gmail.com]
>> Sent: Thursday, October 31, 2013 12:02 PM
>> To: dev@ctakes.apache.org
>> Subject: Re: Sundry
>> Pei and Tim - Good questions.
>> The bottom line is that OPQRST is the algorithm that every clinician uses
>> to characterize the history of a sign, symptom or constellation of
>> symptoms. Each letter has multiple meanings, but generally they're grouped.
>> O for onset, was it quick or slow in onset, P for palliative or provoking
>> phenomenon, that is, does tylenol make it better? Does it feel better when
>> you lean forward? Is it worse with standing? Q is the quality, generally,
>> though I could give more examples of each Ill keep it brief from here, R is
>> generally region or radiation of the pain and or sign, S is the severity,
>> and T is the time course, is it intermittent? When it happens, how long
>> does it last for? I could send documents used to teach new clinicians to
>> better comprehend for anyone interested.
>> OPQRST, while most residents would assume it is only for teaching new
>> clinicians, as Tim said, is a useful tool at all levels. Great clinicians,
>> and I work with some great senior folks, use this everyday. The idea that
>> it is only for teaching is founded on two things: one, that it doubles as a
>> structured mnemonic for characterizing signs and symptoms and two, that
>> everyone so far ingrains this into their clinical skill set, unless they
>> are geared toward teaching, they, after the basic level, never think about
>> it again! Caveat: many good clinicians will tell you to keep it algorithmic
>> so that you're systematic and do not overlook details.
>> What is it's application to ML? Obviously the furthest desired end-state
>> for NLP like cTakes would be understanding a clinical encounter to such a
>> nuanced level that detailed diagnoses could be considered along with
>> treatment plans. While I only know what I've read in Artificial
>> Intelligence: A Modern Approach and picked up from friends over the years
>> who were good knowledgeable in this field, I feel that OPQRST would be a
>> huge benefit toward beginning to outline the problem of more rigorous ML
>> characterization of the clinical narrative.
>> The utility of OPQRST may not still be entirely clear to those who have
>> never been presented with a clinical encounter. Let me try one more stab:
>> Take the classic example of chest pain. A man comes to the ER with chest
>> pain. Is the onset quick? Yes doc, it was all of a sudden. This might
>> support a diagnosis of, say, MI, aortic dissection, pulmonary embolism, but
>> less likely someone would call GERD sudden. Palliative or provoking
>> features? Well, when I take 8 antacids it gets better (GERD), or, When I
>> take my wifes nitroglycerine it got better for a little while (angina), or,
>> when I took my wifes nitroglycerine it did nothing (pericarditis?).
>> Quality: Is it stabbing? Ya doc, its stabbing (less likely MI). Is it
>> crushing? Like an elephant on your chest? Ya doc, that's it. (more likely
>> MI), and so on.
>> Now of course, cTakes could be used for a real life encounter like this
>> (middleware) at some point, but likely it would be taking a history and
>> proposing a diagnosis (middleware again Tim, yes). But the point is, the
>> first steps toward knowing what were dealing with at the historical level
>> is centered around OPQRST, and it just occurred to me to ask what we
>> thought about the feasibility of something like that.
>> In retrospect, it may be too tough, but at some point it would need done,
>> just as much as a clinician must learn it.
>> One final point: problem lists. These are absolutely essential to any
>> clinician in making a diagnosis. Again, often times, they dont think about
>> it, but they use them. When writing the above it occurred to me: much of a
>> problem list definition may already be contained to varying degrees in
>> existing cTakes databases. It would be an interesting and worthwhile paper,
>> I think, to see how well cTakes compiled problem lists matched Medical
>> Students, Residents, and Attending physician's problem lists. If anyone is
>> interested in this line of thought, I would be interested in collaborating.
>> It would be very easy, and the data may actually already exist to compare.
>> Forgive me if its already been done, but, if it hasnt, then it would go a
>> long way toward proving cTakes efficacy in regards to high-order processes.
>> And if it hasnt been done and someone does it at a later date, please, send
>> me an email to the paper!
>> JG
>> On Wed, Oct 30, 2013 at 10:08 AM, Tim Miller <
>> timothy.mil...@childrens.harvard.edu> wrote:
>>> Thanks for bumping this Pei, it reminds me I meant to respond to it.
>>>
>>> The OPQRST does sound like a great ML project. At a glance I might think a
>>> sequence model over sentences (like a CRF) would be a good model.
>>> But I'm wondering what the end use case is? Is it for teaching OPQRST to
>>> new clinicians? Or maybe as a sort of middleware for other projects where
>>> it might be a useful feature? Without a physician's intuition I sometimes
>>> suffer from a failure of imagination on these things.
>>>
>>> Tim
>>>
>>>
>>>
>>> On 10/30/2013 09:59 AM, Chen, Pei wrote:
>>>
>>>> Hi John,
>>>> I was away for a little bit and finally got a chance to catch up on
>>>> emails...
>>>>
>>>>  2) I work for the DoD and have latched on to several IRB approved
>>>>> projects
>>>>> within that community where Ill be using cTakes, though minimally at
>>>>> first.
>>>>> This is just a statement, a bug in the ear of the community of what
>>>>> people
>>>>> are up to.
>>>>>
>>>> This is really news!  Looking forward to hearing more...
>>>>
>>>>  has anyone considered (and maybe the components already do this in some
>>>>> way I
>>>>> haven't explored yet - time is ever limited) adding an OPQRST classifier?
>>>>>
>>>> I'm not too familiar on how OPQRST would be determined from the patient's
>>>> record.
>>>> Just curious, how is it currently determined manually now?  Is it a
>>>> single score determined by a formula/rule(s)?
>>>> Seems like another good use case for cTAKES output-- clinically focused.
>>>> --Pei
>>>>
>>>
>>>

Reply via email to