All,

One workaround for the 3-minute query timeout that Dan mentioned is to change 
the default timeout from 180 seconds to 1 second after logging in to the 
webclient.

To do this, select the icon with a green check mark in the top right of the 
"query tool" tab in the webclient, then set the timeout to 1 second. The 
webclient will still report that the query timed out, but it will do so almost 
immediately after running the query rather than 3 minutes later.

See ticket #24, comment 4 
<http://informatics.gpcnetwork.org/trac/Project/ticket/24?cnum_edit=4#comment:4> 
for screenshots.
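For context, the timeout setting above is purely client-side: the webclient polls the server and gives up reporting after the configured interval, while the query itself keeps running (or hanging) on the server. A minimal sketch of that polling pattern, with hypothetical names (this is not the actual i2b2 webclient code):

```python
import time

def poll_with_timeout(check_done, timeout_s, interval_s=1.0):
    """Poll check_done() until it returns a result or timeout_s elapses.

    Returns the result, or None on timeout. Note that lowering timeout_s
    only makes a stuck query *report* failure sooner -- it does not fix
    whatever is keeping the query from finishing on the server.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = check_done()
        if result is not None:
            return result
        time.sleep(interval_s)
    return None

# A query that never finishes now times out after ~1 s instead of 180 s:
stuck_query = lambda: None
print(poll_with_timeout(stuck_query, timeout_s=1, interval_s=0.2))  # None
```

This is why the workaround saves time without changing results: the failure mode is the same, only the wait is shorter.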

I plan to keep looking for the root cause of the queries timing out (rather 
than returning a count of 0 patients right away, as I think they should). In 
the meantime, the workaround noted above should save time for those working on 
the scavenger hunt.

Thanks.

Regards,

Nathan

From: Greater Plains Collaborative Software Development 
[mailto:GPC-DEV@LISTSERV.KUMC.EDU] On Behalf Of Dan Connolly
Sent: Monday, February 03, 2014 1:03 AM
To: GPC-DEV@LISTSERV.KUMC.EDU
Subject: cohort characterization data elements scavenger hunt in babel

Now that 
babel<http://informatics.gpcnetwork.org/trac/Project/wiki/SoftwareDev#babel> is 
more or less working, in discussion with Russ and Jim we did this exercise a 
couple of times: picking a proposed data element for characterizing our ALS, 
breast cancer, or obesity cohort from table 6.1 of the GPC 
proposal<http://frontiersresearch.org/frontiers/sites/default/files/frontiers/documents/GPC-PCORI-CDRN-Research-Plan-Template-KUMCv44.pdf>
 and looking it up in the terminologies from each of our sites:



The "data elements save test" shows that it's possible to save a query, and 
hence a set of terms, even though there are no facts in the system. 
Unfortunately, it's a bit tedious due to...

  *   #24<http://informatics.gpcnetwork.org/trac/Project/ticket/24> babel 
queries run 3 minutes before they're available for saving or sharing

But you can see that we found some BRCA1 cancer marker terms. And I was a 
little puzzled by what I found for BMI, just within my own backyard (KUMC 
HERON).

As I thought about doing this for even a dozen or so data elements for each 
cohort across our sites, I realized this is a lot of work. So after a few more 
kinks are worked out, I'm inclined to start a sort of scavenger hunt to 
"crowdsource" the work:

  *   1 point for saving a query of any sort in the SHARED folder (like the 
"data elements save test")

     *   up to 3 per team. No grinding. ;-)

  *   5 points for a query that shows use of one of the proposed data elements 
from at least one site, clearly indicating the relevant cohort(s). (BRCA1 would 
qualify if I put "breast cancer" in the name, or put it in a "breast cancer" 
subfolder of SHARED, which doesn't work just yet.)
  *   10 points for a query that shows consistent use of such an i2b2 concept 
across at least two sites.
  *   15 points for a query that turns up a data modelling issue based on 
differences in i2b2 usage within or between sites (like the BMI query that 
turned into #23<http://informatics.gpcnetwork.org/trac/Project/ticket/23>)
  *   30 points for a query that captures the results of surveying all the 
sites for some data element
  *   add 15 points to the above for coordinating with the relevant cohort 
experts to get them to endorse the usage as sensible.
  *   40 points for taking something from table 6.1 that's too vague, such as 
"diagnostic tests" and coordinating with the relevant ALS/cancer/obesity 
experts to refine it to one or more terms that we might expect to find in our 
i2b2 data
  *   100 points for a query that is endorsed by the respective cohort experts 
as capturing all of the data elements necessary to do (preliminary) cohort 
characterization.
Multipliers for use of MU2 standards should fit in there somewhere... hmm...

While we're working the kinks out, I'm open to input on what the points should 
be redeemable for. ;-)

--
Dan
