--- In FairfieldLife@yahoogroups.com, ruthsimplicity <no_re...@...> wrote:
>
> --- In FairfieldLife@yahoogroups.com, "sparaig" <LEnglish5@> wrote:
> >
> > --- In FairfieldLife@yahoogroups.com, Vaj <vajradhatu@> wrote:
> > >
> > > On Feb 14, 2009, at 8:06 PM, ruthsimplicity wrote:
> > >
> > > > In TM research there is a prevalence of small, nearly insignificant
> > > > results. This is ripe for seeing a pattern when there is none. If
> > > > the results were dramatic, then the attention of outside researchers
> > > > is attracted and usually the work is either confirmed or debunked.
> > > > Like cold fusion. But if your blood pressure drops two points or
> > > > your IQ increases 2 points, even if statistically significant, it is
> > > > hard to get outside people very interested because it just isn't
> > > > that interesting.
> > >
> > > Well, the idea and approach of the TM org is to not mention the actual
> > > figures, or not mention them in a way that makes the obviously
> > > insignificant result seem small. So instead of saying "TM reduces
> > > blood pressure 0.08% from normal baseline BP in healthy individuals"
> > > they'll instead push something like "TM reduces blood pressure, TM
> > > decreases blood pressure, TM is good at reducing blood pressure", etc.
> > > and saturate the web and broadcast media as much as they can. In other
> > > words, instead of poisoning the well, they sweeten it. People like
> > > "sweet" news.
> >
> > Marketing is another issue.
> >
> > L
>
> Yes, but it is hard to separate the issues. We acknowledge that everyone
> has some bias and everyone likes to be right. This is exhibited in risks
> of confirmation bias and risks of using too narrow an approach. However,
> the risks are not the same for everyone everywhere. A marketing blitz by
> your supporting organization which tends to exaggerate results reflects
> on you as part of the organization. Some, like Orme-Johnson and Hagelin,
> both market and research, which makes it look like they are even more
> biased than most.
Researchers of Buddhist meditation are interviewed by NPR to tout their
latest studies, make comments about how they "already knew that Buddhist
meditation worked" and didn't need to perform the studies they were doing
to show it worked, etc. The bias may not be as obvious, or as
straightforward, but it certainly is there in many cases, IMHO.

> The woman who did the ADHD pilot study has participated in marketing her
> study. Travis has done talks that wax eloquent about the power of TM.
> How often do the TMO researchers test alternative hypotheses? And isn't
> a particular complaint of TM research that there is evidence of
> expectation bias in that they view all their data as fitting their
> expectation that TM works?

See above. Anyone who practices the technique they study has that problem,
IMHO.

> It is all part of trying to evaluate the bias risks. We do not have
> access to their actual procedures, to their hard data. We can't know to
> what extent their biases affect a particular study. But given the fact
> that false positives are likely prevalent in research anyway, that
> Orme-Johnson has said that they lean towards trying to show an effect in
> their research, that many of the TM researchers participate in
> exaggerated marketing claims, that the TMO researchers truly believe TM
> works, my bias concerns are greater with the TMO than with Davidson. All
> bias is not created equal.

False positives AND false negatives are prevalent in research.

> This is separate from my discussion of pattern recognition, but, as all
> these things are, it is related. The issue of pattern recognition is
> two-fold. One is positive, the ability of trained experts to spot new
> and interesting patterns. The other is negative, the risk of seeing a
> pattern when none is there.

The Law of Fives is an interesting thing, but hopefully statistics and
good faith scientific procedures will reduce it sufficiently, in the long
run, to allow us to get some idea of what is what.

L
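
P.S. To make the false-positive/false-negative point concrete, here is a
minimal Python sketch, using purely hypothetical numbers rather than
anyone's actual data or study design. It runs many simulated two-group
"studies" and counts how often they come out "significant" when there is
no effect at all (the pattern-when-none-is-there risk) and when there is
only a tiny effect measured with small samples (where most studies miss
it).

# Toy simulation of false positives and false negatives across many small
# studies. All numbers here are hypothetical illustrations, not TM data.
import math
import random
import statistics

def two_sample_test(a, b):
    """Welch-style t statistic with a rough two-sided p-value from a
    normal approximation (good enough for illustration)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    t = (mean_a - mean_b) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

def simulate(n_studies=1000, n_per_group=20, true_effect=0.0,
             alpha=0.05, seed=1):
    """Share of simulated studies reaching p < alpha when the true effect
    is `true_effect` standard deviations (0.0 means nothing to find)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(true_effect, 1.0) for _ in range(n_per_group)]
        _, p = two_sample_test(treated, control)
        if p < alpha:
            hits += 1
    return hits / n_studies

if __name__ == "__main__":
    # No real effect: roughly alpha (about 5%) of studies still come out
    # "significant" -- these are the false positives.
    print("no effect, share significant:", simulate(true_effect=0.0))
    # A small real effect (0.2 SD) with 20 subjects per group: most studies
    # fail to detect it -- these are the false negatives.
    print("small effect, share significant:", simulate(true_effect=0.2))

Run with enough studies and small samples, the "no effect" case keeps
producing scattered significant results, which is exactly where careful
statistics and replication have to do the work the eye won't.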