At 04:40 PM 6/3/2005, rmiller wrote:
At 03:25 PM 6/3/2005, you wrote:

(snip)
I spoke with Schmidt in '96. He told me that it is very unlikely that causation can be reversed; rather, the retro-PK results suggest many worlds.

But that is presumably just his personal intuition, not something based on any experimental data (a message from a possible future or alternate world, for example).

Actually, he couldn't say why the result came out the way it did. His primary detractor back then was Henry Stapp---whom Schmidt invited to take part in the experiment, after which Stapp modified his views somewhat.

When these ESP researchers are able to do a straightforward demonstration like this, that's when I'll start taking these claims seriously; until then, "extraordinary claims require extraordinary evidence".
(snip)

The issue is not the Z score in isolation; it's 1) whether we trust that the correct statistical analysis has been done on the data to obtain that Z score (whether reporting bias has been eliminated, for example)--that's why I suggested the test of trying to transmit a 10-digit number using ESP, which would be a lot more transparent (see the sketch below)--and 2) whether we trust that the possibility of cheating has been kept small enough, a condition which, as the article I linked to suggested, may not have been met in the PEAR results.
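What makes the 10-digit test transparent is that its chance odds need no analysis choices at all. A minimal sketch (Python; the only assumption is that each digit is guessed independently at random):

import math

# Chance probability of transmitting a specific 10-digit number by luck:
p_chance = 0.1 ** 10            # one in ten billion
print(f"P(correct by luck) = {p_chance:.0e}")

# Equivalent one-sided Z score for a single success at that probability,
# found by bisection on the normal tail P(Z > z) = 0.5 * erfc(z / sqrt(2)):
def z_from_p(p):
    lo, hi = 0.0, 40.0
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid        # tail still too fat; need a larger z
        else:
            hi = mid
    return (lo + hi) / 2

print(f"equivalent Z ~ {z_from_p(p_chance):.1f}")   # roughly 6.4

A single success there would outweigh any marginal Z score, with nothing to argue about afterward. From the article I linked: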


"Suspicions have hardened as sceptics have looked more closely at the fine detail of Jahn's results. Attention has focused on the fact that one of the experimental subjects - believed actually to be a member of the PEAR lab staff - is almost single-handedly responsible for the significant results of the studies. It was noted as long ago as 1985, in a report to the US Army by a fellow parapsychologist, John Palmer of Durham University, North Carolina, that one subject - known as operator 10 - was by far the best performer. This trend has continued. On the most recently available figures, operator 10 has been involved in 15 percent of the 14 million trials yet contributed a full half of the total excess hits. If this person's figures are taken out of the data pool, scoring in the "low intention" condition falls to chance while "high intention" scoring drops close to the .05 boundary considered weakly significant in scientific results.

First, you're right about that set of the PEAR results, but operator 10 was involved in the original anomalies experiments---she was not involved in the remote viewing (as I understand it). But p<0.05 is "weakly" significant? Hm. It was good enough for Fisher...it's good enough for the courts (Daubert).
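For scale, here is the arithmetic those Z scores come from in a huge-N binary experiment---hypothetical numbers, not the actual PEAR counts:

import math

N = 14_000_000                 # order of magnitude of trials from the quote
hit_rate = 0.5001              # a 0.01% edge over chance, purely illustrative
hits = N * hit_rate

z = (hits - N / 2) / (math.sqrt(N) / 2)   # normal approximation to the binomial
p = math.erfc(z / math.sqrt(2))           # two-tailed p-value

print(f"Z = {z:.2f}, two-tailed p = {p:.2e}")
# Z ~ 0.75 here; raise the edge to 0.501 and Z jumps past 7.

At these scales a sliver of bias per trial decides everything, which is why the operator 10 question matters.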


"Sceptics like James Alcock and Ray Hyman say naturally it is a serious concern that PEAR lab staff have been acting as guinea pigs in their own experiments. But it becomes positively alarming if one of the staff - with intimate knowledge of the data recording and processing procedures - is getting most of the hits.

I agree, but again, I don't think operator 10 was involved in all the experiments. Have any of these skeptics tried to replicate? I believe Ray Hyman is actually a psychology professor at the University of Oregon, so he may well be positioned to replicate some of the PEAR lab work himself---and surely there are others who could.

"Alcock says t(snip) . . . distort Jahn's results. "


If Hyman and Alcock believe Jahn et al. were cheating, then they shouldn't mince words; instead, they should file a complaint with Princeton.

Of course, both these concerns would be present in any statistical test, even one involving something like the causes of ulcers, as in the quote you posted above, but here I would use a Bayesian approach and say that we should start out with some set of prior probabilities, then update them based on the data. Let's say that in both the tests for ulcer causes and the tests for ESP, our estimate of the prior probability of either flawed statistical analysis or cheating on the part of the experimenters is about the same. But based on what we currently know about the way the world works, I'd say the prior probability of ESP existing should be far, far lower than the prior probability that ulcers are caused by bacteria. It would be extremely difficult to integrate ESP into what we currently know about the laws of physics and neurobiology. If someone could propose a reasonable theory of how it might work without throwing everything else we know out the window, that could cause us to revise these priors and see ESP as less of an "extraordinary claim," but I don't know of any good proposals (Sarfatti's, for example, seems totally vague on the precise nature of the feedback loop between the pilot wave and particles, and on how this would relate to ESP phenomena... if he could provide a mathematical model or simulation showing how a simple brain-like system could influence the outcome of random quantum events in the context of his theory, it'd be a different story).
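To make the priors point concrete, here is a toy version of the updating I have in mind (both priors and the Bayes factor are invented purely for illustration):

# Bayesian updating: the same evidence against very different priors.
def posterior(prior, bayes_factor):
    """Posterior probability from a prior and a likelihood ratio for H."""
    odds = prior / (1 - prior)
    post_odds = odds * bayes_factor
    return post_odds / (1 + post_odds)

bf = 100.0                      # suppose an experiment favors H by 100:1

print(posterior(0.10, bf))      # ulcers-from-bacteria prior: ~0.92
print(posterior(1e-8, bf))      # ESP prior: ~1e-6, still negligible

The same 100:1 evidence settles the ulcer question but barely moves the ESP one; that is all "extraordinary claims require extraordinary evidence" amounts to here.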

A couple of observations...

1. We don't really know the prior probabilities, hence the Bayesian approach might not be the best. Bayesian analysis starts with assumptions and works great when the assumptions are correct; it doesn't work so well when the assumptions are false (see the sketch after this list). Besides, I've always been suspicious of any great statistical theory attributed to a reclusive minister and published well after he's no longer around to explain how he came up with it.

2. We don't really know how the world works, but we have a model, and ESP doesn't happen to fit it very well.

3. ESP typically refers to some form of "clairvoyance," which doesn't accurately describe, say, the retro-PK experiments, and only vaguely describes Sheldrake's work. So rejection of clairvoyance experiments shouldn't be a reason to reject all the others.

4. Did you really mean to say "laws of physics and neurobiology"? That's pretty Newtonian-sounding, and itself a handwave past such weirdness as entanglement (when was *that* law passed... and *who* passed it?). As for neuro, we're learning new and weird things all the time. We observe a complex array of evidence and fit it into a model. Sometimes the model is predictive, and that's fine. But where we go wrong is in assuming the model *is* the system, and that anything not fitting the model is "incorrect" or in violation of the system (break these laws of physics and neurobiology at your peril!). Some things don't fit the model, and that's all I'm proposing---exploring some of these outliers to see whether they make sense vis-a-vis a new decoherence model. Of course, if we uncover cheating in any of the aforementioned experiments, we should have the guilty parties arrested and jailed in a dry cell.

5. Most scientists ask for the model first and the experiment later. I suggest this might not be the best way to uncover truth, or even flaws in the original model. I'm not a big fan of the crop circle business and don't plan to defend either side, but the issue seems to be poison to scientists: besides being scared to death of being characterized as "three or four sigmas out from the mean," most of them can't fit crop circles into any kind of prior model (other than college kids with boards and great math skills stamping down barley in the middle of the night. In fog.). So there seems to be a reluctance among scientists to study something when (a) there's no grant money available and (b) there's no prior model to fall back on.

Bottom line: science is very good at being conservative (which is fine), but not very good at discovering anything outside the model (which is bad).

RM

