> From: Dan Minette <[EMAIL PROTECTED]>

> 

> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On

> > > I realize that you think that, but it raises an obvious question.  What
> > > do you do when different studies produce different results?  How do you
> > > think the results of the studies should be weighed against each other?
> > >
> > 
> > First, who funds the respective studies?  
> 
> OK, I'll agree that studies funded by someone with a clear financial
> interest, such as cell phone companies funding a cell phone safety study,
> are suspect.  Studies funded by groups with political interests should also
> be taken with a grain of salt.
> 
> 
> >Second, which study has a larger correlation? (Isn't that the n value?)  
> 
> I'm not sure what you are getting at here.  If there isn't a correlation,
> then studies which show the largest correlations are the most wrong.  Maybe
> you are talking about ones with the smallest backgrounds against which to
> measure results.  Thus, looking at a subset of tumors that occur close to
> the cell phone's location would be a good idea.
> 

I think I meant the p-value.
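
To untangle the terms: n is the sample size, r is the correlation
coefficient, and the p-value is the probability of seeing a correlation at
least that large if there were really no effect.  A toy sketch in Python
(all numbers invented, scipy assumed available) of how the three interact:

# Same true correlation r, growing sample size n: the p-value shrinks.
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)

def correlated_sample(n, r):
    """Draw n (x, y) pairs whose true correlation is r."""
    cov = [[1.0, r], [r, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return x, y

for n in (20, 200, 2000):
    x, y = correlated_sample(n, r=0.1)
    r_hat, p = stats.pearsonr(x, y)
    print(f"n={n:5d}  r={r_hat:+.3f}  p={p:.4f}")

The point is that a tiny correlation can still earn a tiny p-value in a
big enough study, so a small p-value by itself doesn't mean the effect is
large.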

> >Third, size and time scale of the study.  
> 
> That's fine.
> 
> >Fourth, additional related studies that show similar / dissimilar
> >findings.
>  
> In the case of cell phones, there are a number of studies, by various
> groups, most of which do not show an effect.  Indeed, the variation among
> them is larger than what one expects from statistics alone.  Methodology
> comes into question.
> 
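
A standard way to ask whether studies disagree more than chance allows is
a heterogeneity test such as Cochran's Q.  A rough sketch with invented
effect estimates (these are not numbers from the actual cell phone
studies):

# Cochran's Q: do k study estimates scatter more than their
# sampling errors predict?  All numbers below are made up.
import numpy as np
from scipy import stats

effects = np.array([0.02, -0.05, 0.10, 0.65, -0.01])  # log odds ratios
se      = np.array([0.10,  0.08, 0.12, 0.15,  0.09])  # standard errors

w = 1.0 / se**2                        # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled)**2)  # Cochran's Q statistic
p = stats.chi2.sf(Q, df=len(effects) - 1)
print(f"pooled = {pooled:+.3f}  Q = {Q:.1f}  p = {p:.4f}")

A small p there says the spread between studies exceeds sampling error,
which points at methodology rather than chance.
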
> The Swedish study which reported a large effect involved self-selection
> because it was a mailed survey.  This opens up the possibility of
> generating a false positive.  Other long term studies did not have this
> methodology and did not report such a result.  From the FDA, we have:
> 
> <quote>
> Several studies have been recently published on the risk of long term cell
> phone use (> 10 years) and brain cancer. The results reported by Hardell
> et al. are not in agreement with results obtained in other long term
> studies. Also, the use of mailed questionnaire for exposure assessment and
> lack of adjustments for possible confounding factors makes the Hardell et
> al. study design significantly different from other studies. These facts
> along with the lack of an established mechanism of action and absence of
> supporting animal data make it difficult to interpret Hardell et al.
> findings.
> <end quote>
> 
> I've seen discussions of the problems with the methodology on websites from
> European researchers that confirm this, so it's not just the FDA.
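
To make the self-selection worry concrete, here is a toy simulation (every
rate in it is invented) of a mailed survey in which people who are both
heavy users and sick are the keenest to mail the form back.  The true
effect is zero, yet the respondents show an association:

# Response bias manufacturing a spurious odds ratio; rates are invented.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
heavy_user = rng.random(N) < 0.30   # 30% heavy users
case = rng.random(N) < 0.01         # 1% tumor rate, independent of use

# Sick heavy users reply most often; healthy people mostly toss the form.
p_reply = np.where(case & heavy_user, 0.80,
                   np.where(case, 0.50, 0.20))
replied = rng.random(N) < p_reply

def odds_ratio(user, cases):
    a = np.sum(user & cases);  b = np.sum(user & ~cases)
    c = np.sum(~user & cases); d = np.sum(~user & ~cases)
    return (a * d) / (b * c)

print(f"full population OR = {odds_ratio(heavy_user, case):.2f}")  # ~1.0
print(f"respondents-only OR = "
      f"{odds_ratio(heavy_user[replied], case[replied]):.2f}")     # inflated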
> 
> Another point is how directly one can relate the results of a study to the
> question at hand.  For example, simply because a rat fed an amount of
> substance X equal to many times its body weight develops problems doesn't
> mean that a human who eats 0.001 times their body weight of the same
> substance will also develop problems.  Or, we cannot assume, because blood
> cells on a slide exposed to a flux density of X coagulate, that a flux
> density of X/A in the brain will cause any problems.
> 
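
The dose extrapolation above is really just a ratio.  With placeholder
numbers (not from any real study), the gap is four orders of magnitude:

# Back-of-the-envelope dose scaling for the rat example; the
# multipliers are placeholders, not data from any real study.
rat_dose   = 10.0    # rat fed 10x its body weight of substance X
human_dose = 0.001   # human eats 0.001x their body weight of X

print(f"rat exposure is {rat_dose / human_dose:,.0f}x the human's, "
      f"per unit of body weight")

Unless the dose-response curve is known to be linear all the way down,
nothing forces the rat result to transfer.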

I remember there being rat studies that showed the same kind of effect
this study shows, only in rats.  Doesn't that strengthen the results of
this study?

Also, another study showed decreased sperm counts from cell phones carried
at the hip:
<http://scienceblogs.com/pharyngula/2006/04/the_effect_of_porn_on_male_fer.php>
