On Wed, 20 Apr 2016 12:54 am, Rustom Mody wrote:

> I wonder who the joke is on:
> 
> | A study comparing Canadian and Chinese students found that the latter
> | were better at complex maths

Most published studies are wrong.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/

- Has that study been replicated by others? Have people tried to 
  replicate it? Were negative findings published, or do they
  languish in some researcher's bottom drawer? (Publication bias
  is a big problem in research.)

- Was the study well-designed, and are its conclusions actually
  supported by the data? How well did it survive the critical
  attention of experts in that field? Did the study account for
  differences in mathematics education?

- Did the study have sufficient statistical power to support the
  claimed results? Most published studies are invalid simply because
  they lack the power to justify their conclusions. (See the sketch
  just after this list for what low power looks like in practice.)

- Is the effect due to chance? Remember, with a p-value of 0.05 (the
  so-called 95% significance level), one in twenty experiments will
  give a positive result just by chance. A p-value of 0.05 does not
  mean "these results are proven"; it just means "if every single
  thing about this experiment is perfect, and there is actually no
  real effect, then results at least this extreme would still turn up
  about one time in twenty".
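
To make the power point concrete, here is a back-of-the-envelope
simulation. It is entirely my own invention -- the five point "true"
difference, the standard deviation of 15, and the ten students per
group are all made-up numbers, nothing to do with the study in
question. The effect is real, yet most of these toy experiments fail
to find it:

import math
import random

def two_sided_p(z):
    # Two-sided p-value for a z statistic (normal approximation).
    return 2*(1 - 0.5*(1 + math.erf(abs(z)/math.sqrt(2))))

def one_experiment(n=10, true_diff=5.0, sd=15.0):
    # Simulate one small study and return its p-value, treating the
    # standard deviation as known (a plain z-test, for simplicity).
    a = [random.gauss(50.0, sd) for _ in range(n)]
    b = [random.gauss(50.0 + true_diff, sd) for _ in range(n)]
    z = (sum(b)/n - sum(a)/n) / (sd*math.sqrt(2.0/n))
    return two_sided_p(z)

trials = 10000
hits = sum(one_experiment() < 0.05 for _ in range(trials))
print("power is roughly {:.0%}".format(hits/trials))
# prints something like 12%: the effect is real, but nearly nine
# times out of ten this under-powered study misses it.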

Anyone who has played (say) Dungeons and Dragons, or other role-playing
games, will know that events with a probability of 1 in 20 occur very
frequently. To be precise, they occur one time in twenty.
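
In case anyone wants to check the dice arithmetic, here is a toy
simulation (again my own, not from any study). Getting a
"significant" result at p < 0.05 from an experiment with no real
effect is exactly as likely as rolling a natural 20, and in a batch
of twenty rolls -- twenty experiments on a non-existent effect -- you
will see at least one more often than not:

import random

def natural_20s(rolls=20):
    # How many natural 20s turn up in a batch of d20 rolls?
    return sum(random.randint(1, 20) == 20 for _ in range(rolls))

batches = 100000
at_least_one = sum(natural_20s() >= 1 for _ in range(batches))
print("at least one 'hit' in 20 tries: {:.0%}".format(
    at_least_one/batches))
# prints about 64%: run twenty null experiments and the odds are
# better than even that one of them looks "significant".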

Even if the claimed results are correct, how strong is the effect?

(a) On average, Canadian students get 49.0% on a standard exam that Chinese
students get 89.0% for.

(b) On average, Canadian students get 49.0% on a standard exam that Chinese
students get 49.1% for.

The level of statistical significance is not related to the strength of the
effect: we can be very confident of small effects, and weakly confident of
large effects.
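
To see how scenario (b) could still be reported as "highly
statistically significant", here is a rough sketch. The standard
deviation of 15 percentage points is a figure I have invented purely
for illustration; the point is only that the p-value depends on the
sample size as much as on the size of the gap:

import math

def two_sided_p(diff, sd, n):
    # p-value for an observed difference in means between two groups
    # of n students each, treating sd as known (plain z-test).
    z = diff / (sd*math.sqrt(2.0/n))
    return 2*(1 - 0.5*(1 + math.erf(abs(z)/math.sqrt(2))))

for n in (100, 1000, 500000):
    print(n, two_sided_p(0.1, 15.0, n))

# 100     -> p ~ 0.96    a 0.1 point gap is statistically invisible
# 1000    -> p ~ 0.88
# 500000  -> p ~ 0.0009  the same trivial gap is now "significant"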



-- 
Steven
