Hi again,

On Tue, Feb 10, 2026 at 8:34 AM Matthew Brett <[email protected]> wrote:
>
> Hi,
>
> On Tue, Feb 10, 2026 at 1:04 AM Robert Kern via NumPy-Discussion
> <[email protected]> wrote:
> >
> > On Mon, Feb 9, 2026 at 5:02 PM Ralf Gommers via NumPy-Discussion
> > <[email protected]> wrote:
> >>
> >>
> >> This also presumes that you, or we, are able to determine what usage of AI
> >> tools helps or hinders learning. That is not possible at the level of
> >> individuals: people can learn in very different ways, plus it will
> >> strongly depend on how the tools are used. And even in the aggregate it's
> >> not practically possible: most of the studies that have been referenced in
> >> this and linked thread (a) are one-offs, and often inconsistent with each
> >> other, and (b) already outdated, given how fast the field is developing.
> >
> >
> > On this point, I commend to everyone the writing and research of Dr Cat
> > Hicks, a psychological scientist studying software teams and tech. One of
> > the things I've noticed from her reading these papers in public is that the
> > studies are typically (a) not designed by learning scientists, (b)
> > uninformed by the basic phenomena of learning science (thus misattributing
> > effects as novel or using mismatched instruments), and (c) underpowered.
> > This is an emerging object of study. Each paper alone isn't going to
> > establish anything about "AI"; they are adding to a body of knowledge that
> > might some day, but after a lot of missteps while we work out the right way
> > to measure these effects. Each one is interesting, but rarely is the
> > headline-ification of the results going to hold water.
> >
> > https://www.fightforthehuman.com/cognitive-helmets-for-the-ai-bicycle-part-1/
>
> Oh sure - the limitations of those studies Stefan and I were quoting
> were pretty obvious to me - my training is in (medicine and)
> psychology. My point in quoting them is not to say they are
> definitive on the overall effect of AI, but to point out that "this
> feels much easier and more productive" is not at all the same thing as
> "this is saving me significant time and helping me think and learn,
> while maintaining quality". And that we ought to care about the
> latter, not the former.
It also seems worth pointing out that:

a) Generally (this is rather complicated, can say more if it's
interesting), low power makes it *harder* to find results that reach
statistical significance. So it is striking that the Anthropic study
did find a statistically significant learning deficit from AI use.

b) In the Anthropic study, the learning-loss effect of AI was large -
17% - so it is certainly worth further investigation.

c) Wouldn't a sensible person worry that asking a machine to do your
thinking for you would result in learning loss?

So - yes - much more work needs doing, but even the current results
raise serious questions that need to be addressed.

Cheers,

Matthew
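P.S. To make the power point in (a) concrete, here is a rough
back-of-envelope sketch in Python using statsmodels. The numbers are
made up for illustration - Cohen's d = 0.5 and n = 20 per group are my
hypothetical choices, not the Anthropic study's actual parameters:

    # Sketch: sample size / power trade-off for a two-sample t-test.
    # All numbers below are hypothetical, for illustration only.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Sample size per group needed to detect a medium effect
    # (Cohen's d = 0.5) at 80% power, alpha = 0.05.
    n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
    print(f"n per group for 80% power: {n:.0f}")  # ~64

    # Power of a study with only 20 participants per group for the
    # same effect size - well below the conventional 80%.
    p = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
    print(f"power with n=20 per group: {p:.2f}")  # ~0.34

The point being: when an underpowered study nonetheless reaches
significance, the true effect is probably large (or the estimate is
inflated by a lucky draw) - either way, it deserves a closer look.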
