The PR is about the advice that the Carpentries will give to learners at
Carpentries workshops, not advice for every single person in every single
context. Someone without the social support of a Carpentries workshop might
get a lot of value out of an LLM, but a workshop is a different scenario.

In the Carpentries, our current actively taught curricula are designed for
complete novices. Carpentries workshops are delivered by a team of at least
two instructors, at least one of whom has pedagogical training in
effective, evidence-based teaching practices. Carpentries workshops have
helpers in the room to keep a favorable ratio of complete novices to
competent practitioners. Carpentries workshops have frequent formative
assessment and time for learners to distill what they have
learned. Carpentries workshops often include think-pair-share time or other
human-paired exercises.

In this particular context, I think that an LLM is a strictly worse
resource than the people in the room; its output is literally random, and
writing effective prompts is *another layer* of things to learn. In a
Carpentries workshop, an LLM would increase cognitive load, allow learners
to "get an answer" without thinking about what is important, potentially
reinforce common misconceptions, or, worst of all, produce absurdly buggy
code (like the cases where LLMs try to use private methods of libraries)
that these learners lack any relevant experience to even begin breaking
down. LLMs will also, in general, output deprecated and security-flawed
code at a much higher rate than even a novice relying on a Carpentries
lesson, human instructors, and up-to-date library documentation.
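
To make the "private methods" point concrete, here is a minimal,
hypothetical sketch (pandas chosen purely as an illustration; the fragile
line is the kind of thing an assistant might suggest, not something from
our lessons):

    import pandas as pd

    df = pd.DataFrame({"name": ["a", "b"], "score": [1.0, 2.0]})

    # A suggestion leaning on a private, underscore-prefixed method can
    # break without warning when the library changes its internals:
    # numeric = df._get_numeric_data()

    # The documented, supported way to do the same thing:
    numeric = df.select_dtypes(include="number")
    print(numeric)

A complete novice has no way to recognise which of those two approaches is
the fragile one, let alone why.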

Having a separate lesson, taught like a bonus module* after the workshop,
that takes someone with the basic skills and a bit of practice and shows
them how to adapt what they have been doing to go faster with an AI
assistant might be a good idea and a good lesson, but that does not mean
someone with no experience should start there.

*We had instructor training bonus modules for a while; they were ~1.5(?)-hour
trainings for continuing professional development.

*Sarah M Brown, PhD*
sarahmbrown.org
Assistant Professor of Computer Science
University of Rhode Island



On Fri, Mar 21, 2025 at 2:35 AM Paola Di Maio <[email protected]>
wrote:

> Okay, so we need to learn and teach how to code.
> How do we go about it? It depends: whatever means are at your disposal.
>
> You can go to a code camp, find a free online course, etc.
>
> For me, ChatGPT and the other tools are like 'teachers': you may be able to
> ask questions and get some answers.
>
> When I first learned how to code, it must have been Pascal. Honestly, I had
> a lousy human teacher who could not answer most of my questions anyway. He
> thought I was a pain in the neck because I asked things that were not in
> the lesson plan. He was there to walk me through some textbook and
> exercises and then give me a mark.
>
> How many people are put off learning to code because their teachers are not
> really 'good teachers', or maybe have limited time and patience to deal
> with difficult students?
>
> Given great teachers - there are many around, for sure - with abundant time
> and patience, of course it would be great to learn from humans. But given
> their limited availability, we can learn through books and online courses.
> We are lucky there are plenty of excellent, first-class free resources,
> including The Carpentries lessons. Actually, when I first got onto the
> Carpentries I was told that they were not written for beginners: they
> presumed some familiarity with coding in the respective languages, and were
> teaching specific tasks.
>
> Even what human teachers say may need critical reading at times, because
> what they teach is what they think/know/believe/have experience of.
>
> So we must learn how to critically evaluate what our teachers, human or
> otherwise, teach us anyway.
> Coding is language. AI code generators are just another source of learning.
> These days we learn subjects from online sources. We had better keep up
> with the evolving learning environments and methods.
>
> Thanks for the valuable opportunity to exchange!
>
> PDM
>
>
> On Fri, Mar 21, 2025 at 8:45 AM Adam Obeng <[email protected]> wrote:
>
>> Thank you to the other lurkers for inspiring this lurker to also
>> participate.
>>
>> I think the point is very well made that someone who can already code uses
>> GenAI differently from someone who can't, so we can't necessarily endorse
>> folks taking the existing lessons with GenAI tools.
>>
>> But which analogy is right: Is using GenAI for coding without knowing how
>> to code like using a calculator without knowing how to count? Or is it like
>> using Python without knowing C?
>>
>>
>> On Thu, Mar 20, 2025, at 1:44 PM, Federica Gazzelloni wrote:
>>
>> I am genuinely excited to be living in these times, witnessing the
>> advancements in technology that are reshaping the way we work and learn.
>>
>> Comparing this era to when I was in school, it’s fascinating—and perhaps
>> a little intimidating—how these tools now empower us to achieve results
>> faster than ever before, while simultaneously elevating our roles to
>> expert, managerial levels.
>> While it’s true that the new generations won’t experience the
>> labor-intensive learning processes of the past, this isn’t necessarily a
>> disadvantage.
>>
>> The removal of some of the more tedious elements of learning allows for a
>> deeper focus on understanding, critical evaluation, and mastery.
>>
>> In fact, the expertise required to trust and verify the output of AI
>> tools demands even greater intellectual engagement.
>>
>>  This shift doesn’t diminish learning; it enriches it.
>>
>> Consider a reliable assistant always at hand—one that delivers tirelessly
>> without fear of failure while we focus our efforts on assessing,
>> correcting, and optimizing.
>>
>> The act of working alongside AI pushes us to expand our knowledge of the
>> subject matter and its broader context, ultimately enhancing our learning
>> journey.
>> Rather than replacing foundational learning, AI encourages us to think
>> critically, explore new approaches, and refine our expertise in ways that
>> were previously unimaginable. It’s this partnership with technology that
>> makes learning not only more efficient but also more dynamic and
>> forward-thinking.
>>
>>
>> Best,
>> Federica
>>
>>
>> On Thu, 20 Mar 2025 at 19:55, Hao Ye <[email protected]> wrote:
>>
>> On Thu, Mar 20, 2025 at 1:07 PM Sarah Brown <[email protected]> wrote:
>>
>>
>> I think the single most important thing to think about in applying "how
>> to use AI" advice to this context is the expert awareness gap (or blind
>> spot, in the broader literature). If you **already knew how to program**
>> before ChatGPT came out, then your experience using these tools is
>> irrelevant to our target learners. You are using them with the knowledge of
>> programming you had before you worked with an LLM. The crux of the issue is
>> that people cannot really test their knowledge of the underlying concepts,
>> which they still need to know when working with AI assistance, unless they
>> write some code on their own.
>>
>>
>> THANK YOU
>>
>> Now I can delete the message I've been churning around in my drafts
>> folder. :)
>>
>> Best,
>> --
>> Hao Ye
>> (he/him/his)
>> [email protected]
>>
>>
>> On Thu, Mar 20, 2025 at 1:07 PM Sarah Brown <[email protected]> wrote:
>>
>> I have also been following this "with one eye".
>>
>> I think the single most important thing to think about in applying "how
>> to use AI" advice to this context is the expert awareness gap (or blind
>> spot, in the broader literature). If you **already knew how to program**
>> before ChatGPT came out, then your experience using these tools is
>> irrelevant to our target learners. You are using them with the knowledge of
>> programming you had before you worked with an LLM. The crux of the issue is
>> that people cannot really test their knowledge of the underlying concepts,
>> which they still need to know when working with AI assistance, unless they
>> write some code on their own.
>>
>> We have had calculators for a long time, but it has remained essential
>> that children learn the *concept* of adding and subtracting and relate it
>> to combining things and taking them away, typically through counting. As
>> we teach in instructor training, the goal of the Carpentries is to help
>> learners get a good mental model so they can learn more independently
>> later. If they use the AI right away, they are deprived of the chance to
>> build the initial mental model.
>>
>> Someone shared in a Carpentries Slack channel a while ago a post about an
>> art class using paper before digital tools, because digital tools help you
>> go faster, but learning is necessarily slow.
>>
>> I feel strongly that it would be in opposition to our goal of applying
>> evidence-based teaching practices to encourage the use of AI from the
>> beginning.
>>
>> *Sarah M Brown, PhD*
>> sarahmbrown.org
>> Assistant Professor of Computer Science
>> University of Rhode Island
>>
>>
>>
>> On Wed, Mar 19, 2025 at 6:20 AM Jannetta Steyn via discuss <
>> [email protected]> wrote:
>>
>> Thank you Anelda for the link to that post. It did make me realise that
>> the one thing I didn't mention in my post was that, apart from testing, one
>> also has to make sure the code is still readable and maintainable. I did
>> say in my previous post that it needs to get the job done "in the right
>> way", which implies that, but it is probably worth stating it explicitly.
>>
>> Jannetta
>>
>> *Dr. Jannetta Steyn*
>> *Training Lead *
>> *Senior Research Software Engineer*
>> The Catalyst
>> Newcastle University
>> 3 Science Square
>> Newcastle Helix
>> Newcastle upon Tyne
>> NE4 5TG
>> ORCID: 0000-0002-0231-9897
>> RSE Team: https://rse.ncldata.dev
>> Personal website: http://jannetta.com
>>
>>
>>
>> Book time to meet with me
>> <https://outlook.office.com/bookwithme/user/[email protected]?anonymous&ep=bwmEmailSignature>
>>
>>
>> ------------------------------
>>
>> *From:* Anelda Van der Walt <[email protected]>
>> *Sent:* 19 March 2025 09:57
>> *To:* discuss <[email protected]>
>> *Subject:* Re: [cp-discuss] Feedback Request: Lesson Updates on
>> Generative AI
>>
>> Hi all,
>>
>> I've also been following this conversation with one eye, being someone
>> who uses ChatGPT all the time for coding-related questions because I don't
>> code often enough and forget some basics/struggle to debug ridiculous error
>> messages.
>>
>> By chance, I'm subscribed to a newsletter which included a blog post
>> about this exact topic today -
>> https://simplybegin.co.uk/skip-ai-tools-and-learn-for-yourself/. Might
>> be of interest.
>>
>> Kind regards,
>>
>> Anelda
>>
>> On Wed, Mar 19, 2025 at 11:54 AM Jannetta Steyn via discuss <
>> [email protected]> wrote:
>>
>> Hi Everyone
>>
>> I've been following the conversation "with one eye" while busy with a
>> shed load of other things so I hope my comment is not completely off track.
>>
>> One word I have not noticed (I might have missed it) is "testing". I
>> don't really think it matters what tools one uses, as long as it gets the
>> job done the right way. But the only way to prove that the actual job is
>> being done is through thorough testing. If you blindly believe AI tools,
>> which is what really worries me, you are in for a world of trouble. Just a
>> couple of weeks ago I witnessed an RA excitedly showing how she got Copilot
>> to write code for her and enthusiastically telling everyone that they don't
>> even need to learn to code because Copilot will do it for you.
>>
>> One of my favourite talks is by Prof Brian Randell:
>> https://www.youtube.com/watch?v=PSULNuNP29M. I think it is worth the
>> watch.
>>
>> One thing that tools like ChatGPT are good for is explaining existing
>> code, especially for people still learning and trying to figure out what
>> some existing code does.
>>
>> Jannetta
>>
>> *Dr. Jannetta Steyn*
>> *Training Lead *
>> *Senior Research Software Engineer*
>> The Catalyst
>> Newcastle University
>> 3 Science Square
>> Newcastle Helix
>> Newcastle upon Tyne
>> NE4 5TG
>> ORCID: 0000-0002-0231-9897
>> RSE Team: https://rse.ncldata.dev
>> Personal website: http://jannetta.com
>>
>>
>>
>> Book time to meet with me
>> <https://outlook.office.com/bookwithme/user/[email protected]?anonymous&ep=bwmEmailSignature>
>>
>>
>> ------------------------------
>>
>> *From:* Toby Hodges via discuss <[email protected]>
>> *Sent:* 19 March 2025 09:38
>> *To:* discuss <[email protected]>
>> *Subject:* Re: [cp-discuss] Feedback Request: Lesson Updates on
>> Generative AI
>>
>>
>> Thanks everyone. Responding to a couple of specific points/questions:
>>
>> @Lex wrote
>>
>> This point
>>
>> 1. For most problems you will encounter at this stage, help and answers
>> can be found among the first results returned by searching the internet.
>>
>>
>> is to me not very helpful, as I think it takes more time and can be more
>> frustrating to sift through the search results, while asking a chatbot is
>> just as helpful for a fraction of the effort.
>>
>>
>> I intended for this point to be read in the context of the preceding
>> paragraph about (some of) the ethical concerns with LLMs. The implication is
>> that search results can be similarly helpful, at a considerably lower cost.
>> I could make the implicit explicit by writing something like
>>
>> “Although it might take you slightly longer to find them, the answers
>> available online will have been provided directly and willingly by a human,
>> at a fraction of the environmental cost of getting equivalent help from a
>> genAI tool.”
>>
>> I chose to keep things vague, 1. for brevity, 2. for simplicity, and 3.
>> because opinions vary considerably among the Instructor community on
>> whether or not such concerns are a “dealbreaker” for the routine use of
>> genAI. (See also the Instructor Note at the beginning of the section.)
>>
>> @Somebody (sorry I cannot see who!) wrote
>>
>>
>> I wonder if more could be said about *how* to demonstrate the use of
>> LLMs. All the bad things people do with LLMs (and Stack Overflow) are
>> opportunities to demonstrate a better way.
>>
>> So we could show getting some code from an LLM, and then the steps of
>> examining variables and understanding their types, inserting debugging
>> "print" statements, looking up documentation, considering alternative
>> solutions, and explaining our thought process.
>>
>>
>> If we want to cover this, it needs to be in a separate lesson or as an
>> almost total rewrite of existing materials IMO. Delving into this in detail
>> would be too time-consuming during a workshop otherwise, at the cost of all
>> the other important things we want to teach people.
>>
>> I hope that next week’s community sessions (Tuesday 25 March, 12:00 and
>> 21:00 UTC! Sign up on the etherpad!
>> <https://pad.carpentries.org/community-sessions-2025>) will be an
>> opportunity for some Instructors to describe and maybe demonstrate how they
>> have been teaching exactly this.
>>
>> Thanks again,
>>
>> Toby
>>
>> On 19. Mar 2025, at 01:58, Allen Lee <[email protected]> wrote:
>>
>> Lots of great discussion here, and glad to see the community engagement
>> around this important topic. I posted some comments on the GitHub PR but
>> wanted to re-share these two links here as I think they are worth the time
>> (sorry, the video is 2hrs 🫠!)
>>
>> https://garymarcus.substack.com/p/decoding-and-debunking-hard-forks
>>
>> https://www.youtube.com/watch?v=EWvNQjAaOHw
>>
>> Cheers,
>>
>>
>>
>>
>> --
>> *Allen Lee*
>> Senior Global Futures Scientist
>> School of Complex Adaptive Systems
>> *Arizona State University*
>>
>> Mail Code: 2701
>> Tempe, AZ 85287-2701
>> *p: *480-727-4646
>> *email: *[email protected]
>>
>> *git: *https://github.com/alee
>> *orcid: *https://orcid.org/0000-0002-6523-6079
>> Center for Behavior, Institutions, and the Environment
>> <https://complexity.asu.edu/cbie>
>> Network for Computational Modeling in the Social and Ecological Sciences
>> <https://www.comses.net/>
>>
>>
>>
>>
>>
>> On Tue, Mar 18, 2025 at 5:40 PM Paola Di Maio <[email protected]>
>> wrote:
>>
> I do teach prompt engineering, and would encourage a Carpentries
> module/course/lesson.
> The question really is learning how to use AI intelligently and critically.
>
> Materials on how to use AI for learning to code are already plentiful, but
> how to work with specific platforms and packages may need refining or
> require a more specialised LLM.
> In theory, the LLM can learn from your interaction, so plenty of scope
>> for Carpentries Instructors to teach the LLM how to code as well :-)
>>
>>
>> On Wed, Mar 19, 2025 at 8:26 AM Reed A. Cartwright <[email protected]>
>> wrote:
>>
>> My experience with using AI for coding is that if you are not asking it
>> questions from an algorithms class or similar, you get a lot of
>> hallucinations that do not work, e.g. R packages that don't exist. You can
>> ask it follow-up questions and it will eventually fix the issues, but that
>> requires having a firm mental model and the ability to read code and know
>> how it would work in practice.
>> I can see the utility of a Prompt Engineering Carpentries lesson, but I
>> have no idea how to properly teach prompt engineering.
>>
>> --
>> Reed A. Cartwright, PhD
>> Associate Professor of Genomics, Evolution, and Bioinformatics
>> School of Life Sciences and The Biodesign Institute
>> Arizona State University
>> Address: The Biodesign Institute, PO Box 876401, Tempe, AZ 85287-6401 USA
>> Packages: The Biodesign Institute, 1001 S. McAllister Ave, Tempe, AZ
>> 85287-6401 USA
>> Office: Biodesign B-220C, 1-480-965-9949
>> Website: http://cartwrig.ht/
>>
>>
>> On Tue, Mar 18, 2025 at 5:04 PM Paul Harrison via discuss <
>> [email protected]> wrote:
>>
>>
>>
>> I wonder if more could be said about *how* to demonstrate the use of
>> LLMs. All the bad things people do with LLMs (and Stack Overflow) are
>> opportunities to demonstrate a better way.
>>
>> So we could show getting some code from an LLM, and then the steps of
>> examining variables and understanding their types, inserting debugging
>> "print" statements, looking up documentation, considering alternative
>> solutions, and explaining our thought process.
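>>
>> For example (purely a hypothetical sketch with made-up file and column
>> names, not something from the lessons), after pasting in a suggested
>> snippet the demonstration might step through it like this:
>>
>>     import pandas as pd
>>
>>     surveys = pd.read_csv("surveys.csv")  # hypothetical data file
>>
>>     # What did we actually get back, and what type is it?
>>     print(type(surveys))
>>     print(surveys.head())
>>
>>     # Inspect one column before trusting the suggested calculation
>>     print(surveys["weight"].dtype)
>>     print(surveys["weight"].describe())
>>
>>     # Check the documentation for the function the LLM suggested
>>     help(pd.DataFrame.groupby)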
>>
>> It's not so different from the skills needed to read other people's code.
>>
>> Since LLM output is random it's hard to script this fully, but that also
>> seems in keeping with the Carpentries workshop format.
>>
>>
>>
>> --
>>
>> Dr Toby Hodges (he/him)
>> Director of Curriculum
>> The Carpentries | https://carpentries.org
>>
>> Schedule a meeting with me: https://calendly.com/tobyhodges
>>
>>

------------------------------------------
This list is for the purpose of general discussion about The Carpentries 
including community activities, upcoming events, and announcements.  Some other 
lists you may also be interested in include discuss-hpc, discuss-r, and  our 
local groups. Visit https://carpentries.topicbox.com/groups/ to learn more. All 
activity on this and other Carpentries spaces should abide by The Carpentries 
Code of Conduct found here: 
https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html

The Carpentries: discuss
Permalink: 
https://carpentries.topicbox.com/groups/discuss/Tdbbca9597a63abaa-Me402af6d0a2456537d485466
Delivery options: https://carpentries.topicbox.com/groups/discuss/subscription
