I am genuinely excited to be living in these times, witnessing the
advancements in technology that are reshaping the way we work and learn.

Compared to when I was in school, it is fascinating, and perhaps a little
intimidating, how these tools now empower us to achieve results faster than
ever before while simultaneously elevating our roles to expert, managerial
levels.
While it’s true that the new generations won’t experience the
labor-intensive learning processes of the past, this isn’t necessarily a
disadvantage.

The removal of some of the more tedious elements of learning allows for a
deeper focus on understanding, critical evaluation, and mastery.

In fact, the expertise required to trust and verify the output of AI tools
demands even greater intellectual engagement.

 This shift doesn’t diminish learning; it enriches it.

Consider a reliable assistant always at hand, one that works tirelessly and
without fear of failure while we focus our efforts on assessing, correcting,
and optimizing its output.

The act of working alongside AI pushes us to expand our knowledge of the
subject matter and its broader context, ultimately enhancing our learning
journey.
Rather than replacing foundational learning, AI encourages us to think
critically, explore new approaches, and refine our expertise in ways that
were previously unimaginable. It’s this partnership with technology that
makes learning not only more efficient but also more dynamic and
forward-thinking.


Best,
Federica


On Thu, 20 Mar 2025 at 19:55, Hao Ye <[email protected]> wrote:

> On Thu, Mar 20, 2025 at 1:07 PM Sarah Brown <[email protected]> wrote:
>
>
>> I think the single most important thing to consider when applying "how
>> to use AI" advice in this context is the expert awareness gap (or blind
>> spot, in the broader literature). If you **already knew how to program**
>> before ChatGPT came out, then your experience of using these tools is not
>> relevant to our target learners: you are using them with the knowledge of
>> programming you had before you ever worked with an LLM. The crux of the
>> issue is that people cannot really test their knowledge of the underlying
>> concepts they still need when working with AI assistance unless they
>> write some code on their own.
>
>
> THANK YOU
>
> Now I can delete the message I've been churning around in my drafts
> folder. :)
>
> Best,
> --
> Hao Ye
> (he/him/his)
> [email protected]
>
>
> On Thu, Mar 20, 2025 at 1:07 PM Sarah Brown <[email protected]> wrote:
>
>> I have also been following this "with one eye".
>>
>> I think the single most important thing to consider when applying "how
>> to use AI" advice in this context is the expert awareness gap (or blind
>> spot, in the broader literature). If you **already knew how to program**
>> before ChatGPT came out, then your experience of using these tools is not
>> relevant to our target learners: you are using them with the knowledge of
>> programming you had before you ever worked with an LLM. The crux of the
>> issue is that people cannot really test their knowledge of the underlying
>> concepts they still need when working with AI assistance unless they
>> write some code on their own.
>>
>> We have had calculators for a long time, but it has remained essential
>> that children learn the *concept* of adding and subtracting and relate it
>> to combining things and taking them away, typically through counting. As
>> we teach in Instructor Training, the goal of The Carpentries is to help
>> learners build a good mental model so they can learn more independently
>> later. If they use AI right away, they are deprived of the chance to
>> build that initial mental model.
>>
>> Someone shared in a Carpentries Slack channel a while ago a post about an
>> art class that uses paper before digital tools, because digital tools
>> help you go faster, but learning is necessarily slow.
>>
>> I feel strongly that encouraging the use of AI from the beginning would
>> run counter to our goal of applying evidence-based teaching practices.
>>
>> *Sarah M Brown, PhD*
>> sarahmbrown.org
>> Assistant Professor of Computer Science
>> University of Rhode Island
>>
>>
>>
>> On Wed, Mar 19, 2025 at 6:20 AM Jannetta Steyn via discuss <
>> [email protected]> wrote:
>>
>>> Thank you Anelda for the link to that post. It did make me realise that
>>> the one thing I didn't mention in my post was that, apart from testing,
>>> one also has to make sure the code is still readable and maintainable. I
>>> did say in my previous post that it needs to get the job done "in the
>>> right way", which implies that, but it is probably worth stating it
>>> explicitly.
>>>
>>> Jannetta
>>>
>>> *Dr. Jannetta Steyn*
>>> *Training Lead *
>>> *Senior Research Software Engineer*
>>> The Catalyst
>>> Newcastle University
>>> 3 Science Square
>>> Newcastle Helix
>>> Newcastle upon Tyne
>>> NE4 5TG
>>> ORCID: 0000-0002-0231-9897
>>> RSE Team: https://rse.ncldata.dev
>>> Personal website: http://jannetta.com
>>>
>>>
>>> Book time to meet with me:
>>> <https://outlook.office.com/bookwithme/user/[email protected]?anonymous&ep=bwmEmailSignature>
>>> ------------------------------
>>> *From:* Anelda Van der Walt <[email protected]>
>>> *Sent:* 19 March 2025 09:57
>>> *To:* discuss <[email protected]>
>>> *Subject:* Re: [cp-discuss] Feedback Request: Lesson Updates on
>>> Generative AI
>>>
>>> Hi all,
>>>
>>> I've also been following this conversation with one eye, being someone
>>> who uses ChatGPT all the time for coding-related questions because I don't
>>> code often enough and forget some basics/struggle to debug ridiculous error
>>> messages.
>>>
>>> By chance, I'm subscribed to a newsletter which included a blog post
>>> about this exact topic today -
>>> https://simplybegin.co.uk/skip-ai-tools-and-learn-for-yourself/. Might
>>> be of interest.
>>>
>>> Kind regards,
>>>
>>> Anelda
>>>
>>> On Wed, Mar 19, 2025 at 11:54 AM Jannetta Steyn via discuss <
>>> [email protected]> wrote:
>>>
>>> Hi Everyone
>>>
>>> I've been following the conversation "with one eye" while busy with a
>>> shed load of other things so I hope my comment is not completely off track.
>>>
>>> One word I have not noticed (I might have missed it) is "testing". I
>>> don't really think it matters what tools one uses, as long as the job gets
>>> done in the right way. But the only way to prove that the job actually is
>>> being done is through thorough testing. If you blindly believe AI tools,
>>> which is what really worries me, you are in for a world of trouble. Just a
>>> couple of weeks ago I witnessed an RA excitedly showing how she got Copilot
>>> to write code for her and enthusiastically telling everyone that they don't
>>> even need to learn to code because Copilot will do it for them.
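>>>
>>> As a minimal sketch of what such testing might look like in Python (the
>>> function and the numbers below are invented for illustration, not taken
>>> from any real Copilot output), assuming pytest is available:
>>>
>>>     import pytest
>>>
>>>     # Code as an AI assistant might draft it: plausible, but unproven.
>>>     def mean_expression(values):
>>>         if not values:
>>>             raise ValueError("expected at least one value")
>>>         return sum(values) / len(values)
>>>
>>>     # Tests written by the human are what prove the job is actually done.
>>>     def test_typical_values():
>>>         assert mean_expression([2.0, 4.0, 6.0]) == pytest.approx(4.0)
>>>
>>>     def test_empty_input_is_rejected():
>>>         # the kind of edge case that blind trust in the tool tends to miss
>>>         with pytest.raises(ValueError):
>>>             mean_expression([])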
>>>
>>> One of my favourite talks is by Prof Brian Randell:
>>> https://www.youtube.com/watch?v=PSULNuNP29M. I think it is worth the
>>> watch.
>>>
>>> One thing that tools like ChatGPT are good for is explaining existing
>>> code, especially for people who are still learning and trying to figure
>>> out what some existing code does.
>>>
>>> Jannetta
>>>
>>> *Dr. Jannetta Steyn*
>>> *Training Lead *
>>> *Senior Research Software Engineer*
>>> The Catalyst
>>> Newcastle University
>>> 3 Science Square
>>> Newcastle Helix
>>> Newcastle upon Tyne
>>> NE4 5TG
>>> ORCID: 0000-0002-0231-9897
>>> RSE Team: https://rse.ncldata.dev
>>> Personal website: http://jannetta.com
>>>
>>>
>>> Book time to meet with me:
>>> <https://outlook.office.com/bookwithme/user/[email protected]?anonymous&ep=bwmEmailSignature>
>>> ------------------------------
>>> *From:* Toby Hodges via discuss <[email protected]>
>>> *Sent:* 19 March 2025 09:38
>>> *To:* discuss <[email protected]>
>>> *Subject:* Re: [cp-discuss] Feedback Request: Lesson Updates on
>>> Generative AI
>>>
>>>
>>> Thanks everyone. Responding to a couple of specific points/questions:
>>>
>>> @Lex wrote
>>>
>>> This point
>>>
>>> 1. For most problems you will encounter at this stage, help and answers
>>> can be found among the first results returned by searching the internet.
>>>
>>>
>>> is to me not very helpful, as I think it takes longer and can be more
>>> frustrating to sift through the search results, while asking a chatbot
>>> is just as helpful for a fraction of the effort.
>>>
>>>
>>> I intended for this point to be read in the context of the preceding
>>> paragraph about (some of) the ethical concerns with LLMs, the implication
>>> being that search results can be similarly helpful at a considerably lower
>>> cost. I could make the implicit explicit by writing something like
>>>
>>> “Although it might take you slightly longer to find them, the answers
>>> available online will have been provided directly and willingly by a human,
>>> at a fraction of the environmental cost of getting equivalent help from a
>>> genAI tool.”
>>>
>>> I chose to keep things vague, 1. for brevity, 2. for simplicity, and 3.
>>> because opinions vary considerably among the Instructor community on
>>> whether or not such concerns are a “dealbreaker” for the routine use of
>>> genAI. (See also the Instructor Note at the beginning of the section.)
>>>
>>> @Somebody (sorry I cannot see who!) wrote
>>>
>>> I wonder if more could be said about *how* to demonstrate the use of
>>> LLMs. All the bad things people do with LLMs (and Stack Overflow) are
>>> opportunities to demonstrate a better way.
>>>
>>> So we could show getting some code from an LLM, and then the steps of
>>> examining variables and understanding their types, inserting debugging
>>> "print" statements, looking up documentation, considering alternative
>>> solutions, and explaining our thought process.
>>>
>>>
>>> If we want to cover this, it needs to be in a separate lesson or as an
>>> almost total rewrite of existing materials IMO. Delving into this in detail
>>> would be too time consuming during a workshop otherwise, at the cost of all
>>> the other important things we want to teach people.
>>>
>>> I hope that next week’s community sessions (Tuesday 25 March, 12:00 and
>>> 21:00 UTC! Sign up on the etherpad!
>>> <https://pad.carpentries.org/community-sessions-2025>) will be an
>>> opportunity for some Instructors to describe and maybe demonstrate how they
>>> have been teaching exactly this.
>>>
>>> Thanks again,
>>>
>>> Toby
>>>
>>> On 19. Mar 2025, at 01:58, Allen Lee <[email protected]> wrote:
>>>
>>> Lots of great discussion here, and glad to see the community engagement
>>> around this important topic. I posted some comments on the GitHub PR but
>>> wanted to re-share these two links here as I think they are worth the time
>>> (sorry, the video is 2hrs 🫠!)
>>>
>>> https://garymarcus.substack.com/p/decoding-and-debunking-hard-forks
>>>
>>> https://www.youtube.com/watch?v=EWvNQjAaOHw
>>>
>>> Cheers,
>>>
>>> --
>>> *Allen Lee*
>>> Senior Global Futures Scientist
>>> School of Complex Adaptive Systems
>>> Arizona State University
>>> Mail Code: 2701
>>> Tempe, AZ 85287-2701
>>> *p: *480-727-4646
>>> *email: *[email protected]
>>> *git: *https://github.com/alee
>>> *orcid: *https://orcid.org/0000-0002-6523-6079
>>>
>>> Center for Behavior, Institutions, and the Environment
>>> <https://complexity.asu.edu/cbie>
>>> Network for Computational Modeling in the Social and Ecological Sciences
>>> <https://www.comses.net/>
>>>
>>>
>>>
>>>
>>> On Tue, Mar 18, 2025 at 5:40 PM Paola Di Maio <[email protected]>
>>> wrote:
>>>
>>> I do teach prompt engineering, and would encourage a Carpentries
>>> module/course/lesson. The real question is learning how to use AI
>>> intelligently and critically.
>>>
>>> Materials on how to use AI for learning to code are already plentiful,
>>> but working with specific platforms and packages may need refining or
>>> a more specialised LLM.
>>>
>>> In theory, the LLM can learn from your interaction, so there is plenty
>>> of scope for Carpentries Instructors to teach the LLM how to code as
>>> well :-)
>>>
>>>
>>> On Wed, Mar 19, 2025 at 8:26 AM Reed A. Cartwright <[email protected]>
>>> wrote:
>>>
>>> My experience with using AI for coding is that if you are not asking it
>>> questions from an algorithms class or similar, you get a lot of
>>> hallucinations that do not work, e.g. R packages that don't exist. You can
>>> ask it follow-up questions and it will eventually fix the issues, but that
>>> requires having a firm mental model and the ability to read code and know
>>> how it would work in practice.
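>>>
>>> As a small illustration of catching that kind of hallucination early, one
>>> can at least check that a suggested package is importable before building
>>> on it; the package name below is made up:
>>>
>>>     import importlib.util
>>>
>>>     # Package an LLM claims will solve the problem (hypothetical name).
>>>     suggested = "magicstats"
>>>
>>>     if importlib.util.find_spec(suggested) is None:
>>>         print(f"'{suggested}' is not installed and may not exist at all;")
>>>         print("check the package index (e.g. PyPI) before trusting it.")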
>>>
>>> I can see the utility of a Prompt Engineering Carpentries lesson, but I
>>> have no idea how to properly teach prompt engineering.
>>>
>>> --
>>> Reed A. Cartwright, PhD
>>> Associate Professor of Genomics, Evolution, and Bioinformatics
>>> School of Life Sciences and The Biodesign Institute
>>> Arizona State University
>>> Address: The Biodesign Institute, PO Box 876401, Tempe, AZ 85287-6401 USA
>>> Packages: The Biodesign Institute, 1001 S. McAllister Ave, Tempe, AZ
>>> 85287-6401 USA
>>> Office: Biodesign B-220C, 1-480-965-9949
>>> Website: http://cartwrig.ht/
>>>
>>>
>>> On Tue, Mar 18, 2025 at 5:04 PM Paul Harrison via discuss <
>>> [email protected]> wrote:
>>>
>>>
>>> I wonder if more could be said about *how* to demonstrate the use of
>>> LLMs. All the bad things people do with LLMs (and Stack Overflow) are
>>> opportunities to demonstrate a better way.
>>>
>>> So we could show getting some code from an LLM, and then the steps of
>>> examining variables and understanding their types, inserting debugging
>>> "print" statements, looking up documentation, considering alternative
>>> solutions, and explaining our thought process.
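>>>
>>> For instance, a minimal Python sketch of that demonstration (the function
>>> stands in for whatever the LLM produced; the data and names are invented
>>> for illustration):
>>>
>>>     # Code as an LLM might return it: it looks plausible but is unverified.
>>>     def summarise(records):
>>>         totals = {}
>>>         for name, value in records:
>>>             totals[name] = totals.get(name, 0) + value
>>>         return totals
>>>
>>>     # Invented input; the stray string is the kind of thing real data has.
>>>     records = [("wheat", 12), ("barley", 7), ("wheat", "3")]
>>>
>>>     # Examine the variables and their types before trusting the output.
>>>     for name, value in records:
>>>         print(name, repr(value), type(value))
>>>
>>>     # The printout shows a str among the ints, which is why
>>>     # summarise(records) raises a TypeError, and it points to the real
>>>     # fix: clean or convert the data before summing.
>>>     cleaned = [(name, int(value)) for name, value in records]
>>>     print(summarise(cleaned))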
>>>
>>> It's not so different from the skills needed to read other people's code.
>>>
>>> Since LLM output is random it's hard to script this fully, but that also
>>> seems in keeping with the Carpentries workshop format.
>>>
>>>
>>>
>>> --
>>>
>>> Dr Toby Hodges (he/him)
>>> Director of Curriculum
>>> The Carpentries | https://carpentries.org
>>>
>>> Schedule a meeting with me: https://calendly.com/tobyhodges
>>>

------------------------------------------
This list is for the purpose of general discussion about The Carpentries 
including community activities, upcoming events, and announcements.  Some other 
lists you may also be interested in include discuss-hpc, discuss-r, and  our 
local groups. Visit https://carpentries.topicbox.com/groups/ to learn more. All 
activity on this and other Carpentries spaces should abide by The Carpentries 
Code of Conduct found here: 
https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html

The Carpentries: discuss
Permalink: 
https://carpentries.topicbox.com/groups/discuss/Tdbbca9597a63abaa-Mb269ce6523ec06f146f85ac9
Delivery options: https://carpentries.topicbox.com/groups/discuss/subscription
