Hi,

On Mon, Feb 9, 2026 at 10:00 PM Ralf Gommers via NumPy-Discussion
<[email protected]> wrote:
>
>
>
> On Mon, Feb 9, 2026 at 6:23 PM Matthew Brett via NumPy-Discussion 
> <[email protected]> wrote:
>>
>> Hi,
>>
>> I thought your (Ralf's) distinction was interesting, so here's some
>> more reflection.  The distinction starts at:
>>
>> > we don't prescribe to others how they are and aren't allowed to contribute 
>> > (to the extent possible)
>>
>> I think it's correct that it's not sensible for policies to reflect
>> things like dislike of AI's use of energy or the effects on the
>> environment of AI data centers.   However, it seems obvious to me that
>> it is sensible for policies to take into account the effect of AI on
>> learning.
>
>
> Why would that be obvious? It seems incredibly presumptuous to decide for 
> other people what methods or tools they are or aren't allowed to use for 
> learning. We're not running a high school or university here.
>
> At most we can provide docs and a happy path for some types of tools, but 
> that's about it. We cannot prescribe anything.
>
>>  But why the distinction?
>>
>> On reflection, it seems to me that policies should reflect only the
>> interests of the project, but those interests should be seen broadly,
>> and include planning for future community and maintainers.  Thus,
>> environmental concerns might well be important in general, but do not
>> bear directly on the work of the project.  Therefore the project's
>> managers have no mandate to act on that concern, at least without
>> explicit consensus.   However, any sensible project should be thinking
>> about the state of maintenance in 5 or 10 years.  Therefore, the
>> project does have a potential mandate to prefer tools that will lead
>> to better overall understanding, communication, community building, or
>> code quality in the future.
>
>
> This also presumes that you, or we, are able to determine what usage of AI 
> tools helps or hinders learning. That is not possible at the level of 
> individuals: people can learn in very different ways, plus it will strongly 
> depend on how the tools are used. And even in the aggregate it's not 
> practically possible: most of the studies that have been referenced in this 
> and linked thread (a) are one-offs, and often inconsistent with each other, 
> and (b) already outdated, given how fast the field is developing.
>
> It's easy to think of ways that using AI tools for contributing could help 
> with learning:
>
> - Simple time gain: once one has done the same thing a number of times and 
>   it becomes routine, automate it with AI so the contributor can spend more 
>   time focusing on learning about new topics.
> - Improved code quality and internal consistency from letting AI tools fix 
>   up and verify design rules (e.g., how type promotion is handled), which 
>   will make it possible to learn the concepts from the code base in a more 
>   consistent fashion.
> - Use as a brainstorming tool to suggest multiple design options, 
>   broadening discovery.
> - We could ask AI tools to write internal design documentation, of the kind 
>   that only a few handfuls of maintainers would be able to write (but 
>   almost never do, because we're too busy). There are important parts of 
>   the code base that have no documentation beyond some scattered code 
>   comments.
> - Give contributors feedback that the maintainers often don't have the time 
>   or interest to give, in a timely fashion or at all.
> - Writing throwaway prototypes of ideas for NumPy that would otherwise take 
>   too long to implement and would never get done, thereby allowing one to 
>   learn whether something is feasible at all, or a good idea.
> - Learning to use the AI tools themselves: this may well become an 
>   essential skill for most software-related roles in the near future.
>
> The same goes for the future community & new maintainers:
>
> - Current maintainers may enjoy both learning something new and automating 
>   the more tedious parts of maintenance, so they can focus on the more 
>   interesting parts. That will aid maintainer retention.
>
>   Ilhan's point is a great example here. He just finished a massive amount 
>   of work rewriting code from Fortran into C, and has now found that AI 
>   tools can be quite helpful in that endeavour (while 6 months ago they 
>   weren't). This work must have been extremely tedious (thanks again for 
>   biting that bullet Ilhan). And it really wasn't fun to review either.
>
> - New contributors may default to working with these tools more often than 
>   not, and be turned off from contributing by rules that say they cannot 
>   use their default workflow.
>
> I'm sure it's not hard to think of more along these lines, but I hope the 
> point is clear.

Yes - but that's a different point - I was only pointing out that
learning is relevant, and that it is worth discussing.

Cheers,

Matthew
_______________________________________________
NumPy-Discussion mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3//lists/numpy-discussion.python.org
Member address: [email protected]