Hi Matteo,

You can use whichever you prefer, Laczen or Jehudi.
I agree that slop PRs should be rejected. The paragraph, however, excludes
a perfectly reasonable workflow in which AI is used to generate a PR, the
result is reviewed by a human, and, if acceptable, forwarded to a bot or
agent that does the submission. With the wording I proposed, such use can
be allowed while keeping the possibility to reject bad use.

Kind regards,
Jehudi

On Mon, Feb 23, 2026, 18:19 Matteo Golin <[email protected]> wrote:

> Hi Laczen (or Jehudi from your sign-off, please tell me which you'd prefer
> I use),
>
> Using AI to help write descriptions for a non-native speaker, in my
> opinion, falls under reasonable and fair use of AI. I think that use is
> covered by the policy's requirement to add some personal competency to the
> AI output, as a non-native speaker would be doing to get corrections in
> their description. However, I completely disagree that it would be fair
> use to automate the creation and submission of a PR by a bot or agent,
> which is what the paragraph on automated tooling covers. That has no
> human-in-the-loop interaction and is slop, in my opinion. So I would not
> support that change to the policy wording.
>
> Best,
> Matteo
>
> On Mon, Feb 23, 2026 at 12:05 PM Laczen JMS <[email protected]> wrote:
>
> > Hi Matteo,
> >
> > IMHO the last paragraph is too stringent. I can imagine situations
> > where the code changes have been made by humans but AI is used to
> > analyze them and create context for reviewers, especially for persons
> > who are non-native English speakers. Automating the creation and
> > submission of a PR by a bot or agent is a natural next step.
> >
> > I would rephrase the last paragraph as:
> >
> > "Posting AI-generated content to issues or PRs via automated tooling
> > such as bots or agents can lead to low-value PRs or issues. Users
> > who post such contributions can be banned and/or reported to GitHub."
> >
> > This allows rejecting bad use as intended, while keeping the path open
> > for good use.
> >
> > Kind regards,
> >
> > Jehudi
> >
> > On Mon, 23 Feb 2026 at 16:28, Matteo Golin <[email protected]> wrote:
> > >
> > > Hello everyone,
> > >
> > > Following the discussion thread on this topic [1], I am starting a
> > > vote to adopt the Matplotlib AI use policy [2] in NuttX.
> > >
> > > The proposal is to include the policy verbatim in the contributing
> > > guide under the new header "Restrictions on Generative AI usage". I
> > > have changed "our discourse server" to "our mailing list/forums" in
> > > order to match the methods NuttX uses for communication.
> > >
> > > The policy is copy-pasted below with that change:
> > >
> > > """
> > >
> > > We expect authentic engagement in our community.
> > >
> > > - Do not post output from Large Language Models or similar generative
> > >   AI as comments on GitHub or our mailing list/forums, as such
> > >   comments tend to be formulaic and low content.
> > >
> > > - If you use generative AI tools as an aid in developing code or
> > >   documentation changes, ensure that you fully understand the
> > >   proposed changes and can explain why they are the correct approach.
> > >
> > > Make sure you have added value based on your personal competency to
> > > your contributions. Just taking some input, feeding it to an AI and
> > > posting the result is not of value to the project. To preserve
> > > precious core developer capacity, we reserve the right to rigorously
> > > reject seemingly AI generated low-value contributions.
> > >
> > > In particular, it is also strictly forbidden to post AI generated
> > > content to issues or PRs via automated tooling such as bots or
> > > agents. We may ban such users and/or report them to GitHub.
> > > """
> > >
> > > Best,
> > > Matteo
> > >
> > > [1]: https://www.mail-archive.com/[email protected]/msg14336.html
> > > [2]: https://matplotlib.org/devdocs/devel/contribute.html#restrictions-on-generative-ai-usage
