On Fri, 6 Feb 2026 at 18:08, Nathan via NumPy-Discussion
<[email protected]> wrote:
>
> I’m nervous about subtle inconsistencies and hallucinations, especially from 
> contributions that are mostly vibe-coded. To me, that means the code needs 
> much more careful review than human-written contributions because the nature 
> of the errors made is different.

AI code does need more careful review. Using AI to write good code
means doing that careful review *before* opening a PR. A policy should
make it clear that vibe coding is not acceptable and that the author
needs to have put effort into understanding the problem, checking the
code, and so on. That won't stop vibe-coded PRs, but at least it sets
expectations and gives you a policy to point at when closing a PR.

I think that accepting AI-generated PRs requires greater
human-to-human trust: the reviewer needs much more confidence that the
author checked things carefully themselves. Otherwise the effort ratio
between author and reviewer moves massively in the wrong direction.

In the past, new contributors could earn trust by demonstrating that
they had written some complicated code, but AI makes it easy to
produce superficially good code. That makes it harder to judge whether
the code actually is good, and therefore harder to build trust in the
author, precisely when you need more trust in them.

--
Oscar