On Thu, 5 Feb 2026 at 09:46, Peter Stahlecker <[email protected]> wrote:
>
> I feel the PRs for sympy (and likely for other packages, too) serve a
> dual purpose:
>
> 1. Make sympy even better.
> 2. When the reviewers discuss with submitters, the (mostly young)
> submitters learn something.
>
> Point 1 could be taken over by AI sometime, but then Point 2 would be
> dead. At least in my view point 2 is a VERY important one, and should
> not be given up lightly.
I was thinking about this a bit. If you look at the most recent PRs, almost half of them are from one person, and all of their PRs have been merged by me (except one that got closed):

https://github.com/sympy/sympy/pulls?q=is%3Apr+is%3Amerged

That person is a new contributor, and his AI disclosure says that he is using AI but that he checks and edits the code afterwards. I actually can't tell that he is using AI because he is using it in a reasonable way: the end result is still what you would have ended up with if the code had been written by someone who knows what they are doing and is not using AI. Note that he does not use LLMs to write comments or messages, so the communication is all human.

Those pull requests are mostly for a part of the codebase that I don't know very well, so I'm not really able to verify that everything is correct (I could with more effort but don't want to). There are no maintainers who know that part well, but I am happy to review and merge those PRs based largely on trust that he knows what he is doing and is improving things. You can see the beginning of that trust developing through the human communication here:

https://github.com/sympy/sympy/pull/28994

Over time, through human communication and PRs, the trust and mutual understanding grow, and we end up at a point where it becomes easy to review and merge the PRs.

The problems with a lot of the other PRs are that:

- It is hard to build trust when there is no actual human communication (people using LLMs to write messages).
- Many of those people are demonstrably less experienced and so cannot really be trusted in the competency sense either.
- They are often trying to make changes in parts of the codebase that might typically be considered "more advanced", where you actually need the author to have a higher level of competency/trust (the AI made them over-confident; without AI they would have looked at the code and decided not to mess with it).
- Before AI they would have been able to earn some trust just from the fact that they figured out how to produce seemingly working code and make the tests pass, but that is now meaningless.

Basically, we can't just merge all PRs on trust, but we also can't review all PRs on a zero-trust basis (that is far too much work). Right now we are getting too many PRs that would need zero-trust review, and at the same time it has become really hard to build trust.

I think I am realising now how damaging the LLM-written messages are. Not having actual human-to-human communication is a massive barrier to trust building.

--
Oscar
