As a matter of fact, yes, I have some experience with ChatGPT. I want an ellipse to roll on a smooth line without slipping, and I tried to find the correct function linking the speed of the contact point to the rotational speed of the ellipse. Whatever I tried did not work, so I asked ChatGPT.
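For concreteness, here is the kind of relation involved. For a convex curve rolling on a straight line without slipping, the body's rotation angle tracks the tangent angle of the curve at the contact point, which suggests that contact-point speed divided by angular speed should equal the radius of curvature at the contact point. Below is a SymPy sketch of that computation under the standard parametrization; it is offered as a plausible check, not as the verified answer I was missing.

```python
import sympy as sp

a, b, t = sp.symbols('a b t', positive=True)

# Ellipse boundary, parametrized by t (note: t is the parameter,
# not the polar angle of the contact point).
x = a * sp.cos(t)
y = b * sp.sin(t)

# Arc-length rate along the boundary: ds/dt.
ds_dt = sp.sqrt(sp.diff(x, t)**2 + sp.diff(y, t)**2)

# Tangent angle of the boundary. When the ellipse rolls on a line,
# the body's rotation angle differs from this only by a constant,
# so their time derivatives agree.
psi = sp.atan2(sp.diff(y, t), sp.diff(x, t))
dpsi_dt = sp.simplify(sp.diff(psi, t))
# Mathematically, dpsi/dt = a*b / (a**2*sin(t)**2 + b**2*cos(t)**2).

# Rolling without slipping: contact-point speed / angular speed
# = ds/dpsi, the radius of curvature of the ellipse at the contact
# point, (a**2*sin(t)**2 + b**2*cos(t)**2)**(3/2) / (a*b).
rho = sp.simplify(ds_dt / dpsi_dt)
print(rho)
```

As a quick sanity check, substituting a = b gives rho = a: for a circle of radius a the contact point advances at exactly a times the angular speed.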
- It gave an answer it seemed fully confident of. I could easily try the answer; it was wrong.
- I told it it was wrong. The answer: "Very good observation! Now I give the fully correct answer." Wrong again.
- Several more cycles like this, and still no correct answer (I still do not have it).

Mine is an exceedingly simple application of an LLM. How is a novice supposed to check whether some reply given by an LLM does solve the SymPy issue at hand?

Peter

Oscar wrote on Wednesday, 4 February 2026 at 20:08:21 UTC+1:

> On Wed, 4 Feb 2026 at 17:29, Peter Stahlecker
> <[email protected]> wrote:
> >
> > I had wondered before why anybody would push a PR he/she did not do
> > him/herself (and might not even understand), but Jason told me people
> > are so eager to get into GSoC, and they need at least one PR merged.
>
> I think it is important to understand that AI gives people false
> confidence. The people doing this think that they do understand the
> code. They also believe that the code made with the help of the AI is
> better than what they would have produced without the AI. Actually the
> real problem here is not that they used AI to write the code; it is
> that they used AI instead of *reading* any of the existing code. If
> you didn't have AI, you would have to read the code before you could
> write anything.
>
> The AI gives them some little bit of code that looks understandable,
> and they think they understand it, but you can't truly understand a
> small piece of code without understanding all the code around it. True
> understanding of some code is not just understanding what it does but
> why it is the way it is rather than any of a number of alternatives
> that might superficially seem similar in the same context. You don't
> get that understanding if the AI takes you straight to seemingly
> working code.
>
> There are empirical studies now comparing programmers using AI and not
> using AI.
> It has been shown more than once, I think, that even experienced
> programmers using AI will produce more bugs but at the same time have
> more confidence in the code. It has also been shown that people/teams
> using AI can have reduced productivity but at the same time believe
> that their productivity has increased.
>
> Have you ever tried using something like ChatGPT, Peter?
>
> ChatGPT is a sickening thing to talk to. I don't think that people
> using LLMs to write emails and things understand just how much its
> language upsets me. I imagine that you can have a conversation like:
>
> You: Hey ChatGPT, I want to make a PR for GSoC and I want to fix issue
> 12345. I think maybe we can fix it by adding some code to make the
> thing negative.
>
> ChatGPT: Wow, that is an amazing idea Peter -- you are a genius! I'll
> write some code for that right now. Here you go:
> (lots of generic, samey-looking code)
> This will certainly fix the issue based on your very innovative and
> creative suggestion, and the code is
> * professionally written
> * passes all relevant requirements and coding standards
> This code based on your insightful idea will make an excellent PR --
> the SymPy maintainers will surely love this PR!
>
> --
> Oscar

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion visit https://groups.google.com/d/msgid/sympy/9ce00f59-ef00-4037-85ba-b58efa60ee7an%40googlegroups.com.
