David Masterson <[email protected]> writes:

> "Dr. Arne Babenhauserheide" <[email protected]> writes:
>
>> Jim Porter <[email protected]> writes:
>>> I'd much rather my limited time and energy go towards building up the
>>> next generation of free software hackers than to reviewing the output
>>> of a statistical model so I can root out all the highly-plausible but
>>> nevertheless incorrect bits.

>> When I see signs of LLM use in suggested code, I usually don’t read on.
>> And if someone says “my AI said”, I don’t read on but ask them to
>> summarize the parts they verified. Reading LLM produced stuff is no fun.
>
> Probably a silly question, but what are the implications of using an LLM
> to analyze the suggested code?

If the suggested code is written by an LLM, that doesn’t change the
situation: I still have to also check the code myself.

I wouldn’t wave code through without reading it myself just because
there are no linter warnings, nor even because all tests pass, so I
certainly can’t wave it through just because the AI acks it.

An AI may help by adding extra checks, like a linter, but I always
have to check the code myself. And annoying contributors with "haggle
with the AI first" would be as bad as requiring reviewers to read
AI-generated code.

If you want to let an AI check code *in addition* to your own checking,
then verify its output and give the contributor a concise version of
the valid points, that can help. But it most likely won’t be fun for
you — much like dealing with tests that fail randomly.

Summary: Make sure that code you contribute is pleasant for people to
review. To do that, you always have to understand it yourself.

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein,
ohne es zu merken.
https://www.draketo.de
