Jean Louis <[email protected]> writes:
> despite the absence of any substantive issue.

So you’re in a research stage and people (including me) misunderstand
you as wanting to discuss pros and cons of using LLMs for patches?

>> --skipped unchecked AI code--
>
> You see, that’s precisely what concerns me—the prejudice against code
> you initially refused to read, only to later engage with it. My worry
> isn’t about the code itself or how it’s generated, but rather the
> obvious bias reflected in your dismissive note:

And you missed the point that this was a correct expectation. I wasted
my time reading it because it was objectively bad.

I skipped it, because I didn’t want to repeat what you did: fill the
space with something that would waste the time of people reading it.

Why did you paste it into the mailing list if you didn’t want someone to
look at the code?

> By contrast, I focus on what I excel at: crafting compelling,
> profitable quotations.
…
> I don't want to program.

Then don’t paste code. Don’t speak with others in code. Sounds like
you’re doing management, not programming, so when you pasted the code
here, are you really surprised that the result is bad?

And that you didn’t realize it because you didn’t even read it?

Pasting that code was the perfect example of the danger of using LLMs
for patches.

>> Isn’t that the job of M-x occur which already exists?
>
> I haven't used `occur` nearly as much as you have—clearly—so it didn't
> occur to me that it would count matching lines.
…
> - I obtained a perfectly usable function—one that directly counters
>   the sweeping claims I keep reading here about how all LLM-generated
>   code is inherently broken. No edits. No debugging. It just worked.

Badly. While there already exists a good solution. I expect that at some
point LLMs *will* tell you to use M-x occur instead and the result will
be better.
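
(As an aside: stock Emacs already reports this — the `occur` buffer header
shows the match count, and `M-x count-matches` (alias of `how-many`) counts
matches of a regexp directly. For anyone curious what that amounts to, here
is a minimal sketch of the underlying loop; the function name
`my-count-matches-in-buffer` is made up for illustration, it is not the code
from the thread:

```elisp
;; Sketch: count how often REGEXP matches in the current buffer.
;; Stock Emacs already provides this as `count-matches' / `how-many';
;; this just shows the straightforward search loop behind the idea.
(defun my-count-matches-in-buffer (regexp)
  "Return the number of matches for REGEXP in the current buffer."
  (save-excursion
    (goto-char (point-min))
    (let ((count 0))
      (while (re-search-forward regexp nil t)
        (setq count (1+ count)))
      count)))
```

Usage: `(my-count-matches-in-buffer "foo")` in any buffer. In practice,
prefer the built-in command.)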

But you’re right that this actually is beside the point.

Sending patches without reviewing them yourself will still be a no-go,
because you’re offloading your task (checking whether what you’re
suggesting -- the code -- is fit for review) onto the people reading it.

That’s the point.

>> If a human contributor writes this code, I correct them once and then
>> they know what to look out for. If someone just pastes LLM code, I could
>> just as well talk into the void because the next LLM code will likely
>> contain similar mistakes.
>
> And then he moves the goalposts.
>
> First, the code was dismissed without a glance—"skipped unchecked AI
> code." Now that it's been read, the criticism shifts from "it's
> probably wrong" to "it's inefficient." 

No, the criticism wasn’t "it’s probably wrong", but "you didn’t check
it, why should I waste my time with it?".

When I decided to check it after all, I found that I really wasted my
time and should not have.

>> The point is that the contributors don’t even know. Paste something
>> unchecked into a mailing list to force everyone to either waste time or
>> ignore it.
…
> "I didn't want to waste time reading it, but now that I have, let me
> tell you why your solution that solved your problem is actually a
> waste of everyone's time."
…
> Do you see the pattern? First it's "I won't engage." Then it's "I
> engaged and found it lacking." The goalposts keep moving because the
> real issue was never the code—it was who wrote it and how.

No, it was that you pasted stuff unchecked into a mailing list.

> apprenticeship. I don't need to earn my right to ask questions by
> first mastering the internals of `occur`. I needed a function that
> counts matches, and now I have one.

If you don’t understand it, and you know you don’t understand it, keep
the code to yourself. Don’t get others to read what you didn’t read. Or
-- if you want to share it -- read it first. Maybe ask the LLM about
parts you don’t understand until you do understand them.

>> Before you share LLM output, it’s your job to check it for validity.
>
> Exactly—and that's the tell, isn't it?
>
> The conversation was never about the function. The function was just
> an illustration, a passing example of how I *found* something
> useful. But you latched onto it like a lifeline

I commented on that code, because people trying to get bad LLM-generated
code into a mature codebase is a problem I am actually confronted with.
And I’ve already hit exactly the same issue (O(n²) runtime). And it
takes far too much time to explain why this can’t go in.

This is an actual pain point I already have in other projects.
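
(For context, that accidental-quadratic pattern is easy to produce. A
hedged sketch -- not the code from the thread, and the names `collect-slow`
and `collect-fast` are invented for illustration -- of the classic case in
Elisp, and the linear fix:

```elisp
;; Quadratic: `append' copies the whole accumulated list on every
;; iteration, so n additions cost O(n^2) work in total.
(defun collect-slow (items)
  (let ((result '()))
    (dolist (item items)
      (setq result (append result (list item))))
    result))

;; Linear: `push' prepends in O(1); one `nreverse' at the end
;; restores the original order.
(defun collect-fast (items)
  (let ((result '()))
    (dolist (item items)
      (push item result))
    (nreverse result)))
```

Both return the items in order; only the second scales.)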

It is a substantial problem I actually face, and you’re dismissing it
as "despite the absence of any substantive issue".

> Meanwhile LLM is useful to millions of programmers out there.

And if they check the code themselves and understand it before they post
it somewhere, that’s all OK. If they overlook something, that’s still
OK, because next time they won’t overlook it.

It stops being OK if I have to explain why a PR can’t be accepted and
they don’t understand that they shouldn’t have posted it in that state
because they never even read the code so they didn’t know the state it
was in.

>> If you don’t know whether something generalizes, don’t claim it does.
>
> I said I can see humanity advancing, that these tools point somewhere
> new. Your response? "You did not try. You know that you don't know."
> As if I claimed certainty.

That part wasn’t about generalizing. It was about claiming something
while you know that you don’t know.

> I said what I see looks like envy disguised in ethics. Your response?
> "This sentence is ad-hominem. A long form of 'you're just envious'."

You said that it *is*, not that it looks like it.

Pointing at a perceived character flaw is ad-hominem.

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein,
ohne es zu merken.
https://www.draketo.de
