On Wed, Jan 07, 2026 at 03:50:30PM -0800, [email protected] wrote:
> Lorenzo Stoakes wrote:
> [..]
> > And it's not like I'm asking for much, I'm not asking you to rewrite the
> > document, or take an entirely different approach, I'm just saying that we
> > should highlight that :
> >
> > 1. LLMs _allow you to send patches end-to-end without expertise_.
> >
> > 2. As a result, even though the community (rightly) strongly disapproves of
> >    blanket dismissals of series, if we suspect AI slop [I think it's useful
> >    to actually use that term], maintainers can reject it out of hand.
> >
> > Point 2 is absolutely a new thing in my view.
>
> I worry what this sentiment does to the health of the project. Is
> "hunting for slop" really what we want to be doing? When the accusation
> is false, what then?

Yeah that's a very good point, and we don't want a witch hunt.

In fact, in practice I've already had discussions with other maintainers about
series that seemed to have LLM elements in them (entirely in good faith, I might
add).

Really I'm talking about series that are _very clearly_ slop.

And it's about the asymmetry between maintainer resource and the capacity for
people to send mountains of code.

The ability to send things completely end-to-end is the big difference here
vs. other tooling.

>
> If the goal of the wording change is to give cover and license for that
> kind of activity, I have a hard time seeing that as good for the
> project.

I agree entirely, and I absolutely do not want that.

>
> It has always been the case that problematic submitters put stress on
> maintainer bandwidth. Having a name for one class of potential
> maintainer stress in a process document does not advance the status quo.
>
> A maintainer is trusted to maintain the code and has always been able
> to give feedback of "I don't like it, leaves a bad taste", "I don't
> trust it does what it claims", or "I don't trust you, $submitter, to be
> able to maintain the implications of this proposal long term". That
> feedback is not strictly technical, but it is more actionable than "this
> is AI slop".

I really don't think it is the case that maintainers can simply dismiss an
entire series like that.

The reason is that, unlike e.g. a coccinelle script, such a series won't be
limited to cleanups, narrowly-scoped fixes, or the like.

LLMs can uniquely allow you to send a series that is entirely novel,
introducing new functionality or making significant changes.

For good reason, the community frowns upon simply rejecting that kind of
series without providing technical feedback.

There's a spectrum of opinions on these tools - on the extreme positive
side you have people who'd say we _should_ accept such series, or at least
review them in detail each time. On the extreme negative side, people would
say you should reject anything like this altogether, even if the submitter
doesn't state that an LLM helped them.

I think you'd probably agree both extremes are silly, but even many
moderate positions leave the question of 'should we review these in
detail?' rather blurry.

It therefore isn't entirely clear that a maintainer dismissing these kinds
of series out of hand wouldn't be violating the norm of 'don't reject
series without technical reasoning'.

It would therefore be useful for the document to make clear that they in
fact can.

Otherwise I fear we don't have an answer for the asymmetry issue. And as I
said to Linus, I think it'd be useful to be able to reference the document
in doing so.

Cheers, Lorenzo
