On Mon, Feb 09, 2026 at 11:39:39AM -0800, Rolando Abarca via Chicken-hackers 
wrote:
> Hi Felix (and chicken-hackers),
> 
> Thanks for bringing this to the list. I think it's an important
> conversation to have.

Hi Rolando,

Thank you for engaging in a thoughtful and considerate way.  This AI
stuff is very divisive, and it's great we can have a respectful
conversation about it.

> First, I want to be clear: I'm totally fine if this doesn't get merged. The
> feedback you've already shared has been incredibly valuable, and that alone
> made the exercise worthwhile. Getting insight into how CHICKEN's build
> system works, the static vs dynamic considerations, and the cond-expand
> approach... that's exactly the kind of learning I was hoping for.

It's excellent that you're intending to learn from this, and I hope that
you'll consider becoming a full-time contributor.  But I have to note
that this way of learning is kinda new, and I'm not sure we have the
resources for it at the moment.  Our community is quite small and the
active core contributors are even fewer.

Traditionally, one would start by submitting small patches to the core,
which might get approved or rejected, but at least you'd get feedback,
and giving that feedback would be a relatively light burden on the core
contributors because we're talking about small amounts of code.
It is also clear that the submitter is still learning: there will be
somewhat basic mistakes indicating their level of familiarity with the
codebase.

The patch you submitted is medium-sized and a somewhat ambitious
change, which (no offense) you probably would have been unable to do
unassisted.  This kind of dynamic fundamentally changes the equation
for outside contributions.

I like the change, don't get me wrong, but reviewing such a big pile
of code is mentally hard work.  Add to that the fact that you can't
trust a single line of code because LLMs tend to intersperse decent
code with weird artifacts.  This means it requires extremely thorough
vetting, which will be very exhausting for whoever takes on the job of
reviewing.

> On the broader topic of LLM-assisted contributions: I work in BigTech, and
> I can tell you things are moving faster than most people outside these
> environments would expect.  AI-assisted development is becoming the norm,
> not the exception. I think the CHICKEN community (and really, any open
> source project) will need to figure out how to handle this sooner rather
> than later.

It's very different from working at a big company, or even on a big
open source project.  We're nothing like those.

> One idea: what if there was a way for agents to self-check contributions
> before submission? Something like a CONTRIBUTING_AI.md or similar document
> that outlines the specific code quality concerns, style expectations, and
> common pitfalls. An agent could review the PR against those criteria before
> a human even sees it. This wouldn't replace human review, but it could
> raise the quality bar and focus reviewer attention on what matters most.

I have no idea how that would even work.  Perhaps you could make a patch
submission so we can see what that would look like?  But perhaps we
should first establish some ground rules about AI usage.  If we want
to blanket ban any AI-assisted contributions, there's no point to
including such a file.  If anything, the presence of such a file would
encourage *more* AI-assisted contributions, increasing the load on our
small team even more.

> That said, if the community isn't ready to tackle this now, I completely
> understand. This work isn't blocking anything critical. It started as a
> personal exploration to learn CHICKEN's internals, and it served that
> purpose well. But if there's interest, it could also be an opportunity to
> start thinking through how the project handles this new wave of
> contributions.

I think that's a good idea; it seems our industry is headed that way,
whether we like it or not.

> Either way, I appreciate the thoughtful engagement.

Likewise!

Cheers,
Peter
