> I do not trust AI, inherently. It might guide you at best. Sometimes
> even that is not true. I believe that by introducing these prompts /
> context files we will make it produce nonsense less often.

Personally, I have zero interest in engaging with PRs from AI.  I'm not here 
for the workload; I'm here for the people.

I can see AI is gaining traction for writing ephemeral software, but that's not 
us.  Infrastructure software is a different category, and the trust our users 
place in us to write the code their production databases run on has to remain a 
human relationship, I believe.  I think this applies to all ASF projects as 
well (the Community over Code ethos).

Our rule is that three people need to be involved in a contribution, with at 
least two being committers.  Common sense can reduce this to two people, one of 
them a committer, on trivial parts of the code.  If AI writes a PR and the 
first human presents it as their own work, taking ownership of having read and 
understood the code in the PR, then I'm fine being a reviewer.  (And so long as 
all legalities are ok.)  The Cassandra codebase is never going to be ephemeral 
– if the reviewer has to review code line by line, then I expect the author to 
have done so too.  Contributions are not point-in-time solutions; they are 
additions to a codebase that needs to be maintained and to evolve.  Longevity 
is more important to us than customisability (both are important), and 
mistaking the two is an architectural dead end.

As with the dependabot PRs, I would rather see those contributions claimed and 
authored by a human (rather than a bot or AI account), with commit messages 
adding extra context where due.  I don't believe this slows us down in taking 
advantage of AI, especially with the right docs in place like AGENT.md and 
SpecKit; it just keeps the human interaction in the project a first-class 
tenet.  Beyond fostering trust, I think this is paramount because the biggest 
risk isn't in failing to accept all the right answers fast enough, but in 
failing to see right answers to wrong questions (e.g. longevity before 
customisability).

Without a doubt, we are becoming orders of magnitude more productive by taking 
advantage of AI, especially for assistance and debugging.  These areas are 
particularly safe because they remain within our closed loops, avoiding the 
proliferation of slop that can easily overwhelm us.  I can also see we're going 
to be forced to use these tools in defence against the slop that's coming our 
way.

Triaging, debugging, and reporting on docs, tickets, PRs, CI, and bugs – yes 
please!  But with a focus on bringing people together.

p.s. the em-dashes are mine, thanks.
