There's been some momentum building for AGENTS.md files, both on the project and on the agent side:

    https://agents.md/

Same idea and benefits, but it might help to align folks on a "standard" that will work well across agents.

I also think that more and better code documentation can be very beneficial when using agents to help work out implementation details. I spent a bunch of time in January writing an introduction to Apache Ratis (Raft as a library: https://github.com/apache/ratis/blob/master/ratis-docs/src/site/markdown/index.md). The code itself is pretty well documented, but it was hard for me to build a mental model of how to integrate with it. AI was very effective at taking the granular in-code documentation and synthesizing an overview from it. Going the other way, the in-code documentation has made it possible for me to deep-dive the Ratis code to root-cause bugs, etc. Agents can get a lot out of good class- and method-level documentation.

-- Joel.

On 2/16/2026 8:03 PM, Bernardo Botella wrote:

Thanks for bringing this up, Stefan!!

A really interesting topic indeed.


I’ve also heard ideas around even having CLAUDE.md-type files that help LLMs 
understand the code base without having to do a full scan every time.

So, all in all, putting together something that we as a community agree 
describes good practices plus repository information, not only for the main 
Cassandra repository but also for its subprojects, will definitely help 
contributors adhere to standards, and help us reviewers ensure that at least 
some steps have been considered.

Things like:
- Repository structure: what each folder is for
- Test suites: how they work and how to run them
- Git commit standards
- Project-specific lint rules (like braces on new lines!)
- Preferred wording style for patches/documentation

Committed to the projects and accessible to LLMs, that sounds like really 
useful context for those types of contributions (which are going to keep 
happening regardless).
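For concreteness, a minimal sketch of what such a file could look like (the paths, build commands, and ticket format below are illustrative assumptions, not verified project guidance):

```markdown
# AGENTS.md

## Repository structure
<!-- Illustrative paths; confirm against the actual repository layout. -->
- `src/java/` — server sources
- `test/unit/` — JUnit unit tests
- `test/distributed/` — in-JVM distributed tests

## Tests
<!-- Example command; the real build targets may differ. -->
- Run a single unit test class: `ant test -Dtest.name=MyTest`

## Code style
- Braces go on a new line (enforced by the project's lint rules)

## Commits
- One logical change per commit
- Reference the relevant JIRA ticket (e.g. CASSANDRA-XXXXX) in the message
```

Whatever the exact format, the nice property is that it lives in the repo as 
plain Markdown, so humans and agents read the same thing.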

So curious to read what others think.
Bernardo

PS. I totally agree that this should change nothing about the quality bar for 
code reviews and merged code.

On Feb 16, 2026, at 6:27 PM, Štefan Miklošovič <[email protected]> wrote:

Hey,

This happened recently in kernel space. (1), (2).

As I understand it, you can point an LLM at these resources and it then 
becomes more capable when reviewing patches or even writing them. It is a 
kind of guide / context provided to the AI prompt.

I can imagine we would compile something similar and merge it into the repo; 
then anybody prompting with it would have an easier job, be less error prone, 
adhere to the code style, etc.

This might look like a controversial topic, but I think we need to discuss 
it. The usage of AI is becoming more and more frequent. From Cassandra's 
perspective there is just this (3), but I do not think we reached any 
conclusions there (please correct me if I am wrong about where we are at 
with AI-generated patches).

This is becoming an elephant in the room; I am noticing that some patches 
for Cassandra were prompted entirely by AI. I think it would be much better 
if we made it easy for everybody contributing like that.

This does not mean that we, as committers, would blindly believe what AI 
generated. Not at all. It would still need to go through formal review like 
anything else. But acting like this is not happening, and that people are 
just not going to use AI when trying to contribute, is not right. We should 
embrace it in some form ...

1) https://github.com/masoncl/review-prompts
2) https://lore.kernel.org/lkml/[email protected]/
3) https://lists.apache.org/thread/j90jn83oz9gy88g08yzv3rgyy0vdqrv7
