I believe we have some consensus around the points that I summarized.

I would add the context that the ASF is still formulating foundation-wide
policy and, in the meantime, expects each project to develop its own
approach.

Please read, for example:

https://github.com/ossf/wg-vulnerability-disclosures/issues/178

Now, I would like to create a SECURITY.md that we can reference, with
content seeded from our existing security documentation.
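To make the starting point concrete, here is a minimal sketch of what such a SECURITY.md could look like. The headings and wording are illustrative only, not agreed project text; it assumes the standard ASF practice of private reporting to the Security Team:

```markdown
# Security Policy

## Reporting a Vulnerability

Please do not report security vulnerabilities through public GitHub
issues or the public mailing lists.

Instead, report them privately to the ASF Security Team at
security@apache.org, following the ASF vulnerability handling process.
We will acknowledge your report and keep you informed as we triage,
fix, and disclose the issue.
```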

AI-assisted coding is OK (human firmly in the middle); AI slop is not.



On Tue, Jan 20, 2026 at 9:32 PM James Dailey <[email protected]> wrote:

> +1
>
> to second Adam's mention of
> https://github.com/databasus/databasus/issues/145
>
> My takeaways are:
>
>    1. All devs use AI or assisted coding in some form (see IDE
>    autocomplete) - not a problem.
>    2. Many devs use AI tooling to be more productive; new code snippets
>    are proposed and reviewed - not a problem as long as a human is in the
>    middle, and perhaps this doesn't need to be disclosed, as it is
>    becoming common practice.
>    3. Some devs use AI tools to "vibe code", which is taken to mean "I
>    don't understand what the code does" - but that may or may not be the
>    case. It should be disclosed, in my view.
>    4. Some agentic models are creating slop code and posting it to
>    projects without humans involved - NOT OK. Disallowed. Spam of a
>    different color.
>
>
>
> On Tue, Jan 20, 2026 at 10:43 AM Adam Monsen <[email protected]> wrote:
>
>> It sure is interesting watching this space evolve.
>>
>> I wanted to share a related recent experience of mine that gives me both
>> concern and hope, and has led me to a further recommendation for
>> contributors. First, please review this tiny PR in the repo for the
>> fineract.apache.org website:
>>
>> https://github.com/apache/fineract-site/pull/43
>>
>> There's more conversation than code change in that PR, so it's easy to
>> characterize as some form of incompetence, be it human or AI. The
>> initial description is mostly incorrect: the font file is in fact found
>> (using the provided grep test!), and the PR deletes an .xcf source file
>> that is useful for future edits to the derived .png file. The responses
>> from @Nitinkamlesh mostly didn't make sense, and they dropped
>> communication altogether when I asked direct questions about AI.
>>
>> I'm concerned that this was done without transparency. Had they opened
>> with "I'm an AI, here's how/why I'm doing this, here's how to work with the
>> human operator" it would have been much easier and faster to resolve, and
>> would have engendered rather than destroyed trust.
>>
>> The part that gives me hope is that I didn't use any new or fancy AI
>> detector tool, and I didn't need to. I think we can double down on
>> fundamentals to immunize ourselves against future malicious or
>> incompetent behavior.
>>
>> To my previous suggestions in Re: Ai assisted Dev on Apache Fineract
>> <https://lists.apache.org/thread/q1fnzbodv5rbxjogmnxktpwvbb4qjp54>, I'd
>> add a general recommendation/reminder to all contributors: *Be
>> transparent*. Share your env/tooling/experiences. Ask for help as you
>> scour docs, code, PRs, and issues, check with actual users, chat, email,
>> write spikes, run builds/tests, and write new tests - all of that with
>> and without AI. This is foundational computer science and FOSS community
>> competence that we should all continually seek to improve.
>>
>
