> On Sep 22, 2025, at 5:53 AM, Yongjun Hong <[email protected]> wrote:
>
> Hello everyone,
> ...
> The core idea: The heart of my idea is an AI assistant I'm calling the
> "Maintainer's Co-Pilot" (MCP). But instead of trying to automatically
> scrape decades of data (which sounds impossibly hard), my proposed approach
> is to start by *teaching* it. The idea is that we, as maintainers, would
> feed it the important stuff: PMC meeting minutes, design documents, and
> architectural decision records (ADRs). (This is getting easier now that
> tools like Google Meet can automatically generate meeting transcripts). The
> process would be like curating the project's 'brain'.
>
> With that foundation, imagine if we could:
> - Have the MCP draft a new ADR after being fed the notes from a few
> different design meetings.
> - Let a new contributor ask, "Hey, what's the history behind the new auth
> module?" and get a real, sourced answer like, "Based on the PMC minutes
> from last September, the team chose option A because..."
> - Connect the dots between scattered discussions on a single topic that
> happened months apart.
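
To make the "curate, then ask with sources" loop above concrete, here is a rough sketch in Python. Everything in it (the document labels, the toy keyword retriever, the prompt shape) is an illustrative assumption rather than a description of any existing MCP tooling; a real assistant would presumably swap in proper embedding-based retrieval and an actual LLM call.

# Illustrative sketch only: a hand-curated corpus, a naive keyword retriever,
# and a prompt that asks the model to cite the curated sources it relied on.
# All document labels and example text here are hypothetical.
from dataclasses import dataclass

@dataclass
class CuratedDoc:
    source: str  # maintainer-supplied label, e.g. "PMC minutes, September"
    text: str    # the curated content: minutes, an ADR, a design note

def retrieve(question: str, corpus: list, k: int = 2) -> list:
    """Rank curated documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(terms & set(d.text.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, hits: list) -> str:
    """Assemble a prompt that restricts the model to the curated excerpts
    and asks it to cite the bracketed source labels."""
    context = "\n\n".join(f"[{d.source}]\n{d.text}" for d in hits)
    return ("Answer using only the excerpts below, and cite the bracketed "
            "source labels you relied on.\n\n"
            f"{context}\n\nQuestion: {question}\n")

corpus = [
    CuratedDoc("PMC minutes, September",
               "Auth module: the team chose option A over option B for simpler key rotation."),
    CuratedDoc("ADR-012",
               "Decision: adopt option A for the new auth module; revisit after the next major release."),
]

question = "What's the history behind the new auth module?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)  # this prompt would then go to whichever LLM backs the assistant

The point of the sketch is that the answer can only ever be drawn from documents the maintainers fed in, and the citation back to the PMC minutes or the ADR is part of the output by construction.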
This is an intriguing idea. One of my struggles with LLMs and so-called “AI” in general is always the question: can I trust the data? Having a known-good data set, which we, the subject matter experts, curate and maintain, is an interesting way to ensure that the answers are trustworthy - or at least more so than asking ChatGPT or whatever.

Bill’s reaction is, I presume, rooted in the hallucinations that the various AI assistants produce all the time, and that’s a valid concern. However, the reality is that people are *already* using various random AI assistants to answer exactly the kinds of questions that you reference here, and getting suspect answers because the underlying data is not trustworthy. Taking control of that reality, rather than denying it, seems like a reasonable way forward.

That said, while the idea is interesting to me, it’s utterly outside of my expertise to contribute to such an effort on the technical front.

—
Rich Bowen
[email protected]
