Dan, thanks for taking care of this.
My overall, not-strongly-held take is that we shouldn't try to be overly proscriptive at this stage: wait and see whether a problematic pattern emerges, then deal with it.

But my main reason for weighing in: I haven't yet seen evidence that LLMs produce useful kernel changes, but AI is looking to be useful at finding bugs. If an AI-generated bug report comes in the form of a purported code fix, then it's "thanks for the bug report", delete the email, then get in and fix the issue in our usual way.

As we work through these issues, please let's not accidentally do anything which impedes our ability to receive AI-generated bug reports. If that means having to deal with poor fixes for those bugs then so be it - the benefit of the bug report outweighs the cost of discarding the purported fix.
