QEMU's AI-generated content policy does not yet flesh out the exception
process.  Do so here, while keeping things informal: ask contributors to
explain what they would like to use AI for, and let them reach a
consensus with the project on why it is credible to claim DCO
compliance in that specific scenario.

In other words, exceptions do not "solve the AI copyright problem".  They
take a position that a reasonable contributor could hold, and assert that
we're comfortable with the argument.

Suggested-by: Daniel P. Berrangé <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
 docs/devel/code-provenance.rst | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/docs/devel/code-provenance.rst b/docs/devel/code-provenance.rst
index dba99a26f64..103e0a97d76 100644
--- a/docs/devel/code-provenance.rst
+++ b/docs/devel/code-provenance.rst
@@ -326,8 +326,13 @@ The QEMU project thus requires that contributors refrain from using AI content
 generation agents which are built on top of such tools.
 
 This policy may evolve as AI tools mature and the legal situation is
-clarifed. In the meanwhile, requests for exceptions to this policy will be
-evaluated by the QEMU project on a case by case basis. To be granted an
-exception, a contributor will need to demonstrate clarity of the license and
-copyright status for the tool's output in relation to its training model and
-code, to the satisfaction of the project maintainers.
+clarified.
+
+Exceptions
+^^^^^^^^^^
+
+The QEMU project welcomes discussion of exceptions to this policy, or of
+more general revisions to it.  To propose an exception, contact the
+qemu-devel mailing list with details of the tool, model and usage scenario,
+explaining how it benefits QEMU while still mitigating the legal risks to
+the project.  After discussion, any agreed exception will be listed below.
-- 
2.51.0