Hi, Apache Fory Community,

I would like to start a discussion about introducing an AI
contribution policy for Apache Fory.

Background
- Apache Fory is a performance-critical foundational serialization framework.
- We are receiving an increasing number of AI-assisted contributions.
- We need clear expectations to maintain code quality, review efficiency,
and legal/provenance compliance at the level the project requires.

Goals of the proposed policy
- Keep human accountability as the core rule (AI can assist, but
contributors own the result).
- Require careful self-review of AI-assisted code before submission.
- Require practical verification evidence for non-trivial changes
(tests/spec/perf evidence where applicable).
- Require licensing/provenance compliance aligned with ASF guidance.
- Reduce low-signal submissions and review overhead.

What this policy is NOT
- It is not a ban on AI tools.
- It does not require disclosing private prompts, model details, or
internal enterprise workflows.

Current draft and related changes
- AI policy draft: AI_CONTRIBUTION_POLICY.md
- PR template updates for author checklist: .github/pull_request_template.md
- Related PR: https://github.com/apache/fory/pull/3437
- ASF reference: https://www.apache.org/legal/generative-tooling.html

Questions for discussion
1. Is the proposed scope appropriate for Apache Fory?
2. Is the privacy-safe disclosure approach clear and sufficient?
3. Are the verification requirements (tests/spec/perf evidence)
balanced and practical?
4. Any concerns about legal/governance wording or enforcement language?
5. What changes are needed before we consider adoption?

Please share feedback, suggestions, and concerns.

Thanks,
Shawn Yang
