On Thu, May 16, 2024 at 06:34:13PM +0100, Peter Maydell wrote:
> On Thu, 16 May 2024 at 18:20, Michael S. Tsirkin <m...@redhat.com> wrote:
> >
> > On Thu, May 16, 2024 at 05:22:27PM +0100, Daniel P. Berrangé wrote:
> > > AFAICT, in its current state of (im)maturity, the question of licensing
> > > of AI code generator output does not have a broadly accepted / settled
> > > legal position. There is an inherent bias/self-interest among the vendors
> > > promoting their usage, who tend to minimize/dismiss the legal questions.
> > > From my POV, this puts such tools in a position of elevated legal risk.
> > >
> > > Given the fuzziness over the legal position of generated code from
> > > such tools, I don't consider it credible (today) for a contributor
> > > to assert compliance with the DCO terms (b) or (c) (which is a stated
> > > pre-requisite for QEMU accepting patches) when a patch includes (or is
> > > derived from) AI generated code.
> > >
> > > By implication, I think that QEMU must (for now) explicitly decline
> > > to (knowingly) accept AI generated code.
> > >
> > > Perhaps a few years down the line the legal uncertainty will have
> > > reduced and we can re-evaluate this policy.
> 
> > At this juncture, the code generated by these tools is of such
> > quality that I really wouldn't expect it to pass even cursory
> > code review.
> 
> I disagree; I think that in at least some cases they can
> produce code that would pass our quality bar, especially with
> human supervision and editing after the fact. If the problem
> was merely "LLMs tend to produce lousy output" then we wouldn't
> need to write anything new -- we already have a process for
> dealing with bad patches, which is to say we do code review and
> suggest changes or simply reject the patches. What we *don't* have
> any process to handle is the legal uncertainties that Dan outlines
> above.
> 
> -- PMM


Maybe I'm bad at prompting ;)

-- 
MST

