Tiago Bortoletto Vaz <ti...@debian.org> writes:

> I personally agree with the author's rationale on the aspects pointed
> out (copyright, quality, and ethical ones).  But at this point I guess
> we might have more questions than answers, which is why I think it'd
> be helpful to have some input before suggesting any concrete
> proposals.  Perhaps the most important step now is to get an idea of
> how Debian folks actually feel about this matter, and how we feel
> about moving in a similar direction to what the Gentoo project did.

I'm dubious of the Gentoo approach because it is (as they admit)
unenforceable, which to me means that it's not a great policy.  A position
statement, maybe, but that's a different sort of thing.

I also agree in part with Ansgar: we don't make policies against what
tools people use locally for developing software.

I think the piece that has the most direct impact on Debian is if the
output from the AI software is found to be a copyright infringement and
therefore something that Debian does not have permission to redistribute
or that violates the DFSG.  But we're going to be facing that problem with
upstreams as well, so the scope of that problem goes far beyond the
question of direct contributions to Debian, and I don't think direct
contributions to Debian will be the most significant part of that problem.

This is going to be a tricky and unsettled problem for some time, since
it's both legal (in multiple jurisdictions) and moral, and it's quite
possible that the legal judgments will not align with moral judgments.
(Around copyright, this is often the case.)  I'm dubious of our ability to
get ahead of the legal process on this, given that it's unlikely that
we'll even be able to *detect* if upstreams are using AI.  I think this is
a place where it's better to plan on being reactive than to attempt to be
proactive.  If we get credible reports that software in Debian is not
redistributable under the terms of the DFSG, we should deal with that like
we would with any other DFSG violation.  That may involve making judgment
calls about the legality of AI-generated content, but hopefully this will
have settled out a bit in broader society before we're forced to make a
decision on a specific case.

I also doubt that there is much alignment within Debian about the morality
of copyright infringement in general.  We're a big-tent project from that
perspective.  Our project includes people who believe all software
copyright is an ill-advised legal construction that limits people's
freedom, and people who believe strongly in moral rights expressed through
copyright and in the right of an author to control how their work is used.
We could try to reach some sort of project consensus on the moral issues
here, but I'm a bit dubious we would be successful.

At the moment, my biggest concern about the practical impact of AI is that
most of the output is low-quality garbage and, because it's now automated,
the volume of that low-quality garbage can be quite high.  (I am
repeatedly assured by AI advocates that this will improve rapidly.  I
suppose we will see.  So far, the evidence that I've seen has just led me
to question the standards and taste of AI advocates.)  But I don't think
dealing with this requires any new *policies*.  I think it's a fairly
obvious point of Debian collaboration that no one should deluge their
fellow project members in low-quality garbage, and if that starts
happening, I think we have adequate mechanisms to complain and ask that it
stop without making new policy.

About the only statement that I've wanted to make so far is to say that
anyone relying on AI to summarize important project resources like Debian
Policy or the Developers Guide or whatnot is taking full responsibility
for any resulting failures.  If you ask an AI to read Policy for you and
it spits out nonsense or lies, this is not something the Policy Editors
have any time or bandwidth to deal with.

-- 
Russ Allbery (r...@debian.org)              <https://www.eyrie.org/~eagle/>