Matt Jolly <kan...@gentoo.org> writes:

>> But where do we draw the line? Are translation tools like DeepL
>> allowed? I don't see much of a copyright issue for these.
>
> I'd also like to jump in and play devil's advocate. There's a fair
> chance that this is because I just got back from a
> supercomputing/research conf where LLMs were the hot topic in every keynote.
>
> As mentioned by Sam, this RFC is performative. Any users that are going
> to abuse LLMs are going to do it _anyway_, regardless of the rules. We
> already rely on common sense to filter these out; we're always going to
> have BS/spam PRs and bugs - I don't really think that content
> generated by an LLM is any worse.
>
> This doesn't mean that I think we should blanket allow poor quality LLM
> contributions. It's especially important that we take into account the
> potential for bias, factual errors, and outright plagiarism when these
> tools are used incorrectly.  We already have methods for weeding out low
> quality contributions and bad faith contributors - let's trust in these
> and see what we can do to strengthen these tools and processes.
>
> A bit closer to home for me, what about using an LLM as an assistive
> technology / to reduce boilerplate? I'm recovering from RSI - I don't
> know when (if...) I'll be able to type like I used to again. If a model
> is able to infer some mostly salvageable boilerplate from its context
> window, I'm going to use it and spend the effort I would have spent
> writing it on fixing something else; an outright ban on LLM use will
> reduce my _ability_ to contribute to the project.

Another person approached me after this RFC and asked whether tooling
restricted to the current repo would be okay. For me, that'd be mostly
acceptable, given it won't make suggestions based on copyrighted code.

I also don't have a problem with LLMs being used to help refine commit
messages as long as someone is being sensible about it (e.g. if, as in
your situation, you know what you want to say but you can't type much).

I don't know how to phrase a policy off the top of my head which allows
those two things but not the rest.

>
> What about using an LLM for code documentation? Some models can do a
> passable job of writing decent-quality function documentation and, in
> production, I _have_ caught real issues in my logic this way. Why should
> I type that out (and write what I think the code does rather than what
> it actually does) if an LLM can get 'close enough' and I only need to do
> light editing?

I suppose in that sense, it's the same as blindly following any linting
tool or warning without understanding what it's flagging and whether
it's correct.

> [...]
> As a final not-so-hypothetical, what about an LLM trained on Gentoo docs
> and repos, or more likely trained on exclusively open-source
> contributions and fine-tuned on Gentoo specifics? I'm in the process of
> spinning up several models at work to get a handle on the tech / turn
> more electricity into heat - this is a real possibility (if I can ever
> find the time).

I think that'd be interesting. It also makes a good rhetorical point
wrt the policy being a bit too blanket here.

See https://www.softwareheritage.org/2023/10/19/swh-statement-on-llm-for-code/
too.

>
> The cat is out of the bag when it comes to LLMs. In my real-world job I
> talk to scientists and engineers using these things (for their
> strengths) to quickly iterate on designs, to summarise experimental
> results, and even to generate testable hypotheses. We're only going to
> see increasing use of this technology going forward.
>
> TL;DR: I think this is a bad idea. We already have effective mechanisms
> for dealing with spam and bad faith contributions. Banning LLM use by
> Gentoo contributors at this point is just throwing the baby out with the
> bathwater.

The problem is that in FOSS, a lot of people are getting flooded with AI
spam and therefore have little regard for any possibly-good parts of it.

I count myself as part of that group - it's very much sludge and I feel
tired just seeing it talked about at the moment.

Is that super rational? No, but we're also volunteers and it's not
unreasonable for said volunteers to then say "well I don't want any more
of that".

I think this colours a lot of the responses here, and it doesn't
invalidate them, but it also explains why nobody is really interested
in being open to this for now. Who can blame them (me included)?

>
> As an alternative, I'd be very happy with some guidelines for the use of LLMs
> and other assistive technologies like "Don't use LLM code snippets
> unless you understand them", "Don't blindly copy and paste LLM output",
> or, my personal favourite, "Don't be a jerk to our poor bug wranglers".
>
> A blanket "No completely AI/LLM generated works" might be fine, too.
>
> Let's see how the legal issues shake out before we start pre-emptively
> banning useful tools. There's a lot of ongoing action in this space - at
> the very least I'd like to see some thorough discussion of the legal
> issues separately if we're making a case for banning an entire class of
> technology.

I'm sympathetic to the arguments you've made here and I don't want to
act like this sinks your whole argument (it doesn't), but this is
typically not how legal issues are approached. People act conservatively
if there's risk to them, not the other way around ;)

> [...]

Thanks for making me think a bit more about it and consider some use
cases I hadn't really thought about before.

I still don't really want ebuilds generated by LLMs, but I could live
with:
a) LLMs being used to refine commit messages;
b) LLMs being used if restricted to suggestions from a FOSS-licensed
codebase.

> Matt
>

thanks,
sam
