Hi,

> Compare with the shitstorm at:
> https://github.com/pkgxdev/pantry/issues/5358

Thank you for this, it made my day.

Though I'm just a proxy maintainer for now, I also support this
initiative; there should be some guard rails set up around LLM usage.

> 1. Copyright concerns.  At this point, the copyright situation around
> generated content is still unclear.  What's pretty clear is that pretty
> much all LLMs are trained on huge corpora of copyrighted material, and
> all fancy "AI" companies don't give shit about copyright violations.
> In particular, there's a good risk that these tools would yield stuff we
> can't legally use.

IANAL, but IMHO if we stop respecting copyright law, even if only
indirectly via LLMs, why should we expect others to respect our licenses?
It could be prudent to wait and see where this will land.

> 2. Quality concerns.  LLMs are really great at generating plausibly
> looking bullshit.  I suppose they can provide good assistance if you are
> careful enough, but we can't really rely on all our contributors being
> aware of the risks.

From my personal experience of using GitHub Copilot fine-tuned on a large
private code base, it works mostly fine as a smarter autocomplete for a
single line of code, but when it comes to multiple lines, even for filling
out boilerplate, it's at best a 'meh'. The problem is that while the output
looks okay-ish, it often has subtle mistakes or hallucinates random
additional stuff that is not relevant to the source file in question, so
one ends up having to read and analyze the entire output of the LLM to fix
problems with the code. I found that the mental and time overhead rarely
makes it worth it, especially when a template can do a better job (this
would be the case for ebuilds, for example).
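
To illustrate, the skeleton a template generates for a new ebuild looks
roughly like the following; the values below are just placeholders, only
the standard variables and header are real:

# Copyright 1999-2024 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2

EAPI=8

DESCRIPTION="Short one-line description of the package"
HOMEPAGE="https://example.org/"
SRC_URI="https://example.org/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64"

RDEPEND=""
DEPEND="${RDEPEND}"
BDEPEND=""

A maintainer only has to fill in the blanks, with no LLM output to
proofread.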

Since during reviews we are supposed to read the entire contribution
anyway, I'm not sure how much difference this makes. However, a developer
who trusts an LLM too much might end up outsourcing the checking of the
code to the reviewers, which means we would need to be extra vigilant and
could lead to reduced trust in contributions.

> 3. Ethical concerns.  As pointed out above, the "AI" corporations don't
> give shit about copyright, and don't give shit about people.  The AI
> bubble is causing huge energy waste.  It is giving a great excuse for
> layoffs and increasing exploitation of IT workers.  It is driving
> enshittification of the Internet, it is empowering all kinds of spam
> and scam.

I agree. I'm already tired of AI-generated blog spam and so forth; it's
such a waste of time and quite annoying. I'd rather not have that on our
wiki pages too. The purpose of documentation is to explain an area to
someone new to it, or to write down the unique quirks of a setup or a
system. Since LLMs cannot write new original things, only rehash
information they have seen, I'm not sure how they could be helpful for
this at all, to be honest.

Overall, my time is too valuable to sift through AI-generated BS when I'm
trying to solve a problem; I'd prefer we keep well-curated, high-quality
documentation where possible.

Zoltan
