Re: [gentoo-dev] Last rites: app-admin/salt, dev-python/pytest-salt-factories, dev-python/boto

2024-02-28 Thread Jonas Stein

Hi Patrick,

The salt ebuild has been refactored to remove the tests and modules that 
require dev-python/boto, and the mask has been removed.


Salt has a lot of users, and it would be doing them a disservice to 
remove it from the tree.


Salt is not trivial to distribute, because upstream wants us to run
curl -o bootstrap-salt.sh -L https://bootstrap.saltproject.io

Thank you for saving the salt ebuild.
It really has a lot of users.

--
Best,
Jonas




Re: [gentoo-dev] RFC: banning "AI"-backed (LLM/GPT/whatever) contributions to Gentoo

2024-02-28 Thread Eli Schwartz
On 2/28/24 6:06 AM, Matt Jolly wrote:
> 
>> But where do we draw the line? Are translation tools like DeepL
>> allowed? I don't see much of a copyright issue for these.
> 
> I'd also like to jump in and play devil's advocate. There's a fair
> chance that this is because I just got back from a
> supercomputing/research conf where LLMs were the hot topic in every
> keynote.
> 
> As mentioned by Sam, this RFC is performative. Any users that are going
> to abuse LLMs are going to do it _anyway_, regardless of the rules. We
> already rely on common sense to filter these out; we're always going to
> have BS/Spam PRs and bugs - I don't really think that content
> generated by an LLM is any worse.
> 
> This doesn't mean that I think we should blanket allow poor quality LLM
> contributions. It's especially important that we take into account the
> potential for bias, factual errors, and outright plagiarism when these
> tools are used incorrectly.  We already have methods for weeding out low
> quality contributions and bad faith contributors - let's trust in these
> and see what we can do to strengthen these tools and processes.


Why is this an argument *against* performative statement of intent?

There are too many ways for bad faith contributors to maliciously engage
with the community, and no one is proposing a need to lay down rules
that forbid such people.

It is meaningful on its own to specify good faith rules that people
should abide by in order to produce a smoother experience. And telling
people that they are not supposed to do XXX is a good way to reduce the
amount of low quality contributions that Devs need to sift through...


> A bit closer to home for me, what about using an LLM as an assistive
> technology / to reduce boilerplate? I'm recovering from RSI - I don't
> know when (if...) I'll be able to type like I used to again. If a model
> is able to infer some mostly salvageable boilerplate from its context
> window I'm going to use it and spend the effort I would writing that to
> fix something else; an outright ban on LLM use will reduce my _ability_
> to contribute to the project.


So by this appeal to emotion, you can claim anything is assistive
technology and therefore should be allowed because it's discriminatory
against the disabled if you don't allow it?

Is there some special attribute of disabled persons that means they are
exempted from copyright law?

What counts as assistive technology? Is it any technology that disabled
persons use, or technology designed to bridge the gap for the disabled?
If a disabled person uses vim because of its shortcuts, does that mean
vim is "assistive technology" because someone used it to "assist" them?

...

I somehow feel like I maybe heard about assistive technology existing
that assisted disabled persons in the process of dictating their
thoughts while avoiding physically stressful typing activities.

It didn't involve having the "assistive technology" provide both the
content and the typing, as that's not really *assisting*.


> In line with the above, if the concern is about code quality / potential
> for plagiarised code, what about indirect use of LLMs? Imagine a
> hypothetical situation where a contributor asks an LLM to summarise a
> topic and uses that knowledge to implement a feature. Is this now
> tainted / forbidden knowledge according to the Gentoo project?


Since your imagined hypothetical involves the use of copyrighted works
by and from a person, which cannot be said to be derivative copyrighted
works of the training data from the LLM -- for the same reason that
reading an article in a handwritten, copyrighted journal about "a topic"
to learn about that topic and then writing software based on the ideas
from the article is not a *derivative copyrighted work* -- the answer is
extremely trivially no?

The copyright issue with LLMs isn't that they ingest blogposts about how
cool ebuilds are and use that knowledge to write ebuilds. The copyright
issue with LLMs is that they ingest github repos full of non-Gentoo
ebuilds copyrighted under who knows what license and then regurgitate
those ebuilds. That is a *derivative work*.

Prose summaries of generic topics are a good way to break the link when
it comes to derived works; that has nothing to do with LLMs specifically.


Nonetheless, any credible form of scholarship is going to demand that
participants be well versed in where the line is between saying
something in your own words with citation, and plagiarism.



> As a final not-so-hypothetical, what about an LLM trained on Gentoo docs
> and repos, or more likely trained on exclusively open-source
> contributions and fine-tuned on Gentoo specifics? I'm in the process of
> spinning up several models at work to get a handle on the tech / turn
> more electricity into heat - this is a real possibility (if I can ever
> find the time).


If you can state for a fact that you have done so, then clearly it's not
a copyright violation.

"exclusively 

Re: [gentoo-dev] RFC: banning "AI"-backed (LLM/GPT/whatever) contributions to Gentoo

2024-02-28 Thread Rich Freeman
On Wed, Feb 28, 2024 at 1:50 PM Arthur Zamarin  wrote:
>
> I know that GitHub Copilot can be limited to licenses, and even to just
> the current repository. Even so, I'm not sure that the copyright can
> be attributed to "me" and not the "AI" - so still gray area.

So, AI copyright is a bit of a poorly defined area simply due to a
lack of case law.  I'm not all that confident that courts won't make
an even bigger mess of it.

There are half a dozen different directions I think a court might rule
on the matter of authorship and derived works, but I think it is VERY
unlikely that a court will rule that the copyright will be attributed
to the AI itself, or that the AI itself ever was an author or held any
legal rights to the work at any point in time.  An AI is not a legal
entity. The company that provides the service, its
employees/developers, the end user, and the authors and copyright
holders of works used to train the AI are all entities a court is
likely to consider as having some kind of a role.

That said, we live in a world where it isn't even clear if APIs can be
copyrighted, though in practice enforcing such a copyright might be
impossible.  It could be a while before AI copyright concerns are
firmly settled.  When they are, I suspect it will be done in a way
that frustrates just about everybody on every side...

IMO the main risk to an organization (especially a transparent one
like ours) from AI code isn't even whether it is copyrightable or not,
but rather getting pulled into arguments and debates and possibly
litigation over what is likely to be boilerplate code that needs a lot
of cleanup anyway.  Even if you "win" in court or the court of public
opinion, the victory can be pyrrhic.

--
Rich



Re: [gentoo-dev] RFC: banning "AI"-backed (LLM/GPT/whatever) contributions to Gentoo

2024-02-28 Thread Arthur Zamarin
On 27/02/2024 16.45, Michał Górny wrote:
> Hello,
> 
> Given the recent spread of the "AI" bubble, I think we really need to
> look into formally addressing the related concerns.  In my opinion,
> at this point the only reasonable course of action would be to safely
> ban "AI"-backed contribution entirely.  In other words, explicitly
> forbid people from using ChatGPT, Bard, GitHub Copilot, and so on, to
> create ebuilds, code, documentation, messages, bug reports and so on for
> use in Gentoo.
> 
> Just to be clear, I'm talking about our "original" content.  We can't do
> much about upstream projects using it.

I support this motion.

> 
> Rationale:
> 
> 1. Copyright concerns.  At this point, the copyright situation around
> generated content is still unclear.  What's pretty clear is that pretty
> much all LLMs are trained on huge corpora of copyrighted material, and
> all fancy "AI" companies don't give shit about copyright violations.
> In particular, there's a good risk that these tools would yield stuff we
> can't legally use.

I know that GitHub Copilot can be limited to licenses, and even to just
the current repository. Even so, I'm not sure that the copyright can
be attributed to "me" and not the "AI" - so still gray area.

> 2. Quality concerns.  LLMs are really great at generating plausibly
> looking bullshit.  I suppose they can provide good assistance if you are
> careful enough, but we can't really rely on all our contributors being
> aware of the risks.

Let me tell a story. I was curious whether I could teach an LLM the
ebuild format, as a possible helper tool for devs/non-devs. My prompt
grew huge as I taught it all the details of ebuilds, where to find the
source code (eclasses), and so on. At one point it even managed to
output a close-enough Python distutils-r1 ebuild - the same level that
`vim dev-python/${PN}/${PN}-${PV}.ebuild` produces using the Gentoo
template. Yes, all that work resulted in no gain.

For every other ebuild type - cmake, meson, go, rust - I always got a
garbage ebuild. Yes, it generated a good DESCRIPTION and HOMEPAGE
(simple stuff to copy from upstream) and even around 60% accuracy for
LICENSE. But did you know we have an "intel80386" arch in KEYWORDS? That
we can RESTRICT="install"? That we can use "^cat-pkg/pkg-1" syntax in
deps? PATCHES with http URLs inside? And the list goes on. Sometimes it
was even funny.

So until a good prompt can be created for Gentoo - at which point we
*might* reopen the discussion - I strongly support banning AI-generated
ebuilds. Right now, good per-category templates, copying another ebuild
as a starting point, or even plain skel.ebuild: all three options give
much better results and waste far less developer time.
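For context, the baseline being compared against - the skeleton that
skel.ebuild or the vim template produces for a distutils-r1 package -
looks roughly like the sketch below (the metadata values are
placeholders, not a real package):

```bash
# Copyright 2024 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2

EAPI=8

# PEP 517 build backend and supported Python targets must be set
# before the eclasses are inherited.
DISTUTILS_USE_PEP517=setuptools
PYTHON_COMPAT=( python3_{10..12} )
inherit distutils-r1 pypi

DESCRIPTION="Placeholder one-line description"
HOMEPAGE="https://example.org/"

LICENSE="MIT"
SLOT="0"
KEYWORDS="~amd64"
```

This is roughly the level of output the LLM only reached after
extensive prompting, which is the "no gain" point above.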

> 3. Ethical concerns.  As pointed out above, the "AI" corporations don't
> give shit about copyright, and don't give shit about people.  The AI
> bubble is causing huge energy waste.  It is giving a great excuse for
> layoffs and increasing exploitation of IT workers.  It is driving
> enshittification of the Internet, it is empowering all kinds of spam
> and scam.
> 

Many companies that cite AI as the reason for layoffs are just
manufacturing a justification, out of bad will or ignorance. The company
I work at uses AI tools as a productivity boost, but at all levels of
management they know that AI can't replace a person - best case it
boosts someone by 5-10%. The real reason for the current layoffs is
budget tightening across the industry (just a normal cycle; it will get
better soon), and management prefers to lay off people other than
themselves. So yeah, sad world.

> 
> Gentoo has always stood out as something different, something that
> worked for people for whom mainstream distros were lacking.  I think
> adding "made by real people" to the list of our advantages would be
> a good thing — but we need to have policies in place, to make sure shit
> doesn't flow in.
> 
> Compare with the shitstorm at:
> https://github.com/pkgxdev/pantry/issues/5358
> 

A great read, and quite a WTF. This whole repo is just a cluster of AIs
competing against each other.

-- 
Arthur Zamarin
arthur...@gentoo.org
Gentoo Linux developer (Python, pkgcore stack, Arch Teams, GURU)



OpenPGP_signature.asc
Description: OpenPGP digital signature


Re: [gentoo-dev] RFC: banning "AI"-backed (LLM/GPT/whatever) contributions to Gentoo

2024-02-28 Thread Michał Górny
On Wed, 2024-02-28 at 11:08 +0100, Ulrich Mueller wrote:
> > > > > > On Wed, 28 Feb 2024, Michał Górny wrote:
> 
> > On Tue, 2024-02-27 at 21:05 -0600, Oskari Pirhonen wrote:
> > > What about cases where someone, say, doesn't have an excellent grasp of
> > > English and decides to use, for example, ChatGPT to aid in writing
> > > documentation/comments (not code) and puts a note somewhere explicitly
> > > mentioning what was AI-generated so that someone else can take a closer
> > > look?
> > > 
> > > I'd personally not be the biggest fan of this if it wasn't in something
> > > like a PR or ml post where it could be reviewed before being made final.
> > > But the most important part IMO would be being up-front about it.
> 
> > I'm afraid that wouldn't help much.  From my experiences, it would be
> > less effort for us to help writing it from scratch, than trying to
> untangle whatever verbose shit ChatGPT generates.  Especially since
> > a person with poor grasp of the language could have trouble telling
> > whether the generated text is actually meaningful.
> 
> But where do we draw the line? Are translation tools like DeepL allowed?
> I don't see much of a copyright issue for these.

I have a strong suspicion that these translation tools are trained
on copyrighted translations of books and other copyrighted material.

-- 
Best regards,
Michał Górny



signature.asc
Description: This is a digitally signed message part


Re: [gentoo-dev] 2024-02-26-debianutils-drops-installkernel-dep: add news item v2

2024-02-28 Thread Andrew Nowa Ammerlaan

On 27/02/2024 18:24, Hank Leininger wrote:

On 2024-02-27, andrewammerlaan wrote:


Until recently, sys-apps/debianutils was in turn pulled in by
app-misc/ca-certificates, an essential package installed on many
systems. This is no longer the case.[2]. As a result many users may find
that sys-apps/debianutils and therefore sys-kernel/installkernel are no
longer part of the dependency graph and will therefore be cleaned up by
"emerge --depclean".


Sorry for speaking up late: I (mis)read the second sentence differently
from others in this thread, apparently.

"This is no longer the case." might apply to the first part of the
previous sentence, "was in turn pulled in by".

Or it might apply to the second part, "an essential package installed on
many systems."

I think what's meant is the former, it is no longer pulled in. But
someone reading this cold could be forgiven for reading that as
"ca-certificates is no longer an essential package".

Unfortunately my recommendation would be to restore the mention of a
dependency, in some form or fashion, which seems to be something that
was removed due to earlier feedback in this thread.

Maybe:

Until recently, sys-apps/debianutils was in turn pulled in by
app-misc/ca-certificates, an essential package installed on many
systems. That package no longer depends on sys-apps/debianutils.  As a
result many users may find that sys-apps/debianutils and therefore
sys-kernel/installkernel are no longer part of the dependency graph and
will therefore be cleaned up by "emerge --depclean".


I rewrote this paragraph like this:

Until recently, sys-apps/debianutils was in turn pulled in by
app-misc/ca-certificates, an essential package installed on many
systems. However, this dependency of app-misc/ca-certificates on
sys-apps/debianutils was removed[2]. As a result many users may find
that sys-apps/debianutils and therefore sys-kernel/installkernel are no
longer part of the dependency graph and will therefore be cleaned up by
"emerge --depclean".

I think this way it should be very clear what has changed to cause the 
problem.
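For users who do want to keep the packages, the standard remedy (a
generic Portage command, not part of the news item text above) is to add
them to the world set before depcleaning:

```shell
# Mark sys-kernel/installkernel as explicitly wanted so that
# "emerge --depclean" no longer treats it as an orphaned dependency:
emerge --ask --noreplace sys-kernel/installkernel
```

--noreplace records the atom in the world file without rebuilding the
already-installed package.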


Best regards,
Andrew



Re: [gentoo-dev] RFC: banning "AI"-backed (LLM/GPT/whatever) contributions to Gentoo

2024-02-28 Thread Matt Jolly


But where do we draw the line? Are translation tools like DeepL 
allowed? I don't see much of a copyright issue for these.


I'd also like to jump in and play devil's advocate. There's a fair
chance that this is because I just got back from a
supercomputing/research conf where LLMs were the hot topic in every keynote.

As mentioned by Sam, this RFC is performative. Any users that are going
to abuse LLMs are going to do it _anyway_, regardless of the rules. We
already rely on common sense to filter these out; we're always going to
have BS/Spam PRs and bugs - I don't really think that content generated
by an LLM is any worse.

This doesn't mean that I think we should blanket allow poor quality LLM
contributions. It's especially important that we take into account the
potential for bias, factual errors, and outright plagiarism when these
tools are used incorrectly.  We already have methods for weeding out low
quality contributions and bad faith contributors - let's trust in these
and see what we can do to strengthen these tools and processes.

A bit closer to home for me, what about using an LLM as an assistive
technology / to reduce boilerplate? I'm recovering from RSI - I don't
know when (if...) I'll be able to type like I used to again. If a model
is able to infer some mostly salvageable boilerplate from its context
window I'm going to use it and spend the effort I would writing that to
fix something else; an outright ban on LLM use will reduce my _ability_
to contribute to the project.

What about using an LLM for code documentation? Some models can do a
passable job of writing decent quality function documentation and, in
production, I _have_ caught real issues in my logic this way. Why should
I type that out (and write what I think the code does rather than what
it actually does) if an LLM can get 'close enough' and I only need to do
light editing?

In line with the above, if the concern is about code quality / potential
for plagiarised code, what about indirect use of LLMs? Imagine a
hypothetical situation where a contributor asks an LLM to summarise a
topic and uses that knowledge to implement a feature. Is this now
tainted / forbidden knowledge according to the Gentoo project?

As a final not-so-hypothetical, what about an LLM trained on Gentoo docs
and repos, or more likely trained on exclusively open-source
contributions and fine-tuned on Gentoo specifics? I'm in the process of
spinning up several models at work to get a handle on the tech / turn
more electricity into heat - this is a real possibility (if I can ever
find the time).

The cat is out of the bag when it comes to LLMs. In my real-world job I
talk to scientists and engineers using these things (for their
strengths) to quickly iterate on designs, to summarise experimental
results, and even to generate testable hypotheses. We're only going to
see increasing use of this technology going forward.

TL;DR: I think this is a bad idea. We already have effective mechanisms
for dealing with spam and bad faith contributions. Banning LLM use by
Gentoo contributors at this point is just throwing the baby out with the
bathwater.

As an alternative I'd be very happy with some guidelines for the use of
LLMs and other assistive technologies, like "Don't use LLM code snippets
unless you understand them", "Don't blindly copy and paste LLM output",
or, my personal favourite, "Don't be a jerk to our poor bug wranglers".

A blanket "No completely AI/LLM generated works" might be fine, too.

Let's see how the legal issues shake out before we start pre-emptively
banning useful tools. There's a lot of ongoing action in this space - at
the very least I'd like to see some thorough discussion of the legal
issues separately if we're making a case for banning an entire class of
technology.

A Gentoo LLM project formed of experts who could actually provide good
advice / some actual guidelines for LLM use within the project (and
engaging some real-world legal advice) might be a good starting point.
Are there any volunteers in the audience?

Thanks for listening to my TED talk,

Matt


OpenPGP_0x50EC548D52E051C0.asc
Description: OpenPGP public key


OpenPGP_signature.asc
Description: OpenPGP digital signature


[gentoo-dev] [PATCH v2] java-ant-2.eclass: change JAVA_ANT_E_DEPEND to dev-java/ant

2024-02-28 Thread Volkmar W. Pogatzki
Also removes the unused eclass variable JAVA_ANT_DISABLE_ANT_CORE_DEP,
which became obsolete with the removal of the old
dev-java/ant-core-1.10.9-r5.

Signed-off-by: Volkmar W. Pogatzki 
---
 eclass/java-ant-2.eclass | 14 ++
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/eclass/java-ant-2.eclass b/eclass/java-ant-2.eclass
index 35fe84997563..1eccead3067f 100644
--- a/eclass/java-ant-2.eclass
+++ b/eclass/java-ant-2.eclass
@@ -1,4 +1,4 @@
-# Copyright 2004-2023 Gentoo Authors
+# Copyright 2004-2024 Gentoo Authors
 # Distributed under the terms of the GNU General Public License v2

 # @ECLASS: java-ant-2.eclass
@@ -48,14 +48,12 @@ inherit java-utils-2 multilib
 #The implementation of dependencies is handled by java-utils-2.eclass
 #WANT_ANT_TASKS

-# @ECLASS_VARIABLE: JAVA_ANT_DISABLE_ANT_CORE_DEP
-# @DEFAULT_UNSET
+# @VARIABLE: JAVA_ANT_E_DEPEND
+# @INTERNAL
 # @DESCRIPTION:
-# Setting this variable non-empty before inheriting java-ant-2 disables adding
-# dev-java/ant-core into DEPEND.
-if [[ -z "${JAVA_ANT_DISABLE_ANT_CORE_DEP}" ]]; then
-   JAVA_ANT_E_DEPEND+=" >=dev-java/ant-core-1.8.2:0"
-fi
+# Convenience variable adding packages to DEPEND so they need not be added
+# in the ebuild.
+JAVA_ANT_E_DEPEND+=" >=dev-java/ant-1.10.14-r2:0"

 # add ant tasks specified in WANT_ANT_TASKS to DEPEND
 ANT_TASKS_DEPEND="$(java-pkg_ant-tasks-depend)"
--
2.41.0




Re: [gentoo-dev] RFC: banning "AI"-backed (LLM/GPT/whatever) contributions to Gentoo

2024-02-28 Thread David Seifert
On Tue, 2024-02-27 at 15:45 +0100, Michał Górny wrote:
> Hello,
> 
> Given the recent spread of the "AI" bubble, I think we really need to
> look into formally addressing the related concerns.  In my opinion,
> at this point the only reasonable course of action would be to safely
> ban "AI"-backed contribution entirely.  In other words, explicitly
> forbid people from using ChatGPT, Bard, GitHub Copilot, and so on, to
> create ebuilds, code, documentation, messages, bug reports and so on
> for
> use in Gentoo.
> 
> Just to be clear, I'm talking about our "original" content.  We can't
> do
> much about upstream projects using it.
> 
> 
> Rationale:
> 
> 1. Copyright concerns.  At this point, the copyright situation around
> generated content is still unclear.  What's pretty clear is that
> pretty
> much all LLMs are trained on huge corpora of copyrighted material, and
> all fancy "AI" companies don't give shit about copyright violations.
> In particular, there's a good risk that these tools would yield stuff
> we
> can't legally use.
> 
> 2. Quality concerns.  LLMs are really great at generating plausibly
> looking bullshit.  I suppose they can provide good assistance if you
> are
> careful enough, but we can't really rely on all our contributors being
> aware of the risks.
> 
> 3. Ethical concerns.  As pointed out above, the "AI" corporations
> don't
> give shit about copyright, and don't give shit about people.  The AI
> bubble is causing huge energy waste.  It is giving a great excuse for
> layoffs and increasing exploitation of IT workers.  It is driving
> enshittification of the Internet, it is empowering all kinds of spam
> and scam.
> 
> 
> Gentoo has always stood out as something different, something that
> worked for people for whom mainstream distros were lacking.  I think
> adding "made by real people" to the list of our advantages would be
> a good thing — but we need to have policies in place, to make sure
> shit
> doesn't flow in.
> 
> Compare with the shitstorm at:
> https://github.com/pkgxdev/pantry/issues/5358
> 

+1

Can we get this added to the agenda for the next council meeting?



Re: [gentoo-dev] RFC: banning "AI"-backed (LLM/GPT/whatever) contributions to Gentoo

2024-02-28 Thread Ulrich Mueller
> On Wed, 28 Feb 2024, Michał Górny wrote:

> On Tue, 2024-02-27 at 21:05 -0600, Oskari Pirhonen wrote:
>> What about cases where someone, say, doesn't have an excellent grasp of
>> English and decides to use, for example, ChatGPT to aid in writing
>> documentation/comments (not code) and puts a note somewhere explicitly
>> mentioning what was AI-generated so that someone else can take a closer
>> look?
>> 
>> I'd personally not be the biggest fan of this if it wasn't in something
>> like a PR or ml post where it could be reviewed before being made final.
>> But the most important part IMO would be being up-front about it.

> I'm afraid that wouldn't help much.  From my experiences, it would be
> less effort for us to help writing it from scratch, than trying to
> untangle whatever verbose shit ChatGPT generates.  Especially since
> a person with poor grasp of the language could have trouble telling
> whether the generated text is actually meaningful.

But where do we draw the line? Are translation tools like DeepL [1]
allowed? I don't see much of a copyright issue for these.

Ulrich

[1] https://www.deepl.com/translator


signature.asc
Description: PGP signature