On Sun, 9 Jul 2023 at 02:11, Cameron Simpson <c...@cskk.id.au> wrote:
>
> On 04Jul2023 17:21, Christopher Barker <python...@gmail.com> wrote:
> >3) A rating system built into PyPi -- This could be a combination of
> >two
> >things:
> >  A - Automated analysis -- download stats, dependency stats, release
> >frequency, etc, etc, etc.
> >  B - Community ratings -- upvotes, stars, whatever.
> >
> >If done well, that could be very useful -- search on PyPi listed by rating.
> >However -- "done well" is a huge challenge -- I don't think there's a way
> >to do the automated system right, and community scoring can be abused
> >pretty easily. But maybe folks smarter than me could make it work with one
> >or both of these approaches.
>
> I have always thought that any community scoring system should allow
> other users to mark up/down other reviewers w.r.t. the scores presented.
> That markup should only affect the scoring as presented to the person
> doing the markup, like a personal killfile. The idea is that you can
> have the ratings you see affected by notions that "I trust the opinions
> of user A" or "I find user B's opinion criteria not useful for my
> criteria".
>
> Of course the "ignore user B" has some of the same downsides as trying to
> individually ignore certain spam sources: good for a single "bad" actor
> (by my personal criteria) to ignore their (apparent) gaming of the
> ratings but not good for a swarm of robots.

Hi Cameron,

That sounds to me like the basis of a distributed trust network, and
could be useful.
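
As a rough sketch of the kind of thing I mean (purely illustrative --
none of this reflects any actual PyPI feature, and the reviewer names
and weights are invented), the rating each person sees could be the
average of the reviews, weighted by how much that person trusts each
reviewer:

    # Hypothetical sketch of personalized, trust-weighted ratings.
    # Nothing here reflects a real PyPI API; names and weights are invented.
    from collections import defaultdict

    # My personal view of other reviewers: 1.0 = trusted, 0.0 = ignored.
    my_trust = defaultdict(lambda: 0.5)  # unknown reviewers get a neutral weight
    my_trust["user_a"] = 1.0             # "I trust the opinions of user A"
    my_trust["user_b"] = 0.0             # "user B's criteria aren't useful to me"

    # Reviews of one package, as (reviewer, score out of 5) pairs.
    reviews = [("user_a", 5), ("user_b", 1), ("user_c", 4)]

    def personalized_rating(reviews, trust):
        """Average the scores, weighted by how much *I* trust each reviewer."""
        total_weight = sum(trust[who] for who, _ in reviews)
        if total_weight == 0:
            return None  # nobody I trust has reviewed this package
        return sum(trust[who] * score for who, score in reviews) / total_weight

    print(personalized_rating(reviews, my_trust))  # ~4.67, vs. the raw mean ~3.33

A swarm of robots would still be a problem, as you say, unless unknown
reviewers defaulted to a weight of zero rather than a neutral one.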

Some thoughts from experience working with Python (and other
ecosystems') packages: after getting to know the usernames of the
developers and publishers behind packages, I think that much of that
trust can be built by individuals without any technological
assistance -- that is to say, people come to recognize which authors
they trust and which they don't.

A related challenge, though, is how to provide reassurance that each
author's identity remains the same between modifications to
packages/code.  FWIW, I don't really like many of the common
multi-factor authentication systems in use today, because I don't like
seeing barriers to expression emerge, even when the intent is
benevolent.  I'm not sure I yet have better alternatives to suggest.

Your message also helped me clarify why I don't like embedding any
review information at all within packaging ecosystems -- regardless of
whether transitive trust is additionally available in the form of
reviews.

The reason is that I would prefer to see end-to-end transparent
supply-chain integrity for almost all, if not all, software products.
I'm typing this in a Gmail web interface, but I do not believe that
many people have access to all of the source code for the version I'm
using.  If everyone did, and if that source included strong dependency
hashes identifying the dependencies used -- similar to the way that
pip-tools[1] can write a persistent record of a dependency set,
allowing the same dependencies to be inspected and installed by others
-- then people could begin to build their own mental models of which
packages, and which specific versions of those packages, are worth
trusting.
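
To make that concrete: with pip-tools installed, something along these
lines generates the hashes and then enforces them at install time (the
package name and hashes below are only placeholders):

    $ pip-compile --generate-hashes requirements.in
    # requirements.txt then contains pinned entries roughly like:
    #   requests==2.31.0 \
    #       --hash=sha256:... \
    #       --hash=sha256:...
    $ pip install --require-hashes -r requirements.txt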

In other words: if all of the software, and the bill of materials for
it, became open and published, and could be constructed
reproducibly[2], then social trust would emerge without a requirement
for reviews.  That would not be mutually exclusive with the presence
of reviews -- verbal, written, or otherwise -- elsewhere.
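
As a small sketch of what "constructed reproducibly" buys you (the
file names here are hypothetical): if two unrelated parties build the
same artifact from the same pinned inputs, anyone can compare the
results byte-for-byte:

    # Sketch: compare two independently built artifacts.
    import hashlib

    def sha256_of(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    mine = sha256_of("dist/example-1.0-py3-none-any.whl")          # my build
    theirs = sha256_of("their-build/example-1.0-py3-none-any.whl") # someone else's
    print("reproducible" if mine == theirs else "builds differ")

If the digests match, I don't have to trust the other builder's
infrastructure at all; the artifact speaks for itself.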

Thanks,
James

[1] - https://github.com/jazzband/pip-tools/

[2] - https://www.reproducible-builds.org/
