At 11:33 AM 4/6/2011 -0400, Brian Jones wrote:
I think if PyPI had a ratings system that more closely matched some kind of (perhaps updated) consensus on what would be "useful", this conversation could focus more on what that looks like and who might implement it, which would be more constructive IMHO. Attempting to completely eradicate a feature by trying to collect anecdotal input doesn't seem like the way to go.

Why don't we start a new thread about what a good ratings system looks like and try to get actual consensus from it so real work can get done?

Better still - let's discuss the actual use cases for a rating system, before trying to design one.

So far, the people who've stepped up here say they want to warn people about "bad or broken" packages, or ones that are unmaintained, or where the author is unreachable.

This sounds less like a use case for a rating/comment system than like one for a reporting mechanism for unreachable authors, or maybe something else entirely.

Side note: the ratings system's way of summarizing things doesn't make much sense either; setuptools currently rates around 2.36, with 7 people voting 0 or 1, and 7 voting 4 or 5 -- this means that the summary is equally objectionable to *both groups*... too high for the people voting low, too low for the people voting high! So, as Jacob's been saying, the number seems pretty meaningless.

(This is a general problem with ratings of controversial topics, but it's especially likely given that currently, only super-exposed *and* controversial packages seem to *get* reviews at all. None of my other packages has received a single rating, AFAICT, despite many having download counts in the tens of thousands.)
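To make the arithmetic concrete, here's a quick sketch of how a bimodal vote distribution produces a summary score that nobody actually voted for. The individual vote values below are hypothetical, chosen only to reproduce the numbers quoted above (7 votes of 0 or 1, 7 votes of 4 or 5, average around 2.36):

```python
# Hypothetical votes matching the setuptools numbers quoted above.
low_votes = [0, 0, 1, 1, 1, 1, 1]   # the "broken, avoid it" camp
high_votes = [4, 4, 4, 4, 4, 4, 4]  # the "works great for me" camp
votes = low_votes + high_votes

average = sum(votes) / len(votes)
print(round(average, 2))  # 2.36 -- a score none of the 14 voters chose
```

The mean lands squarely in the gap between the two camps, which is exactly why it's "equally objectionable to both groups."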


I'd be happy if someone came up with a system that was useful enough to justify its existence. Maybe there are just particular features of the existing system that make it hard or unintuitive for end users to use in a meaningful way, which then makes any resulting 'rating' useless to anyone else who happens by.

The primary problem is that it's set up as an essentially moderator-free forum, so comment quality is low. People ignore the bug-reporting links and report bugs in the comments anyway, and there's nothing the package author can do about it.

One then ends up with a public bug-solving thread on the package... which then communicates to visitors that this is in fact the way to get a problem solved, thereby encouraging more of the same.

In short, the social design of the feature is poor - a bad attempt at a technical solution to a fundamentally social problem. Its only saving grace is that it is *also* difficult to use (by requiring a PyPI account), so it doesn't get used very much. ;-)

Note also Laura Creighton's comments regarding people wishing for ponies... the people who want comments on PyPI are essentially two groups:

1. People who want to leave negative comments
2. People who want to read other people's comments

I haven't yet seen anybody say, "I want the comment feature because I want to post my in-depth evaluations and positive experiences"... which should be a good indicator of what sort of comments we should expect to see on packages.

Essentially, this is because the people who actually care enough to post an informative review will do so on their personal blogs.

My guess is that you'd actually have a useful review/rating system if PyPI ratings or reviews *required* a link to a blog posting, had to be a summary or excerpt of that posting, were approved by a human moderator, and could be reported by anyone (including the package author) when the link breaks (thus getting the rating/review removed).
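The lifecycle being proposed is simple enough to sketch in a few lines. Everything here is hypothetical illustration; PyPI has no such API, and the names are made up:

```python
from dataclasses import dataclass

# Hypothetical states for the proposed moderated-review lifecycle.
PENDING, APPROVED, REMOVED = "pending", "approved", "removed"

@dataclass
class Review:
    package: str
    blog_url: str        # required: the review must live on the reviewer's blog
    excerpt: str         # a summary or excerpt of that posting
    state: str = PENDING

    def approve(self):
        # A human moderator checks the link and excerpt before publishing.
        self.state = APPROVED

    def report_broken_link(self):
        # Anyone, including the package author, can report a dead link,
        # which removes the review from the index.
        self.state = REMOVED

r = Review("example-pkg", "https://example.org/my-review", "Solid library.")
r.approve()
r.report_broken_link()
print(r.state)  # removed
```

The key design point is that the review's canonical home is the reviewer's own blog; PyPI only holds a moderated pointer to it, so removal is as cheap as noticing the pointer is dead.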

The people who want to post negative reviews could still write them, but they'd first have to do so on their own blog, where the review is more directly associated with their online persona, and subject to other feedback mechanisms (such as comments on the blog).

And it would discourage bug reports, because, well, who puts a bug report on their blog and expects to get an answer there?

The big downside of course is that somebody would have to run this. It couldn't just be an unattended, hands-free system.



Regardless, I suspect some of the vitriol toward the existing system is really about features of the system, and not about the existence of the system. Catalogs have ratings systems. I'd argue that a lot of sites that have ratings systems are, in fact, catalogs to one degree or another. Including Amazon.

The difference is that those systems have moderation mechanisms: filters, appointed moderators, and crowd-sourced voting.

And on Amazon, there are *still* people who post eBay-style feedback, "Shipped fast, would buy again" and other such crap on a product review, because they've completely missed the point of Amazon reviews.

Doing a quality rating system means there needs to be somebody who:

1) actually cares about the ongoing quality of the reviews, and
2) has the ability to *do* something about it (like trimming discussions)

(Oh... and on Amazon, manufacturers can post responses to reviews, but the reviewer doesn't get to respond to that, ad infinitum. It's just review+response, with an additional off-page comment system for further discussions... which usually aren't of much value as a "review" system, which is why they're off-page.)


Jacob, what would make you *want* those emails you're getting as a package owner? What would make users *want* to leave feedback that would be useful to maintainers & other users?

I think this is mistaken -- reviews are not author feedback. As a package author, I want people to report problems to wherever I've asked them to report problems: a bug tracker or mailing list. If they want to say something nice, well, why not on their blog?

_______________________________________________
Catalog-SIG mailing list
[email protected]
http://mail.python.org/mailman/listinfo/catalog-sig
