On Wednesday 06 May 2009 23:02:11 Evan Daniel wrote:
...
> I'll leave the precise descriptions of the two algorithms to those who
> are actually writing the code for now. (Though I have read the
> Advogato paper and feel I understand it fairly well -- it's rather
> dense, though, and I'd be happy to try to offer a clearer or more
> detailed explanation of the paper if that would be helpful.) However,
> one of the properties of the Advogato metric (which the WoT algorithm,
> AIUI, does not have) is worth discussing, as I think it is
> particularly relevant to issues around censorship that are frequently
> discussed wrt WoT and Freenet. Specifically, Advogato does not use
> negative trust ratings, whereas both WoT and FMS do.
>
> The concept of negative trust ratings has absolutely nothing to do
> with the arbitrary numbers one person assigns to another in their
> published trust list. Those can be on any scale you like, whether
> it's 0-100, 1-5, or -100 to +100. A system can have or not have
> negative trust properties on any of those scales. Instead, negative
> trust is a property based on how the trust rating computed for an
> identity behaves as other identities *change* their trust ratings.
> Let's suppose that Alice trusts Bob, and is trying to compute a trust
> rating for Carol (whom she does not have a direct rating for). Alice
> has trust ratings for people not named, some of whom have ratings for
> Carol published. If the trust computation is such that there exists a
> rating Bob can assign to Carol such that Alice's rating of Carol is
> worse than if Bob had not rated her at all, then the system exhibits
> negative trust behaviors.
>
> This is, broadly, equivalent to the ability to censor a poster of FMS
> or WoT by marking them untrusted. There has been much debate over the
> question of censoring posters never, only for spamming, for spamming
> plus certain objectionable speech, what should be objectionable,
> whether you should censor someone who publishes a trust list that
> censors non-spammers, etc. In my opinion, all of that discussion is
> very silly to be having in the first place, since the answer is so
> well documented: simply don't use a trust metric with negative trust
> behaviors!
>
> The problem of introductions, etc is not magically solved by the
> Advogato algorithm. However, I don't think it is made any harder by
> it. The dual benefits of provable spam resistance and lack of
> censorship are, in my opinion, rather compelling.
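To make the quoted definition concrete, here is a toy sketch in Python. Neither function is the real WoT or Advogato computation; both aggregation rules are hypothetical, chosen only to show what "exhibits negative trust behaviors" means: whether a rating Bob publishes can make Alice's computed rating for Carol worse than if Bob had stayed silent.

```python
# Toy illustration of the "negative trust" property described above.
# Both aggregation rules are hypothetical, not the actual WoT/Advogato code.

def rating_with_negative_trust(peer_ratings):
    """Average all peers' ratings; a very negative rating drags the
    result down, so the metric exhibits negative trust behaviors."""
    if not peer_ratings:
        return 0.0
    return sum(peer_ratings) / len(peer_ratings)

def rating_without_negative_trust(peer_ratings):
    """Count only positive ratings; a peer can decline to help Carol,
    but can never make her rating worse than not rating her at all."""
    positive = [r for r in peer_ratings if r > 0]
    if not positive:
        return 0.0
    return sum(positive) / len(positive)

# Alice computes a rating for Carol from others' published ratings.
others = [60, 40]  # ratings for Carol from Alice's other contacts

# Bob abstains vs. Bob publishes -100:
baseline = rating_with_negative_trust(others)           # 50.0
with_bob = rating_with_negative_trust(others + [-100])  # 0.0
assert with_bob < baseline  # Bob's rating made Carol worse off

# Under the positive-only rule, no rating Bob can publish hurts Carol:
assert rating_without_negative_trust(others + [-100]) == \
       rating_without_negative_trust(others)
```

The second rule is only an illustration of the *property*; Advogato itself achieves it differently, via a maximum-flow computation over the trust graph.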
The current bootstrapping mechanism relies on negative trust. A newly
solved captcha has to yield some amount of trust, or nobody will see the
new poster's messages and he won't gain any permanent trust. Which means
that if he starts spamming, he should be blocked when somebody, or some
number of somebodies, say he is spamming, and *NOT* only when the person
he bootstrapped through says he is spamming. If we do the latter, then a
spammer would simply announce through idle Freetalk instances: if the
user does not visit the node, he won't mark the spammer as a spammer,
and therefore nobody will be able to mark him down except by marking
down the node he bootstrapped through.

What is the alternative? AFAICS making the node through which the
introduction takes place personally responsible is not going to work.
That leaves:
- The trust gained from solving a captcha could be limited in duration
and number of messages. After a certain number of messages or a certain
time (whichever comes first), it could disappear.
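A minimal sketch of that expiring-captcha-trust idea, in Python. The class name, the quota, and the lifetime are all assumptions for illustration, not Freetalk's actual API or numbers:

```python
import time

# Hypothetical sketch of the proposal above: trust granted for a solved
# captcha expires after a message quota or a time limit, whichever comes
# first.  Both limits are invented for illustration.
CAPTCHA_TRUST_MAX_MESSAGES = 20              # assumed message quota
CAPTCHA_TRUST_MAX_AGE_SECS = 7 * 24 * 3600   # assumed one-week lifetime

class CaptchaTrust:
    def __init__(self, granted_at=None):
        self.granted_at = time.time() if granted_at is None else granted_at
        self.messages_seen = 0

    def record_message(self):
        """Count one message posted under this provisional trust."""
        self.messages_seen += 1

    def is_valid(self, now=None):
        """Provisional trust holds until either limit is hit."""
        now = time.time() if now is None else now
        if self.messages_seen >= CAPTCHA_TRUST_MAX_MESSAGES:
            return False  # quota exhausted
        if now - self.granted_at >= CAPTCHA_TRUST_MAX_AGE_SECS:
            return False  # too old
        return True
```

Once `is_valid()` turns false, the identity is visible only through whatever permanent trust it has earned in the meantime, so a spammer who announces through idle instances loses his audience automatically instead of requiring someone to mark him down.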
