On Friday 08 May 2009 02:12:21 Evan Daniel wrote:
> On Thu, May 7, 2009 at 6:33 PM, Matthew Toseland
> <toad at amphibian.dyndns.org> wrote:
> > On Thursday 07 May 2009 21:32:42 Evan Daniel wrote:
> >> On Thu, May 7, 2009 at 2:02 PM, Thomas Sachau <mail at tommyserver.de> wrote:
> >> > Evan Daniel schrieb:
> >> >> I don't have any specific ideas for how to choose whether to ignore
> >> >> identities, but I think you're making the problem much harder than it
> >> >> needs to be. The problem is that you need to prevent spam, but at the
> >> >> same time prevent malicious non-spammers from censoring identities who
> >> >> aren't spammers. Fortunately, there is a well documented algorithm
> >> >> for doing this: the Advogato trust metric.
> >> >>
> >> >> The WoT documentation claims it is based upon the Advogato trust
> >> >> metric. (Brief discussion: http://www.advogato.org/trust-metric.html
> >> >> Full paper: http://www.levien.com/thesis/compact.pdf ) I think this
> >> >> is wonderful, as I think there is much to recommend the Advogato
> >> >> metric (and I pushed for it early on in the WoT discussions).
> >> >> However, my understanding of the paper and what is actually
> >> >> implemented is that the WoT code does not actually implement it.
> >> >> Before I go into detail, I should point out that I haven't read the
> >> >> WoT code and am not fully up to date on the documentation and
> >> >> discussions; if I'm way off base here, I apologize.
> >> >
> >> > I think you are:
> >> >
> >> > The advogato idea may be nice (I did not read it myself) if you have
> >> > exactly 1 trustlist for everything. But xor wants to implement 1
> >> > trustlist for every app, as people may act differently, e.g. on
> >> > filesharing than on forums or while publishing freesites. You
> >> > basically don't want to censor someone just because he tries to
> >> > disturb filesharing while he may be trying to bring in good
> >> > arguments in forum discussions about it.
> >> > And I don't think that advogato will help here, right?
> >>
> >> There are two questions here. The first question is, given a set of
> >> identities and their trust lists, how do you compute the trust for an
> >> identity the user has not rated? The second question is, how do you
> >> determine what trust lists to use in which contexts? The two
> >> questions are basically orthogonal.
> >>
> >> I'm not certain about the contexts issue; Toad raised some good
> >> points, and while I don't fully agree with him, it's more complicated
> >> than I first thought. I may have more to say on that subject later.
> >>
> >> Within a context, however, the computation algorithm matters. The
> >> Advogato idea is very nice, and imho much better than the current WoT
> >> or FMS answers. You should really read their simple explanation page.
> >> It's really not that complicated; the only reasons I'm not fully
> >> explaining it here are that it's hard to do without diagrams, and
> >> that they already do a good job of it.
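
(Interjecting a sketch here, since the paragraph above deliberately
skips the mechanics. The core of the Advogato metric is a max-flow
computation from a seed identity, with per-node capacities that shrink
with distance from the seed. The following is only my illustration -
the class, the capacity schedule and everything else in it are
invented, and the flow step itself is elided - but it shows the shape
of the computation:)

    import java.util.*;

    // Illustrative sketch only, not WoT code. All names and the
    // capacity schedule are invented; see Levien's paper for the real
    // node-splitting max-flow construction.
    class AdvogatoSketch {
        // Capacity by distance from the seed: nodes nearer the seed
        // may "vouch" more identities into the accepted set.
        static final int[] CAPACITY = { 800, 200, 50, 12, 4, 2, 1 };

        // Trust edges: who certifies whom.
        final Map<String, List<String>> certs = new HashMap<>();

        Set<String> accepted(String seed) {
            // Step 1: BFS from the seed to find each node's distance.
            Map<String, Integer> dist = new HashMap<>();
            Deque<String> queue = new ArrayDeque<>();
            dist.put(seed, 0);
            queue.add(seed);
            while (!queue.isEmpty()) {
                String id = queue.poll();
                for (String next : certs.getOrDefault(id, List.of())) {
                    if (!dist.containsKey(next)) {
                        dist.put(next, dist.get(id) + 1);
                        queue.add(next);
                    }
                }
            }
            // Step 2 (elided here): the real metric runs max-flow from
            // the seed, giving each node capacity CAPACITY[distance],
            // and accepts exactly the nodes that receive flow. The flow
            // step is what bounds the damage: a node can never usher in
            // more identities than its own capacity, however many certs
            // it issues. This sketch just cuts off by distance instead.
            Set<String> result = new HashSet<>();
            for (Map.Entry<String, Integer> e : dist.entrySet())
                if (e.getValue() < CAPACITY.length)
                    result.add(e.getKey());
            return result;
        }
    }

The point of the capacity schedule is that a compromised certifier far
from the seed can only pull in a bounded number of bad identities.
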
> >
> > It's nice, but it doesn't work, because the only realistic way for
> > positive trust to be assigned is on the basis of posted messages, in
> > a purely casual way, and without the sort of permanent, universal
> > commitment that any pure-positive-trust scheme requires: if he spams
> > on any board, and I ever gave him trust and haven't changed it, then
> > *I AM GUILTY* and *I LOSE TRUST*, because that is the only way to
> > block the spam.
> 
> How is that different than the current situation?  Either the fact
> that he spams and you trust him means you lose trust because you're
> allowing the spam through, or somehow the spam gets stopped despite
> your trust -- which implies either that a lot of people have to update
> their trust lists before anything happens (and therefore the spam
> takes forever to stop), or that it doesn't take many people to censor
> an objectionable but non-spamming poster.
> 
> I agree, this is a bad thing.  I'm just not seeing that the WoT system
> is *that* much better.  It may be somewhat better, but the improvement
> comes at the cost of trading spam resistance against censorship
> ability, which I think is a fundamentally unavoidable trade-off.

So how do you solve the contexts problem? The only plausible way to add
trust is to do it on the basis of valid messages posted to the forums
that the user reads. If he posts nonsense to other forums, or even
introduces identities that spam other forums, the user adding trust
probably does not know about this, so it is problematic to hold him
responsible for it. In a positive-trust-only system this is unsolvable,
afaics.

Perhaps some form of feedback/ultimatum system? Users who are affected
by spam from an identity could send proof that the identity is a
spammer to the users they trust who trust that identity. If the proof
is valid, those who trust the identity can downgrade him within a
reasonable period; if they don't do this, they get downgraded
themselves.
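
Very roughly, and purely as a sketch - every class, method and constant
below is invented for illustration, none of this exists in WoT:

    import java.util.*;

    // Sketch of the feedback/ultimatum idea above; invented names.
    class UltimatumSketch {
        static final long GRACE_PERIOD_MS = 7L * 24 * 60 * 60 * 1000; // say, a week

        // trusterId -> deadline by which they must drop the proven spammer
        final Map<String, Long> pendingUltimatums = new HashMap<>();
        // trusterId -> identities they currently trust
        final Map<String, Set<String>> trustLists = new HashMap<>();
        // identities we have locally accepted proof against
        final Set<String> provenSpammers = new HashSet<>();

        // A proof would have to be independently checkable, e.g. spam
        // messages carrying the accused identity's signature.
        boolean verify(String accusedId, List<byte[]> signedSpam) {
            return !signedSpam.isEmpty(); // placeholder: really verify signatures
        }

        // We received a proof about an identity that trusterId vouches for.
        void onProof(String trusterId, String accusedId,
                     List<byte[]> signedSpam, long now) {
            if (!verify(accusedId, signedSpam)) return; // bogus accusation
            provenSpammers.add(accusedId);
            // Start the clock: the truster gets a grace period to react.
            pendingUltimatums.putIfAbsent(trusterId, now + GRACE_PERIOD_MS);
        }

        // Run periodically: anyone still vouching for a proven spammer
        // after the deadline gets downgraded themselves.
        void enforce(long now, Set<String> downgraded) {
            pendingUltimatums.entrySet().removeIf(e -> {
                if (now < e.getValue()) return false; // still in grace period
                Set<String> trusted =
                    trustLists.getOrDefault(e.getKey(), Set.of());
                if (trusted.stream().anyMatch(provenSpammers::contains))
                    downgraded.add(e.getKey());
                return true; // ultimatum resolved either way
            });
        }
    }

The hard part, of course, is what counts as a valid, unforgeable proof.
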
> 
> There's another reason I don't see this as a problem: I'm working from
> the assumption that if you can force a spammer to perform manual
> effort on par with the amount of spam he can send, then the problem
> *has been solved*.  The reason email spam and Frost spam are a problem
> is not that there are lots of spammers; there aren't.  It's that the
> spammers can send colossal amounts of spam.

Agreed. However, positive trust as currently envisaged does not have
this property, because spammers can gain trust by posting valid
messages and then use it to introduce spamming identities. Granted,
there is a limited capacity, but they can gain lots of trust by
posting, and can therefore send a lot of spam via their trusted
identities: the multiplier is still pretty good for the spammer,
although maybe not hideous.
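
To put invented numbers on that multiplier (all three figures below are
made up, just to show the shape of the calculation):

    // Back-of-envelope estimate of the effort multiplier; all numbers
    // here are invented for illustration.
    class SpamMultiplier {
        public static void main(String[] args) {
            int realPosts = 50;   // manual effort: valid messages posted
            int introduced = 10;  // identities the earned trust can introduce
            int spamEach = 200;   // spam each gets out before being marked
            int spamSent = introduced * spamEach; // 2000 spams
            System.out.println("spam per unit of real effort: "
                    + (double) spamSent / realPosts); // prints 40.0
        }
    }

Forty spams per hand-written message is not email-scale, but it is not
"effort on par with the spam sent" either.
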
> 
> The solution, imho, is mundane: if the occasional trusted identity
> starts a spam campaign, I mark them as a spammer.  This is optionally
> published, but can be ignored by others to maintain the positive trust
> aspects of the behavior.  Locally, it functions as a slightly stronger
> killfile: their messages get ignored, and their identity's trust
> capacity is forced to zero.
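
To pin down what we're comparing, I read the quoted proposal as roughly
the following (a sketch with invented names, not actual WoT code):

    import java.util.HashSet;
    import java.util.Set;

    // Sketch of the "slightly stronger killfile" described above.
    class LocalKillfile {
        private final Set<String> markedSpammers = new HashSet<>();

        void markAsSpammer(String identityId) {
            markedSpammers.add(identityId); // optionally also published
        }

        boolean shouldShowMessage(String authorId) {
            return !markedSpammers.contains(authorId); // ignored locally
        }

        // Applied during the trust computation: a marked identity keeps
        // its published trust but contributes no capacity of its own.
        int effectiveCapacity(String identityId, int computedCapacity) {
            return markedSpammers.contains(identityId) ? 0 : computedCapacity;
        }
    }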

That does not protect against a spammer's parent identity introducing
more spammers. IMHO it is important that if an identity trusts a lot of
spammers it gets downgraded - and that this be *easy* for the user.
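
One way to make that rule concrete and cheap to evaluate - again only a
sketch, with invented names and an arbitrary threshold:

    import java.util.Set;

    // Possible "easy for the user" rule: automatically distrust anyone
    // whose trust list contains too many identities we have already
    // marked as spammers. Threshold is invented and would need tuning.
    class SpammerParentRule {
        static final int THRESHOLD = 3;

        static boolean shouldDowngrade(Set<String> theirTrustList,
                                       Set<String> ourMarkedSpammers) {
            long marked = theirTrustList.stream()
                    .filter(ourMarkedSpammers::contains)
                    .count();
            return marked >= THRESHOLD;
        }
    }
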
> 
> In the context of the routing and data store algorithms, Freenet has a
> strong prejudice against alchemy and in favor of algorithms with
> properties that are both useful and provable from reasonable
> assumptions, even though they are not provably perfect.  Like routing,
> the generalized trust problem is non-trivial.  Advogato has such
> properties; the current WoT and FMS algorithms do not: they are
> alchemical.  In addition, the Advogato metric has a strong anecdotal
> success story in the form of the Advogato site (I've not been active
> on FMS/Freetalk recently enough to speak to them).  Why is alchemy
> acceptable here, but not in routing?

Because the provable metrics don't work for our scenario. At least they don't 
work given the current assumptions and formulations.