On Thursday 07 May 2009 10:23:51 Evan Daniel wrote:
> On Thu, May 7, 2009 at 4:00 AM, xor <xor at gmx.li> wrote:
> > On Thursday 07 May 2009 00:02:11 Evan Daniel wrote:
> >> The WoT documentation claims it is based upon the Advogato trust
> >> metric. (Brief discussion: http://www.advogato.org/trust-metric.html
> >> Full paper: http://www.levien.com/thesis/compact.pdf ) I think this
> >> is wonderful, as I think there is much to recommend the Advogato
> >> metric (and I pushed for it early on in the WoT discussions).
> >> However, my understanding of the paper and what is actually
> >> implemented is that the WoT code does not actually implement it.
> >
> > I must admit that I do not know whether its claim that it implements
> > Advogato is right or not. I have refactored the code but I have not
> > modified the trust calculation logic and have not checked whether it is
> > Advogato or not. Someone should probably do that.
> >
> >> I don't have any specific ideas for how to choose whether to ignore
> >> identities, but I think you're making the problem much harder than it
> >> needs to be.
> >
> > Why exactly? Your post is nice but I do not see how it answers my
> > question. The general problem my post is about: new identities are
> > obtained by taking them from the trust lists of known identities. An
> > attacker could therefore put 1,000,000 identities in his trust list to
> > fill up your database and slow down WoT. Therefore, a decision has to
> > be made about when NOT to import new identities from someone's trust
> > list. In the current implementation, that is when he has a negative
> > score.
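[A minimal sketch of the import rule described above; the function and variable names are mine, not the actual WoT code. The point is only that a trust list is fetched from an owner whose score is non-negative, so an attacker with a negative score cannot flood the database.]

```python
# Hypothetical sketch of the current import rule: pull new identities
# from an owner's trust list only while that owner's score is >= 0.

def should_import_from(score):
    """Should we fetch new identities from this owner's trust list?"""
    return score >= 0

scores = {"alice": 10, "mallory": -3}
trust_lists = {
    "alice": ["carol", "dave"],
    # the flooding attack: a huge trust list full of throwaway identities
    "mallory": ["spam%d" % i for i in range(1000)],
}

known = set(scores)
for owner, listed in trust_lists.items():
    if should_import_from(scores[owner]):
        known.update(listed)
# mallory's list is skipped, so the spam identities never enter the database
```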
> >
> > As I've pointed out, in the future there will be MULTIPLE webs of
> > trust for different contexts - Freetalk, filesharing, identity
> > introduction (you can get a trust value from someone in that context
> > when you solve a captcha he has published) - so the question now is:
> > which context(s) shall be used to decide when to stop importing new
> > identities from someone's trust list?
> 
> I have not examined the WoT code.  However, the Advogato metric has
> two attributes that I don't think the current WoT method has: no
> negative trust behavior (if there is a trust rating Bob can assign to
> Carol such that Alice will trust Carol less than if Bob had not
> assigned a rating, that's a negative trust behavior), and a
> mathematical proof as to the upper limit on the quantity of spammer
> nodes that get trusted.
> 
> The Advogato metric is *specifically* designed to handle the case of
> the attacker creating millions of accounts.  In that case, his success
> is bounded (linear with modest constant) by the number of confused
> nodes -- that is, legitimate nodes that have (incorrectly) marked his
> accounts as legitimate.  If you look at the flow computation, it
> follows that for nodes for which the computed trust value is zero, you
> don't have to bother downloading their trust lists, so the number of
> such lists you download is similarly well controlled.
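[A simplified sketch of the flow computation described above, not the real Advogato or WoT code. Each identity is split into an "in" and an "out" half joined by an edge whose capacity shrinks with distance from the seed; certification edges get effectively unlimited capacity, and each "out" half feeds one unit into a virtual sink. An identity is accepted iff its unit edge to the sink is saturated, so a confused node can promote at most (its own capacity - 1) spammer accounts - the linear bound Evan mentions.]

```python
# Simplified Advogato-style flow metric (Edmonds-Karp max flow on the
# node-split graph).  Names and level capacities are illustrative.
from collections import defaultdict, deque

def advogato_accepted(certs, seed, level_caps):
    """certs: {identity: [identities it certifies]};
    level_caps: node capacity per BFS level from the seed."""
    # BFS distance from the seed determines each node's capacity.
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in certs.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)

    INF = 10 ** 9
    cap = defaultdict(int)                 # residual capacities
    for u, d in dist.items():
        c = level_caps[d] if d < len(level_caps) else 0
        cap[(u, "in"), (u, "out")] += c    # node capacity
        cap[(u, "out"), "sink"] += 1       # one unit = "accepted"
        for v in certs.get(u, []):
            cap[(u, "out"), (v, "in")] += INF

    adj = defaultdict(set)
    for (a, b) in list(cap):
        adj[a].add(b)
        adj[b].add(a)                      # residual (reverse) edges

    source = (seed, "in")
    while True:                            # augmenting paths via BFS
        parent = {source: None}
        queue = deque([source])
        while queue and "sink" not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[u, v] > 0:
                    parent[v] = u
                    queue.append(v)
        if "sink" not in parent:
            break
        path, v = [], "sink"
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[e] for e in path)
        for (a, b) in path:
            cap[a, b] -= bottleneck
            cap[b, a] += bottleneck

    # accepted iff the unit edge to the sink carries flow (residual 0)
    return {u for u in dist if cap[(u, "out"), "sink"] == 0}

# bob is confused and certifies 5 spam accounts, but his capacity is 2,
# so at most one spammer is accepted:
certs = {"seed": ["alice", "bob"],
         "bob": ["spam%d" % i for i in range(5)]}
accepted = advogato_accepted(certs, "seed", [4, 2, 1])
```

The zero-trust shortcut Evan describes falls out directly: identities outside `accepted` carry no flow, so their trust lists never need to be downloaded.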
> 
> As for contexts, why should the same identity be treated differently
> in different contexts?  If the person is (believed to be) a spammer in
> one context, is there any reason to trust them in some other context?
> I suppose I don't really understand the purpose of having different
> contexts if your goal is only to filter out spammers.  Wasn't part of
> the point of the modular approach of WoT that different applications
> could share trust lists, thus preventing users from having to mark
> trust values for the same identities several times?

Because treating them as the same unfairly penalises those who trust an 
identity in one context and do not know about that identity's misdeeds in 
another context: the only way to stop the spam is then to not trust them at 
all, which results in a schism.

But this is not just on a whole-application level (Freetalk vs filesharing). 
It also works on a board-by-board level.

IMHO the solution is to allow only positive trust for per-context (message) 
trust, but to give trust list trust a tentative flag. If set, the tentative 
rating is overridden by any hard (non-tentative) opinions - the hard opinions 
that result from marking down spammers and from identifying that an identity 
trusts lots of spammers.

The attack then is for a bad guy to chat on the boards but set a hard trust 
list trust of 0 for everyone. This attack should be detectable.
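[A minimal sketch of the tentative-flag idea; the function and the rating representation are mine, not an agreed design. A tentative trust-list rating is honoured only until any hard (non-tentative) opinion exists for the same identity, at which point the hard opinions win.]

```python
# Hypothetical resolution rule: hard opinions override tentative ones.

def effective_trust(ratings):
    """ratings: list of (value, tentative) pairs for one identity."""
    hard = [v for v, tentative in ratings if not tentative]
    if hard:
        # any hard opinion overrides all tentative ratings
        return sum(hard) / len(hard)
    if ratings:
        return sum(v for v, _ in ratings) / len(ratings)
    return 0

effective_trust([(50, True)])                  # -> 50.0
effective_trust([(50, True), (-100, False)])   # -> -100.0 (hard wins)
```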
> 
> Evan Daniel
