On Thursday 07 May 2009 11:23:51 Evan Daniel wrote:
> On Thu, May 7, 2009 at 4:00 AM, xor <xor at gmx.li> wrote:
> > On Thursday 07 May 2009 00:02:11 Evan Daniel wrote:
> >> The WoT documentation claims it is based upon the Advogato trust
> >> metric.  (Brief discussion: http://www.advogato.org/trust-metric.html
> >> Full paper: http://www.levien.com/thesis/compact.pdf )  I think this
> >> is wonderful, as I think there is much to recommend the Advogato
> >> metric (and I pushed for it early on in the WoT discussions).
> >> However, my understanding of the paper and what is actually
> >> implemented is that the WoT code does not actually implement it.
> >
> > I must admit that I do not know whether its claim that it implements
> > Advogato is right or not. I have refactored the code but I have not
> > modified the trust calculation logic and have not checked whether it is
> > Advogato or not. Someone should probably do that.
> >
> >> I don't have any specific ideas for how to choose whether to ignore
> >> identities, but I think you're making the problem much harder than it
> >> needs to be.
> >
> > Why exactly? Your post is nice but I do not see how it answers my
> > question. The general problem my post is about: New identities are
> > obtained by taking them from trust lists of known identities. An attacker
> > therefore could put 1000000 identities in his trust list to fill up your
> > database and slow down WoT. Therefore, a decision has to be made when to
> > NOT import new identities from someone's trust list. In the current
> > implementation, it is when he has a negative score.
> >
> > As I've pointed out, in the future there will be MULTIPLE webs of trust,
> > for different contexts - Freetalk, Filesharing, Identity-Introduction
> > (you can get a trust value from someone in that context when you solve a
> > captcha he has published), so the question now is: Which context(s) shall
> > be used to decide when to NOT import new identities from someone's trust
> > list anymore?
>
> I have not examined the WoT code.  However, the Advogato metric has
> two attributes that I don't think the current WoT method has: no
> negative trust behavior (if there is a trust rating Bob can assign to
> Carol such that Alice will trust Carol less than if Bob had not
> assigned a rating, that's a negative trust behavior), and a
> mathematical proof as to the upper limit on the quantity of spammer
> nodes that get trusted.
>
> The Advogato metric is *specifically* designed to handle the case of
> the attacker creating millions of accounts.  In that case, his success
> is bounded (linear with modest constant) by the number of confused
> nodes -- that is, legitimate nodes that have (incorrectly) marked his
> accounts as legitimate.  If you look at the flow computation, it
> follows that for nodes for which the computed trust value is zero, you
> don't have to bother downloading their trust lists, so the number of
> such lists you download is similarly well controlled.
>

Well, I'm no mathematician, so I cannot comment on that. I think toad's 
argument sounds reasonable though: that there must be a way to distrust someone 
if the original person who trusted him disappears.

I do not plan to change the trust logic on my own; I consider myself more of a 
programmer who can implement things than a designer of algorithms, etc.
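
That said, Evan's practical point - that trust lists only need to be downloaded 
for identities whose computed trust is above zero - is easy enough to picture. 
Below is a minimal sketch of that rule with made-up class and method names; it 
is NOT the real WoT code, and it only shows the reachability part, not the 
capacity-constrained flow computation that gives Advogato its linear bound on 
accepted spammer identities.

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

// Hypothetical sketch - these are NOT the real WoT classes.
class TrustListFetcher {
    // Maps an identity to the identities it has given positive trust to
    // (its trust list, restricted to positive values).
    private final Map<String, List<String>> positivelyTrusted = new HashMap<>();

    // Only identities reachable from our own identity through positive
    // trust ever get their trust lists downloaded. An attacker's million
    // puppet identities are never fetched unless some identity we already
    // reach has (incorrectly) given him positive trust.
    Set<String> identitiesToFetch(String ownIdentity) {
        Set<String> reachable = new HashSet<>();
        Queue<String> queue = new ArrayDeque<>();
        reachable.add(ownIdentity);
        queue.add(ownIdentity);
        while (!queue.isEmpty()) {
            String current = queue.poll();
            for (String trustee : positivelyTrusted.getOrDefault(current, List.of())) {
                if (reachable.add(trustee))
                    queue.add(trustee);
            }
        }
        return reachable;
    }
}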

> As for contexts, why should the same identity be treated differently
> in different contexts?  If the person is (believed to be) a spammer in
> one context, is there any reason to trust them in some other context?

Yes, consider the following realistic and simple case:
Someone has CRAP files on his computer. Just random crap; there is so much crap 
on the internet and on hard disks. He exports that crap to the future 
"filesharing" WoT client application. For putting only crap into the filesharing 
context, people will distrust him there.

YET he might still have useful things to say on Freetalk! And he should not be 
prevented from that!

Both things are very likely to occur at once: It is very likely that someone 
has crap files on his hard disk, as we all know, and it is very likely that 
someone has something useful to say even though his files are crap.

The inability to do one thing properly does not mean that you suck at 
everything!
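
To make that concrete, here is a rough sketch of what per-context scores could 
look like. The class and method names are made up for illustration and are not 
the actual WoT API; the default score for unrated identities is also just an 
assumption here.

import java.util.HashMap;
import java.util.Map;

// Illustration only: each identity keeps one score per context, so a bad
// "Filesharing" score does not hide the same identity on "Freetalk".
class ContextScores {
    private final Map<String, Integer> scoreByContext = new HashMap<>();

    void setScore(String context, int score) {
        scoreByContext.put(context, score);
    }

    // An unrated identity defaults to 0 here; the real default policy is
    // a separate design question.
    boolean isVisibleIn(String context) {
        return scoreByContext.getOrDefault(context, 0) >= 0;
    }
}

class Example {
    public static void main(String[] args) {
        ContextScores crapUploader = new ContextScores();
        crapUploader.setScore("Filesharing", -50);
        crapUploader.setScore("Freetalk", 30);
        System.out.println(crapUploader.isVisibleIn("Filesharing")); // false
        System.out.println(crapUploader.isVisibleIn("Freetalk"));    // true
    }
}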

> I suppose I don't really understand the purpose of having different
> contexts if your goal is only to filter out spammers.  Wasn't part of
> the point of the modular approach of WoT that different applications
> could share trust lists, thus preventing users from having to mark
> trust values for the same identities several times?

Identities can still be shared, which is a large benefit: If someone has proven 
his engagement in the open source community on Freetalk by spending lots and 
lots of posts commenting on code or whatever, you might want to trust his 
binary files in the filesharing context...

Further, consider the case where the web of trust ITSELF becomes the 
application: One day we might not publish new binary versions of Freenet but 
instead upload patches to a git repository which is hosted IN Freenet. The new 
code lines could then receive trust values from reviewers, and as soon as all 
code has sufficient trust, the nodes would compile it on their own and install 
it. That's only possible if one can create sub-webs.
