On Friday 22 May 2009 22:38:43 Evan Daniel wrote:
> On Fri, May 22, 2009 at 5:03 PM, Matthew Toseland
> <t...@amphibian.dyndns.org> wrote:
> >> >>>> - Making CAPTCHA announcement provide some form of short-lived trust, 
> >> >>>> so if
> >> >>>> the newly introduced identity doesn't get some trust it goes away. 
> >> >>>> This may
> >> >>>> also be implemented.
> >> >>> This would require adding trust to new people. As you can see with
> >> >>> FMS, having everyone spend daily time on trust list adjustments is
> >> >>> just an idea which won't come true. So this would mean that every
> >> >>> identity that is not very active would lose all trust and have to
> >> >>> introduce itself again. More pain and work, resulting in fewer users.
> >> >>
> >> >> See my proposal (other mail in this thread, also discussed
> >> >> previously).  Short-range but long-lived trust is a better substitute,
> >> >> imho.
> >> >
> >> > I would call it censorship, because those who see you through CAPTCHA
> >> > announcement can themselves decide what happens:
> >> > - if they don't give you trust, most won't see you => you are lost,
> >> >   i.e. censored
> >> > - if they give you trust, everyone will see you => not censored
> >> >
> >> > This would give a small group of people the chance to censor newly
> >> > announced identities (and the group may be different for every
> >> > identity).
> >>
> >> Then use a more permissive capacity:distance function.  There is no
> >> requirement that you use a shorter range function, or that you use the
> >> same function as everyone else.  IMHO, the default should be somewhat
> >> shorter range, in an attempt to balance the number of people that see
> >> new identities.  As you observe, too few leads to censorship
> >> possibilities (out of malice or just plain laziness).  Too many means
> >> that an identity with CAPTCHA trust only can spam and have everyone
> >> see that spam, which provides the spammer a reasonably efficient way
> >> to send spam.
> >
> > So it's a tradeoff which can be easily configured by the user.
> 
> Yes.  As always, intelligent defaults are important; they should be
> applicable to most newbies, but don't need to meet everyone's needs.
> And if you change it unwisely, you hurt only yourself (other people
> don't even notice).
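To make the configurable tradeoff concrete: in an Advogato-style computation
the capacity:distance function is just a table mapping rank (distance from
the trust tree owner) to capacity, so it can be a per-user setting. A minimal
sketch in Java, with purely hypothetical names rather than the actual WoT
classes:

    // Hypothetical sketch, not the actual WoT code: capacity as a function
    // of rank (distance from the trust tree owner), selectable per user.
    public class CapacityFunction {
        private final int[] capacityByRank;   // index = rank, value = capacity

        public CapacityFunction(int[] capacityByRank) {
            this.capacityByRank = capacityByRank.clone();
        }

        /** Capacity granted at the given rank; 0 beyond the end of the table. */
        public int capacityAt(int rank) {
            if (rank < 0) throw new IllegalArgumentException("rank must be >= 0");
            return rank < capacityByRank.length ? capacityByRank[rank] : 0;
        }

        // Two illustrative defaults the user could switch between (or edit);
        // the numbers are made up, not tuned values.
        public static final CapacityFunction PERMISSIVE =
                new CapacityFunction(new int[] { 100, 40, 16, 6, 2, 1 });
        public static final CapacityFunction SHORT_RANGE =
                new CapacityFunction(new int[] { 100, 20, 4, 1 });

        public static void main(String[] args) {
            for (int rank = 0; rank <= 6; rank++)
                System.out.println("rank " + rank
                        + ": permissive=" + PERMISSIVE.capacityAt(rank)
                        + ", short-range=" + SHORT_RANGE.capacityAt(rank));
        }
    }

The point is only that the table is data: "use a more permissive function" is
a configuration change, not a protocol change, so defaults can be tuned
without breaking anyone who chooses differently.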
> 
> >
> > I agree with pretty much all of the above, but the medium-term concern is 
> > that we will start to have to worry about those who trust spammers, and 
> > those who trust those who trust spammers. By eliminating negative trust, 
> > Advogato forces us either to tolerate a certain (and unclear) amount of 
> > spam, or to spend a lot of effort hunting down those who trust spammers, 
> > resulting in massive collateral damage.
> >> >>
> >> >> Also, what do you mean by review of identities added from others?
> >> >> Surely you don't mean that I should have to manually review every
> >> >> poster?  Isn't the whole point of using a wot in the first place that
> >> >> I can get good trust estimates of people I've never seen before?
> >> >
> >> > In FMS there is currently a simple page where the latest added
> >> > identities are listed, along with how they were added. So if you get
> >> > many spamming identities and they were all added by one trusted peer,
> >> > you can just remove his trust list trust and those new spamming
> >> > identities won't reach you.
> >
> > We want to make it easy, or nobody will do it. Poring over your trust list 
> > day after day is not most people's idea of fun.
> >
> > There are three approaches, given positive trust only. Depending on the 
> > level of effort exerted by the spammer, we move from one tradeoff between 
> > spam resistance and censorship resistance to the next. IMHO the last stage 
> > involves significant risk of censorship or at least collateral damage, 
> > while obviously having the strongest spam resistance.
> >
> > The first approach is to mark spammers as spammers, and to limit the 
> > capacity of trusted identities to create new spammers by, for example, 
> > limiting the number of identities that can change in a trust list in one 
> > day. This means that everyone will have to mark all the spam identities as 
> > spam, much as in Frost with the Alice bot. It will deter newbies, but it 
> > should be usable for the determined. Note that it is *essential* on a 
> > positive-trust-only network that our spam markings override others' 
> > positive trust levels.
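A minimal sketch of the per-day limit in the first approach, assuming a
simple sliding 24-hour window and illustrative names (this is not the actual
WoT API):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical sketch: cap how many new identities a single trust list
    // may introduce to us per day; additions over the cap are ignored.
    public class TrustListRateLimiter {
        private final int maxNewIdentitiesPerDay;
        private final Deque<Instant> recentAdditions = new ArrayDeque<>();

        public TrustListRateLimiter(int maxNewIdentitiesPerDay) {
            this.maxNewIdentitiesPerDay = maxNewIdentitiesPerDay;
        }

        /** Returns true if this trust list may introduce another identity now. */
        public synchronized boolean tryRecordAddition(Instant now) {
            // Drop additions that have fallen out of the 24-hour window.
            Instant cutoff = now.minus(Duration.ofHours(24));
            while (!recentAdditions.isEmpty()
                    && recentAdditions.peekFirst().isBefore(cutoff))
                recentAdditions.removeFirst();
            if (recentAdditions.size() >= maxNewIdentitiesPerDay)
                return false;   // over the cap: ignore this addition
            recentAdditions.addLast(now);
            return true;
        }
    }

Something like this is what keeps the number of spam identities any one
trusted spammer can push at a reader bounded, which is what makes the "mark
them one by one" workload tolerable.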
> >
> > The second approach is that when we mark an identity as spam, WoT realises 
> > that an identity trusting that spammer also trusts a lot of other spammers, 
> > and proposes that we mark the parent identity as a spammer, at least for 
> > purposes of trust list trust. Hopefully this will be enough. The cost for 
> > every user will be to mark a few spammer posts as spam, and then accept 
> > WoT's recommendation to mark the parent as a spammer. "A few" will be an 
> > arbitrary parameter that will have to be argued about: a higher value means 
> > less chance of marking non-spammers as spammers, but at the cost of seeing 
> > more spam.
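A minimal sketch of the heuristic in the second approach, assuming we can
enumerate each identity's trust list and the identities the local user has
marked as spam (illustrative names only, not the actual WoT classes):

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: if an identity we give trust list trust to has
    // introduced at least 'threshold' identities that we ourselves marked
    // as spam, propose demoting it (for trust list purposes only).
    public class SpamParentDetector {
        private final int threshold;   // the arbitrary "a few" parameter

        public SpamParentDetector(int threshold) {
            this.threshold = threshold;
        }

        /**
         * @param trustLists   identity -> identities listed in its trust list
         * @param markedAsSpam identities the local user has marked as spam
         * @return identities whose trust list trust WoT should propose removing
         */
        public Set<String> suggestDemotions(Map<String, Set<String>> trustLists,
                                            Set<String> markedAsSpam) {
            Set<String> suggestions = new HashSet<>();
            for (Map.Entry<String, Set<String>> e : trustLists.entrySet()) {
                long spammersIntroduced = e.getValue().stream()
                        .filter(markedAsSpam::contains)
                        .count();
                if (spammersIntroduced >= threshold)
                    suggestions.add(e.getKey());
            }
            return suggestions;
        }
    }

The user still confirms each suggestion, so a threshold that is set too low
costs a dialog box rather than silently censoring anyone.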
> >
> > The third approach is that when we mark the parent identity as spam, WoT 
> > suggests also marking those who trust the parent identity as spammers for 
> > purposes of trust list trust (if we trust them; if we don't, it's not our 
> > problem; we are trying to optimise the network *for other people*, 
> > particularly for newbies, here). We can try to be polite about this using 
> > ultimatums, since it's likely they didn't deliberately choose to trust the 
> > spam-parent knowing he is a spam-parent - but if they don't respond within 
> > some period by removing him from their trust list, we will have to reduce 
> > our trust in them. This will cause collateral damage and may be abused for 
> > censorship, which might be even more dangerous than the current problems on 
> > FMS. However, if there is a LOT of spam, or if we want the network to be 
> > fairly spam-free for newbies, the first two options are insufficient. :|
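And a minimal sketch of the "polite ultimatum" in the third approach,
assuming we periodically re-check who still trusts a known spam-parent
(again, hypothetical names):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: when a trustee is found still trusting a known
    // spam-parent, start a grace period; only reduce our trust in them if
    // they have not dropped the spam-parent by the time it expires.
    public class UltimatumTracker {
        private final Duration gracePeriod;
        private final Map<String, Instant> pending = new HashMap<>();  // trustee -> issued

        public UltimatumTracker(Duration gracePeriod) {
            this.gracePeriod = gracePeriod;
        }

        /** Called when we notice 'trustee' still trusts a known spam-parent. */
        public void noteViolation(String trustee, Instant now) {
            pending.putIfAbsent(trustee, now);
        }

        /** Called when 'trustee' has removed the spam-parent from their list. */
        public void noteResolved(String trustee) {
            pending.remove(trustee);
        }

        /** True once the grace period has expired and we should cut their trust. */
        public boolean shouldReduceTrust(String trustee, Instant now) {
            Instant issued = pending.get(trustee);
            return issued != null && now.isAfter(issued.plus(gracePeriod));
        }
    }

The grace period is what keeps this from being instant collateral damage; how
long it should be is as arguable as the "a few" parameter above.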
> 
> I'm not certain you're correct about this.  The first two methods are,
> imho, sufficient to limit spam to levels that are annoying, but where
> the network is still usable.  Even if they download a bunch of
> messages, a new user only has to click the "spam" button once per
> spamming identity, and those identities are limited in a well-defined
> manner (linear, with a modest coefficient, in the number of dummy
> identities the spammer is willing to maintain).
> 
> My suspicion is that if all they can aspire to be is a nuisance, the
> spammers won't be nearly as interested.  There is much more appeal to
> being able to DoS a board or the whole network than being able to
> mildly annoy the users.  So if we limit the amount of damage they can
> do to a sane level, the actual amount of damage done will be
> noticeably less than that limit.

Can we agree that we should implement the second option in WoT then?
> 
> There is another possible optimization we could do (I've just thought
> of it, and I'm not entirely certain that it works or that I like it).
> Suppose that Alice trusts Bob trusts Carol (legitimate but confused)
> trusts Sam (a spammer), and Alice is busy computing her trust list.
> Bob has (correctly) marked Sam as a spammer.  In the basic
> implementation, Alice will accept Sam.  Bob may think that Carol is
> normally correct (and not malicious), and be unwilling to zero out his
> trust list trust for her.  However, since this is a flow computation,
> we can place an added restriction: when Alice calculates trust, flow
> passing through Bob may not arrive at Sam even if there are
> intermediate nodes.  If Alice can find an alternate route for flow to
> go from Alice to Carol or Sam, she will accept Sam.
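To pin down what that restriction means, here is a minimal sketch of just the
reachability constraint, ignoring capacities entirely (so it is not the real
flow computation, which is exactly where Ford-Fulkerson might need
non-trivial changes); all names are illustrative:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: 'target' is accepted only if it is reachable from
    // the tree owner without routing through any node that has itself marked
    // 'target' as a spammer.
    public class ConstrainedReachability {
        public static boolean accepts(String owner, String target,
                                      Map<String, Set<String>> trustEdges,
                                      Map<String, Set<String>> spamMarks) {
            Deque<String> stack = new ArrayDeque<>();
            Set<String> visited = new HashSet<>();
            stack.push(owner);
            while (!stack.isEmpty()) {
                String node = stack.pop();
                if (!visited.add(node)) continue;
                // The added restriction: never route through a node that has
                // marked the target as a spammer.
                if (spamMarks.getOrDefault(node, Set.of()).contains(target))
                    continue;
                if (node.equals(target)) return true;
                for (String next : trustEdges.getOrDefault(node, Set.of()))
                    stack.push(next);
            }
            return false;
        }

        public static void main(String[] args) {
            // Alice -> Bob -> Carol -> Sam, and Bob has marked Sam as a spammer.
            Map<String, Set<String>> edges = Map.of(
                    "Alice", Set.of("Bob"),
                    "Bob", Set.of("Carol"),
                    "Carol", Set.of("Sam"));
            Map<String, Set<String>> marks = Map.of("Bob", Set.of("Sam"));
            System.out.println(accepts("Alice", "Sam", edges, marks));   // false
            System.out.println(accepts("Alice", "Carol", edges, marks)); // true
        }
    }

Note the exclusion is per target: Bob is skipped only when the target is
someone Bob has marked, so Alice still reaches Carol through Bob, and nobody
is accepted less often than they would be without Bob's trust list.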
> 
> This modification is in some ways a negative trust feature, since
> Bob's marking of Sam as a spammer is different from silence.  However,
> it doesn't let Bob censor anyone he couldn't censor by removing Carol
> from his trust list.  Under no circumstances will Alice using Bob's
> trust list result in fewer people being accepted than not using Bob's
> trust list.  It does mean that Bob, as a member of the evil cabal of
> default trust list members for newbies, can (with the unanimous help
> of the cabal) censor identities in a more subtle fashion than simply
> not trusting anyone.
> 
> The caveats: this is a big enough change that it needs a close
> re-examination of the security proof (I'm pretty sure it's still
> valid, but I'm not certain).  If it sounds like an interesting idea, I
> can do that.  Also, I don't think it's compatible with Ford-Fulkerson
> or the other simple flow capacity algorithms.  The changes required
> might be non-trivial, possibly to the point of changing the running
> time.  Again, I could look at this in detail if it's interesting
> enough to warrant it.

Worth investigating IMHO.
> 
> Evan Daniel

