On Friday 22 May 2009 15:39:06 Evan Daniel wrote:
> On Fri, May 22, 2009 at 8:17 AM, Matthew Toseland
> <toad at amphibian.dyndns.org> wrote:
> > On Friday 22 May 2009 08:17:55 bbackde at googlemail.com wrote:
> >> Isn't his point that the users just won't maintain the trust lists?
> >> I thought that is the problem that he meant... how can Advogato help
> >> us here?
> >
> > Advogato with only positive trust introduces a different tradeoff, which is
> > still a major PITA to maintain, but maybe less of one:
> > - Spammers only disappear when YOU mark them as spammers, or ALL the people
> > you trust do. Right now they disappear when the majority, from the point of
> > view of your position on the WoT, mark them as spammers (more or less).
> 
> When they *fail to mark them as trusted*.  It's an important
> distinction, as it means that in order for the spammer to do anything
> they first have to *manually* build trust.  If an identity suddenly
> starts spamming, only people that originally marked it as trusted have
> to change their trust lists in order to stop them.
> 
> > - If you mark a spammer as positive because he posts useful content on one
> > board, and you don't read the boards he spams, you are likely to get
> > marked as a spammer yourself.
> 
> Depends how militant people are.  I suspect in practice people won't
> do this unless you trust a lot of spammers... in which case they have
> a point.  (This is also a case for distinguishing message trust from
> trust list trust; while Advogato doesn't do this, the security proof
> extends to cover it without trouble.)  You can take an in-between
> step: if Alice marks both Bob and Carol as trusted, and Bob marks
> Carol a spammer, Alice's software notices and alerts Alice, and offers
> to show Alice recent messages from Carol from other boards.
> (Algorithmically, publishing "Sam is a spammer" is no different from
> not publishing anything about Sam, but it makes some nice things
> possible from a UI standpoint.)  This may well get most of the benefit
> of ultimatums with lower complexity.

Right, this is something I keep forgetting to mention. When marking a user as a 
spammer, the UI should ask the user about people who trust that spammer and 
other spammers. However, it does encourage militancy, doesn't it? It certainly 
doesn't solve the problem the way that ultimatums do...
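To make the conflict check concrete, here's a rough sketch of what the UI hook could do (Python; the identity names and data structures are invented for illustration, this is not the actual WoT plugin code):

```python
# Hypothetical sketch of the conflict alert described above: when two
# identities you trust disagree (one marks the other as a spammer), the
# UI should notice and surface it so you can review recent messages.

def find_trust_conflicts(my_trusted, spammer_marks):
    """Return (accuser, accused) pairs where both identities are on my
    trust list but the accuser has marked the accused as a spammer.

    my_trusted: set of identity ids I trust directly.
    spammer_marks: dict mapping identity id -> set of ids it calls spammers.
    """
    conflicts = []
    for accuser in my_trusted:
        for accused in spammer_marks.get(accuser, set()):
            if accused in my_trusted:
                conflicts.append((accuser, accused))
    return conflicts

# Example: Alice trusts Bob and Carol; Bob marks Carol as a spammer.
trusted = {"bob", "carol"}
marks = {"bob": {"carol"}}
print(find_trust_conflicts(trusted, marks))  # [('bob', 'carol')]
```

The same scan could run whenever a trust list update arrives, feeding the "show Alice recent messages from Carol" dialog Evan suggests.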
> 
> > - If a spammer doesn't spam himself, but gains trust through posting useful
> > content on various boards and then spends this trust by trusting spam
> > identities, it will be necessary to give him zero message list trust. Again
> > this has serious issues with collateral damage, depending on how
> > trigger-happy people are and how much of a problem it is for newbies to see
> > spam.
> >
> > Technologically, this requires:
> > - Changing WoT to only support positive trust. This is more or less a
> > one-line change.
> 
> If all you want is positive trust only, yes.  If you want the security
> proof, it requires using the network flow algorithm as specified in
> the paper, which is a bit more complex.  IMHO, fussing with the
> algorithm in ways that don't let you apply the security proof is just
> trading one set of alchemy for another -- it might help, but I don't
> think it would be wise.

I was under the impression that WoT already used Advogato's algorithm, apart 
from supporting negative trust values (and therefore negative trust).
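For reference, the network-flow computation the security proof depends on looks roughly like this (a toy simplification of the Advogato paper's construction: per-node capacities by distance from the root, node splitting, max flow to a supersink; the capacity numbers and identity names are made up, and this is not the WoT plugin's code):

```python
# Simplified Advogato-style acceptance: only identities that receive
# flow from the evaluating root are accepted; unreachable identities
# (e.g. a spammer nobody certifies) get nothing.
from collections import deque

def advogato_accepted(trust, root, capacities):
    """trust: dict id -> iterable of certified ids.
    capacities: per-node capacity indexed by distance from root."""
    # 1. Breadth-first distances from the root along trust edges.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in trust.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)

    # 2. Node splitting: in->out carries the node's capacity; each out
    #    node leaks 1 unit to the supersink, which is "being accepted".
    cap = {}
    def add_edge(a, b, c):
        cap.setdefault(a, {})[b] = cap.get(a, {}).get(b, 0) + c
        cap.setdefault(b, {}).setdefault(a, 0)  # residual edge

    SINK = "__sink__"
    for u, d in dist.items():
        c = capacities[min(d, len(capacities) - 1)]
        add_edge(("in", u), ("out", u), c)
        add_edge(("out", u), SINK, 1)
        for v in trust.get(u, ()):
            if v in dist:
                add_edge(("out", u), ("in", v), c)

    # 3. Edmonds-Karp max flow from the root's in-node to the supersink.
    source = ("in", root)
    def bfs_path():
        parent = {source: None}
        q = deque([source])
        while q:
            u = q.popleft()
            if u == SINK:
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u))
                    u = parent[u]
                return path[::-1]
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        return None

    while True:
        path = bfs_path()
        if path is None:
            break
        bottleneck = min(cap[a][b] for a, b in path)
        for a, b in path:
            cap[a][b] -= bottleneck
            cap[b][a] += bottleneck

    # A node is accepted iff its out->SINK edge carried flow.
    return {u for u in dist if cap.get(SINK, {}).get(("out", u), 0) > 0}

web = {"root": ["alice"], "alice": ["bob"], "spammer": ["sock"]}
print(advogato_accepted(web, "root", [4, 2, 1]))  # spammer, sock excluded
```

The point Evan makes holds here: dropping the flow computation but keeping "positive-only" semantics loses the proof, so any shortcut needs to be justified on its own.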
> 
> > - Making sure that my local ratings always override those given by
> > others, so I can mark an identity as spam and never see it again. Dunno
> > if this is currently implemented.
> > - Making CAPTCHA announcement provide some form of short-lived trust, so if
> > the newly introduced identity doesn't get some trust it goes away. This may
> > also be implemented.
> 
> My proposal: there are two levels of trust (implementation starts
> exactly as per Advogato levels).  The lower level is CAPTCHA trust;
> the higher is manually set only.  (This extends to multiple manual
> levels without loss of generality.)  First, the algorithm is run
> normally on the manual trust level.  Then, the algorithm is re-run on
> the CAPTCHA trust level, with modification: identities that received
> no manual trust have severely limited capacity (perhaps as low as 1),
> and the general set of capacity vs distance from root is changed to
> not go as deep.

Not bad. Currently CAPTCHA identities are seen by everyone, IIRC; this may be 
a desirable (if not scalable) property.
> 
> The first part means that the spammer can't chain identities *at all*
> before getting the top one manually trusted.  The second means that
> identities that only solved a CAPTCHA will only be seen by a small
> number of people -- ie they can't spam everyone.  The exact numbers
> for flow vs depth would need some tuning for both trust levels,
> obviously.  You want enough people to see new identities that they
> will receive manual trust.

Yeah... so far p0s has resisted any suggestion that any identity won't be seen 
by everyone... Also, the anti-censorship lobby would probably object to any new 
identity not being seen by them.
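Evan's two-pass idea can be sketched like this (a toy using budgeted breadth-first reach in place of the real flow computation; all capacity numbers and identity names are placeholders, not a real implementation):

```python
# Two-pass trust: first over manual trust edges, then over CAPTCHA
# edges with a shallower schedule, where identities holding no manual
# trust get capacity 1. Here "capacity" crudely limits how many of a
# node's certifications are followed at each depth.
from collections import deque

def budgeted_reach(edges, root, capacities, override=None):
    """Accept identities reachable within the per-depth budget.
    override: optional dict id -> reduced capacity (CAPTCHA-only ids)."""
    accepted = {root}
    queue = deque([(root, 0)])
    while queue:
        u, d = queue.popleft()
        if d >= len(capacities):
            continue  # schedule doesn't go this deep
        budget = capacities[d]
        if override and u in override:
            budget = min(budget, override[u])
        for v in edges.get(u, [])[:budget]:
            if v not in accepted:
                accepted.add(v)
                queue.append((v, d + 1))
    return accepted

manual = {"root": ["alice"], "alice": ["bob"]}
captcha = {"root": ["alice"], "alice": ["bob"], "bob": ["newbie"],
           "newbie": ["sock1", "sock2"]}
seen_manual = budgeted_reach(manual, "root", [4, 2, 1])
no_manual = {i: 1 for i in captcha if i not in seen_manual}
seen_captcha = budgeted_reach(captcha, "root", [2, 1, 1], override=no_manual)
# "newbie" is visible to root, but its chained socks are not: a
# CAPTCHA-only identity can't multiply itself before earning manual trust.
```

That reproduces the property Evan wants: a fresh CAPTCHA identity is seen by some people (so it can earn manual trust), but cannot chain further identities at all.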
> 
> > - Limits on identity churn in any trust list (1 new identity per day,
> > averaged over a week, or something), to ensure that an attacker who has
> > trust cannot constantly add new identities.
> 
> This is clearly required (to prevent a spammer multiplying a limited
> trust to introduce many throw-away identities, each of which spams up
> to the message count limits).  However, it does present a new problem:
> because trust capacities are limited, it provides a far more effective
> DOS of the CAPTCHA queue than simply answering all CAPTCHAs.  I'm not
> sure how to handle that.  A DOS that prevents new users from joining
> is particularly vicious.

Not sure I follow. You would have to answer all the CAPTCHAs in order to reach 
the limit, no? And anyway, these are CAPTCHAs; they are not the same as really 
trusted identities, as e.g. in your algorithm above. So it is solvable.
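The churn limit itself is just a sliding-window rate limiter. A minimal sketch ("1 new identity per day, averaged over a week"; the class and method names are invented for illustration):

```python
# Reject trust-list additions once more than MAX_NEW new identities
# have appeared within the trailing one-week window.
from collections import deque
import time

class ChurnLimitedTrustList:
    WINDOW = 7 * 24 * 3600   # one week, in seconds
    MAX_NEW = 7              # 1 per day, averaged over the window

    def __init__(self):
        self.identities = set()
        self.additions = deque()  # timestamps of recent additions

    def try_add(self, identity, now=None):
        now = time.time() if now is None else now
        # Drop additions that have aged out of the window.
        while self.additions and now - self.additions[0] > self.WINDOW:
            self.additions.popleft()
        if identity in self.identities:
            return True  # already present, not churn
        if len(self.additions) >= self.MAX_NEW:
            return False  # over the churn limit; retry later
        self.identities.add(identity)
        self.additions.append(now)
        return True

tl = ChurnLimitedTrustList()
accepted = [tl.try_add(f"id{i}", now=i * 3600) for i in range(10)]
# First 7 additions (all within one day here) succeed; the rest are refused.
```

Note this limits each trust list independently, which is why Evan's worry is about the CAPTCHA queue rather than the limiter itself.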
> 
> > It probably also requires:
> > - Some indication of which trusted identities trust a spammer when you
> > mark an identity as a spammer.
> > - Sending an ultimatum to the trusted identity that trusts more than one
> > spammer: stop trusting spammers or we'll stop trusting you. This would have
> > to be answered in a reasonable time, hence is a problem for those not
> > constantly at their nodes.
> >
> > evanbd has argued that the latter two measures are unnecessary, and that the
> > limited number of spam identities that any one identity can introduce will
> > make the problem manageable. An attacker who just introduces via a CAPTCHA
> > will presumably only get short-lived trust, and if he only posts spam he
> > won't get any positive trust. An attacker who contributes to boards to gain
> > trust to create spamming sub-identities with has to do manual work to gain
> > and maintain reputation among some sub-community. A newbie will not see old
> > captcha-based spammers, only new ones, and those spam identities that the
> > attacker's main, positive identity links to. He will have to manually block
> > each such identity, because somebody is bound to have positive trust for the
> > spammer parent identity.
> 
> Well...  I argue that they *may* be unnecessary.  Specifically, I
> think we can defer implementation until there are problems that
> warrant it.

In terms of spam, that would be the day we release. :)
> 
> > In terms of UI, if evanbd is right, all we need is a button to mark the
> > poster of a message as a spammer (and get rid of all messages from them),
> > and a small amount of automatic trust when answering a message (part of
> > the UI so it can be disabled). Only those users who know how, and care
> > enough, would actually change the trust for the spammer-parent, and in
> > any case doing so would only affect them and contribute nothing to the
> > community.
> >
> > But if he is wrong, or if an attacker is sufficiently determined, we
> > will also need some way to detect spam-parents, and send them ultimatums.
> 
> I'm not certain that's the right way to grant manual trust.  (Or
> perhaps we need more than one level of it.)  You don't want a spammer
> to be able to get manual trust by posting a message to the test board
> consisting only of "Hi, can anyone see this?" -- they can do that
> automatically.  I think there should be a pair of buttons, "mark
> spammer" and "mark non-spammer."

No, by posting _interesting_ messages. That is, if somebody replies to a 
message, its poster gains some trust by default, but there is a box you can 
uncheck ... we could even have this turned off by default on test boards?

Nobody will ever bother to click "mark non-spammer", IMHO.
> 
> Evan Daniel