On Thu, May 14, 2009 at 6:14 PM, Matthew Toseland
<toad at amphibian.dyndns.org> wrote:
> On Thursday 14 May 2009 17:33:29 Evan Daniel wrote:
>> On Thu, May 14, 2009 at 11:32 AM, Matthew Toseland
>> <toad at amphibian.dyndns.org> wrote:
>> >> IMHO these are not solutions to the contexts problem -- it merely
>> >> shifts the balance between allowing spam and allowing censorship.  In
>> >> one case, the attacker can build trust in one context and use it to
>> >> spam a different context.  In the other case, he can build trust in
>> >> one context and use it to censor in another.
>> >>
>> >> Right now, the only good answer I see to contexts is to make them
>> >> fully independent.  Perhaps I missed it, but I don't recall a
>> >> discussion of how any other option would work in any detail -- the
>> >> alternative under consideration appears to be to treat everything as
>> >> one unified context.  I'm not necessarily against that, but the
>> >> logical conclusion is that you're responsible for paying attention to
>> >> everything someone you've trusted does in all contexts in which you
>> >> trust them -- which, for a unified context, means everywhere.
>> >
>> > Having to bootstrap on each forum would be _bad_. Totally impractical.
>> >
>> > What about ultimatums? "these" above refers to WoT with negative trust, right?
>> > Ultimatums: I mark somebody as a spammer, I demand that my peers mark him as
>> > a spammer, they evaluate the situation; if they don't mark the spammer as a
>> > spammer, then I mark them as a spammer.
>>
>> Right.  So all the forums go in a single context.  I don't see how you
>> can usefully define two different contexts such that trust is common
>> to them but responsibility is not.  I think the right solution (at
>> least for now) is one context per application.  So you have to
>> bootstrap into the forums app, and into the filesharing app, and into
>> the mail app, but not per-forum.  Otherwise I have to be able to
>> evaluate possible spam in an application I may not have installed.
>>
>> Ultimatums sound like a reasonable approach.  Though if Alice sends
>> Bob an ultimatum about Bob's trust for Sam, and Bob does not act, I'm
>> inclined to think that Alice's client should continue downloading
>> Bob's messages, but cease publishing a trust rating for Bob. ?After
>> all, Bob might just be lazy, in which case his trust list is worthless
>> but his messages aren't.
>
> Agreed, I have no problem with not reducing message trust in this case.
>>
>> >> >> Also, I don't see how this attack is specific to the Advogato metric.
>> >> >> It works equally well in WoT / FMS.  The only thing stopping it there
>> >> >> is users manually examining each other's trust lists to look for such
>> >> >> things.  If you assume equally vigilant users with Advogato the attack
>> >> >> is irrelevant.
>> >> >
>> >> > It is solvable with positive trust, because the spammer will gain trust from
>> >> > posting messages, and lose it by spamming. The second party will likely be
>> >> > the stronger in most cases, hence we get a zero or worse outcome.
>> >>
>> >> Which second party?
>> >
>> > The group of users affected by the spam. The first party is the group of users
>> > who are not affected by the spam but appreciate the spammer's messages to a
>> > forum and therefore give him trust.
>>
>> Ah.  You meant "solvable with *negative* trust" then?
>
> Yes, sorry.

There's a potential problem here (in the negative trust version): if
you post good stuff in a popular forum, and spam in a smaller one, the
fact that the influence of any one person is bounded means that you
might keep your overall trust rating positive.  XKCD describes the
problem well:  http://xkcd.com/325/
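
To make the arithmetic concrete (a Python toy, every number invented):
with each rater's influence capped at one unit, fifty happy readers in
the popular forum comfortably outweigh five spam reports from the small
one, so the spammer's overall standing stays positive.

    CAP = 1.0                   # max influence any single rater can exert
    positive = 50 * CAP         # popular-forum readers rating the id up
    negative = 5 * CAP          # small-forum victims rating it down
    print(positive - negative)  # 45.0 -- net trust still strongly positive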

I continue to think that the contexts problem is nontrivial, though
different systems will have different tradeoffs.  Fundamentally, I
think that if trust and responsibility apply to different regions,
there are potential problems.

>>
>> >> OK.
>> >>
>> >> I think you really mean "Pure positive only works *perfectly* if every
>> >> user..."
>> >
>> > Hmm, maybe.
>> >
>> >> We don't need a perfect system that stops all spam and
>> >> nothing else.  Any system will have some failings.  Minimizing those
>> >> failings should be a design goal, but knowing where we expect those
>> >> failings to be, and placing them where we want them, is also an
>> >> important goal.
>> >>
>> >> Or, looked at another way:  We have ample evidence that people will
>> >> abuse the new identity creation process to post spam.  That is a
>> >> problem worth expending significant effort to solve.  Do we have
>> >> evidence that spammers will actually exert per-identity manual effort
>> >> in order to send problematic amounts of spam?
>> >
>> > I don't see why it would be per-identity.
>>
>> Per fake identity that will be sending spam.  If they can spend manual
>> effort to create a trusted id, and then create unlimited fake ids
>> bootstrapped from that one to spam with, that's a problem.  If the
>> amount of effort they have to spend is linear with the number of ids
>> sending spam, that's not a problem, regardless of whether the effort
>> is spent on the many spamming ids or the single bootstrap id.
>
> Because there is a limit on churn, and one spamming identity can be blocked
> trivially. Does this eliminate the need for reducing trust in an identity
> that trusts spammers (hence ultimatums)?

It certainly reduces it; probably to the point where it is a feature
to add later, if it looks like it is needed.

>>
>> >> Personally, I'm not
>> >> worried about there being a little bit of spam; I'm worried about it
>> >> overwhelming the discussions and making the system unusable.  My
>> >> intuition tells me that we need defenses against such attacks, but
>> >> that they can be fairly minimal -- provided the defenses against
>> >> new-identity labor-free spam are strong.
>> >
>> > So you consider the problem to be contained if a spammer can only trust 20
>> > identities, everyone reads his messages, everyone reads his sub-identities'
>> > spams, and then individually blacklists them? Those targeted by the spam would
>> > then not trust the spammer's main identity, in order to not see his
>> > sub-identities' spam, but those who talk to him would, as they don't see them.
>> > Maybe you're right, if we have some severe constraints on changes to trust
>> > lists?
>>
>> Yes, I consider that a solution, for two reasons.  First, that's a
>> manageable amount of spam.  Annoyingly high, but manageable.  Second,
>> I think that if the amount of spam they can send is limited to that
>> level, they (generally) won't bother in the first place, and so in
>> practice you will only rarely see even that level of spam.
>
> Right, because it's not an effective DoS. Which means we've won, more or less,
> although it will continue to annoy newbies, and put them off Freenet, and
> thus may be a useful attack.

Hopefully, tweaking the default settings, especially the churn limits
and default trust list, will keep it from being *too* annoying.
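
Something along these lines is what I have in mind; every name and
number below is invented, just to show which knobs I expect to matter:

    # Hypothetical defaults -- not the real WoT configuration.
    DEFAULTS = {
        "trust_list_max_size": 20,        # identities one id may trust
        "untrusts_per_week": 1,           # churn limit on trust lists
        "recent_removal_window_days": 7,  # how long a removal still costs capacity
    }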

>>
>> Constraining trust list changes is definitely required.  I would start
>> with a system that says that if Alice is calculating her trust web,
>> and Bob has recently removed Sam from his trust list, then when Alice
>> is propagating trust through Bob's node, she starts by requiring that
>> one unit of flow go to Sam before anyone else on the list, but that
>> unit of flow has no effect on Alice's computation of Sam's
>> trustworthiness.  Or, equivalently, Bob's connection to the supersink
>> is sized as (1 + number of recently un-trusted identities) rather than
>> the normal constant 1.  That allows Bob to manage his trust list
>> promptly, but if he removes people from it, he is prevented from
>> adding new people to replace them too rapidly.  The definition of
>> "recent" could be tweaked as well: possibly only one identity gets
>> removed from the recent list per time period, rather than a fixed
>> window during which any removed id counts as recently removed.
>
> Ok.
>
> At this point I think we are in sufficient agreement that I would like some
> input from p0s...

There are some details that need work still, especially the mechanism
behind introductions, but I concur.
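
For concreteness, here is a rough sketch (Python; all names invented,
real WoT code would look quite different) of the supersink sizing rule
I described above: one unit of capacity for the identity itself, plus
one per recently removed identity, so removals soak up capacity that
could otherwise admit replacements.

    import time

    RECENT_WINDOW = 7 * 24 * 3600   # assumption: a removal stays "recent" for a week

    class Identity:
        def __init__(self, name):
            self.name = name
            self.trusted = set()    # identities currently on the trust list
            self.removals = []      # timestamps of recent un-trust events

        def untrust(self, other, now=None):
            now = time.time() if now is None else now
            self.trusted.discard(other)
            self.removals.append(now)

    def supersink_capacity(identity, now=None):
        # 1 for the identity itself, plus 1 per recent removal: each
        # removal drains a unit of flow to the sink that would
        # otherwise let a new (possibly spamming) id into the web.
        now = time.time() if now is None else now
        recent = [t for t in identity.removals if now - t < RECENT_WINDOW]
        return 1 + len(recent)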

Evan Daniel
