Evan Daniel wrote:
> On Fri, May 22, 2009 at 10:48 AM, Thomas Sachau <m...@tommyserver.de> wrote:
>> Matthew Toseland wrote:
>>> On Friday 22 May 2009 08:17:55 bbac...@googlemail.com wrote:
>>>> Isn't his point that the users just won't maintain the trust lists?
>>>> I thought that was the problem he meant... how can Advogato help us
>>>> here?
>>> Advogato with only positive trust introduces a different tradeoff, which is
>>> still a major PITA to maintain, but maybe less of one:
>>> - Spammers only disappear when YOU mark them as spammers, or ALL the people
>>> you trust do. Right now they disappear when the majority, from the point of
>>> view of your position on the WoT, mark them as spammers (more or less).
>> So this is a disadvantage of Advogato compared to the current FMS
>> implementation. With FMS, only a majority of trusted identities need
>> to mark a spammer down; with Advogato, either all of his original
>> trusters need to mark him down, or you need to do it yourself (mark
>> down either him or everyone who trusts him). So:
>> FMS 1:0 Advogato
> 
> As I've said repeatedly, I believe there is a fundamental tradeoff
> between spam resistance and censorship resistance, in the limiting
> case.  (It's obviously possible to have an algorithm that does poorly
> at both.)  Advogato *might* let more spam through than FMS.  There is
> no proof provided for how much spam FMS lets through; with Advogato it
> is limited in a provable manner.  Alchemy is a bad thing.  FMS
> definitely makes censorship by the mob easier.  By my count, that's a
> win for Advogato on both.

I don't think you can separate "spam resistance" from "censorship
resistance", for a simple reason: who defines what sort of action or
text is spam? Many people may largely agree that some sort of action or
content is spam, but others could call the reduced visibility
censorship.
And I don't see any alchemy in the current trust system of FMS. If
something is alchemy and not clear, please point it out, but please
name the exact point.
And FMS does not make "censorship by a mob" easier, simply because you
should select the people you trust yourself, just like you should
select your friends and darknet peers yourself. If you let others do it
for you, don't complain about what follows (such as a censored view of
FMS).
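For what it's worth, the provable bound Evan refers to can be
illustrated with a much-simplified, positive-trust-only propagation.
This is not the real Advogato metric (which is based on a network-flow
computation); the function name and the per-level capacities below are
made up purely for illustration:

```python
from collections import deque

def accepted_identities(trust_lists, seed, level_capacity):
    """Breadth-first positive-trust propagation from `seed`.

    trust_lists: dict mapping identity -> list of identities it trusts.
    level_capacity: level_capacity[d] is the maximum number of
        identities accepted at distance d from the seed.
    Returns the set of accepted (visible) identities.
    """
    accepted = {seed}
    frontier = deque([(seed, 0)])
    counts = [0] * len(level_capacity)
    while frontier:
        ident, dist = frontier.popleft()
        if dist + 1 >= len(level_capacity):
            continue  # beyond the trust horizon: nothing propagates further
        for peer in trust_lists.get(ident, []):
            if peer in accepted:
                continue
            if counts[dist + 1] >= level_capacity[dist + 1]:
                break  # capacity at this distance exhausted
            accepted.add(peer)
            counts[dist + 1] += 1
            frontier.append((peer, dist + 1))
    return accepted

lists = {"me": ["alice", "bob"],
         "alice": ["carol"],
         "bob": ["spam1", "spam2", "spam3"]}
# With capacities [1, 10, 2], only one of bob's three spam identities
# fits through the level-2 capacity, no matter how many he lists.
print(sorted(accepted_identities(lists, "me", [1, 10, 2])))
```

The point of the bound: however many identities a single truster marks,
at most `level_capacity[d]` identities are accepted at distance `d`
from *you*, so the spam one misbehaving trusted node can inject is
limited in a provable way.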

>>> - Making CAPTCHA announcement provide some form of short-lived trust, so if
>>> the newly introduced identity doesn't get some trust it goes away. This may
>>> also be implemented.
>> This would require adding trust to new people. As you can see with
>> FMS, having everyone spend daily time on trust-list adjustments is
>> just an idea which won't come true. So this would mean that every
>> identity that is not very active will lose all trust and would have
>> to introduce itself again. More pain and work, resulting in fewer
>> users.
> 
> See my proposal (other mail in this thread, also discussed
> previously).  Short-range but long-lived trust is a better substitute,
> imho.

I would call it censorship, because those who see you through CAPTCHA
announcement can themselves decide what happens:
- if they don't give you trust, most won't see you => you are lost,
  i.e. censored
- if they give you trust, everyone will see you => not censored

This would give a small group of people the chance to censor newly
announced identities (and the group may be different for every
identity).
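The scheme under discussion ("short-lived trust from CAPTCHA
announcement, gone unless someone grants real trust") could look
roughly like this; the class, the field names, and the TTL value are
assumptions for illustration, not actual WoT/FMS code:

```python
ANNOUNCE_TRUST_TTL = 7 * 24 * 3600  # grace period in seconds (assumed value)

class Identity:
    def __init__(self, name, announced_at):
        self.name = name
        self.announced_at = announced_at
        self.granted_trust = False   # set once some truster marks it trusted

    def is_visible(self, now):
        if self.granted_trust:
            return True              # real trust: stays visible
        # otherwise only the short-lived announcement trust applies
        return now - self.announced_at < ANNOUNCE_TRUST_TTL

newbie = Identity("newbie", announced_at=0)
assert newbie.is_visible(3600)                # visible during the grace period
assert not newbie.is_visible(8 * 24 * 3600)   # period over, no trust: gone
newbie.granted_trust = True
assert newbie.is_visible(8 * 24 * 3600)       # granted trust keeps it visible
```

This makes the objection concrete: whoever sees the announcement holds
the only lever that flips `granted_trust` before the timer expires.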

> 
>>> - Limits on identity churn in any trust list (1 new identity per day 
>>> averaged
>>> over a week or something), to ensure that an attacker who has trust cannot
>>> constantly add new identities.
>> Only the number of identities added because of solved CAPTCHAs should
>> be limited, and the limit should be the number of announced CAPTCHAs,
>> which should be more than around 1/day. For identities added by
>> others, you will always do some basic review, maybe with an advanced
>> option to remove all identities introduced by a specific identity.
> 
> No, that is not sufficient.  The attack that makes it necessary (which
> is also possible on FMS, btw -- in fact it's even more effective) is
> fairly simple.  A spammer gets a dummy identity trusted manually by
> other people.  He then has it mark several other identities as
> trustworthy.  Those identities then spam as much as is worthwhile
> (limited only by message count limits, basically).  The spammer then
> removes them from the dummy identity published trust list, adds new
> spamming identities, and repeats.  The result is that his one main
> identity can get a large quantity of spam through, even though it can
> only mark a limited number of child identities trusted and each of
> them can only send a limited amount of spam.
> 
> Also, what do you mean by review of identities added from others?
> Surely you don't mean that I should have to manually review every
> poster?  Isn't the whole point of using a wot in the first place that
> I can get good trust estimates of people I've never seen before?
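The rotation attack Evan describes, and the churn limit Matthew
proposed against it, can be sketched as follows; all names and the
weekly quota are hypothetical:

```python
MAX_NEW_PER_WEEK = 7  # e.g. "1 new identity per day averaged over a week"

class TrustList:
    def __init__(self):
        self.members = set()
        self.added_this_week = 0  # reset weekly by the node (not shown here)

    def add(self, identity):
        if self.added_this_week >= MAX_NEW_PER_WEEK:
            return False          # churn limit reached: addition rejected
        self.members.add(identity)
        self.added_this_week += 1
        return True

    def remove(self, identity):
        self.members.discard(identity)  # removals are free; additions are not

dummy = TrustList()
# The attacker rotates spam identities in and out of the dummy's
# published trust list; without the limit, total spam throughput is
# unbounded. With it, only the first 7 additions this week succeed.
results = []
for i in range(20):
    results.append(dummy.add(f"spam-{i}"))
    dummy.remove(f"spam-{i}")     # removing does not refund the quota
assert results.count(True) == MAX_NEW_PER_WEEK
```

The design point is that the quota counts *additions*, not current list
size, so removing a child identity does not free up room for a fresh
spammer.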

In FMS, there is currently a simple page where the latest added
identities are listed, along with how they were added. So if you get
many spamming identities and they were all added by one trusted peer,
just remove his trust-list trust and all those new spamming identities
won't reach you.
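That mitigation can be sketched like this; the function and the data
layout are hypothetical, not the actual FMS implementation:

```python
def visible_identities(trustlist_trusted, introductions):
    """introductions: dict mapping identity -> set of peers whose trust
    lists introduced it. An identity stays visible only while at least
    one of its introducers still has trust-list trust."""
    return {ident for ident, introducers in introductions.items()
            if introducers & trustlist_trusted}

intros = {
    "alice":  {"peer1"},
    "spam-1": {"peer2"},
    "spam-2": {"peer2"},
}
assert visible_identities({"peer1", "peer2"}, intros) \
    == {"alice", "spam-1", "spam-2"}
# Revoking peer2's trust-list trust removes both spam identities at once:
assert visible_identities({"peer1"}, intros) == {"alice"}
```

So the review effort is per misbehaving *peer*, not per spam identity,
which is what makes the "simple page" workable in practice.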


_______________________________________________
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
